The dragons of the Internet underworld come to life—and meet their fate—in a new animated video from Intel. This video, now playing on YouTube, offers a colorful look at the serious topics of cloud security and compliance.

 

The video is set in a medieval cloud-scape, where dragons that threaten cloud security are battled by an IT hero. The video makes the point that the cloud is great but it’s “not all rainbows and unicorns.” It comes with serious security issues that must be addressed.

 

A key takeaway is that cloud security is not an oxymoron. What’s more, it can begin at the hardware level with a root of trust based on Intel® Trusted Execution Technology (Intel® TXT). That’s where you first fight off dragons like malicious rootkits and other malware, and help keep them controlled throughout your extended cloud deployment.

 

To watch our IT action hero slay the dragons of the cloud, check out our Intel Trusted Compute Pools video—and learn how Intel TXT can help you control your piece of the cloud.

Customers looking to deploy mission-critical applications on Intel Xeon servers always consider IBM one of the key hardware partners. IBM offers a variety of system configurations and invests in both benchmarks and software partners. With these investments, IBM can differentiate its products and demonstrate their full capability to handle enterprise server workloads.

 

Last week, IBM published the latest in a series of benchmarks showcasing the performance of Intel’s Xeon E7 processors. This one is a very impressive 3 million transactions per minute TPC-C result, the highest performance result ever published on an x86-64 system. It also ranks fifth in both the TPC-C Top Ten performance results and the TPC-C Top Ten price/performance results for non-clustered systems. Housed in a 43U rack, this entire system configuration is well suited to enterprise database applications.

 

The IBM x3850 X5 achieved this result using IBM’s innovative MAX5 technology: a scalable, 1U memory expansion drawer. The drawer provides an additional 32 DIMM slots with a memory controller for added performance, and boosts scalability with a node controller for the x3850.

 

The TPC-C configuration above had a total of 3TB of memory (2TB in the server and 1TB in the IBM MAX5 for System x). IBM has previously published papers that show the effect of additional memory capacity on database performance. While those papers focus on in-memory database performance, the memory expansion can also increase the performance of other application workloads like web, file, virtualization, and cloud computing.

 

It is worth noting that in addition to the TPC-C benchmark, IBM also published a result that sets new records for 4-socket performance and overall price/performance on the TPC-E benchmark, which utilizes Microsoft SQL Server 2008 R2 Enterprise Edition configured with SSD storage. I mention this because there is always lively discussion regarding the merits of the TPC-C versus the TPC-E benchmarks and their relationship to actual production workloads.

 

I believe these are all great examples of the workload and performance capability of Intel’s Xeon E7 chips. IBM has demonstrated that when partners work collaboratively, it is possible to implement unique features that deliver additional capability to the customer. The tradeoff between CPU and memory has always provided the ability to tune configurations for database workloads. With these new benchmarks, IBM has validated those options on its x3850 X5 server.

By now you’ve heard from a LOT of us on building a cloud. You have probably heard us talk about performance, efficiency, and trust. But have you ever seen it done? If not, you should take a look at this new video: Intel® Cloud Builders Reference Architecture VMware vCloud™ Director Demo.

 

In this how-to video, our Cloud Builders team built a mini data center and then deployed an actual cloud environment within it.

 

While the host of the demo is an animated character, the configuration example is entirely real. Using actual screen captures and configuration samples, this video walks you through the process of creating a cloud based on VMware vSphere™, VMware vCloud Director™, VMware vShield™ Manager, and Intel® Xeon® processor-based hardware.

 

Featured hardware components include:

  • 4 Urbanna 2U 3.5 HDD Xeon DP Servers with CPU: Xeon DP Nehalem-EP X5570 FC-LGA8 2.93 GHz
  • 1 Timber Creek 2U Xeon DP Storage Server with CPU: Xeon DP Westmere-EP X5680 FC-LGA8 3.33GHz

 

Along the way, we captured lots of technical tips and tricks that you’ll find useful if you put our reference architecture into action—as we did in the video. Even if you’re just thinking about deploying a cloud, this video will provide valuable insights into the configuration process.

 

So, what’s your excuse now?

There isn't a hotter place for technology right now than Asia Pacific. The rate of adoption of new technologies is at a fever pitch, and given the population of this region, a 10% uptick in the market represents adding the equivalent of the US population to the connected world. And while the whole region is growing, the unique thing about APAC is that it consists of established markets (e.g. Korea, Australia & New Zealand), relatively established growth markets (e.g. India), and go-go emerging markets (e.g. Malaysia, Vietnam), each presenting unique challenges for market advancement.

 

All of this growth represents major business opportunities for new cloud computing services, placing an urgency on building the data center infrastructure to keep pace with emerging requirements. Within this context, Intel delivered two days of deep training on cloud computing in Penang this week to the leading data center managers from throughout the region as well as leading regional press. We brought along our friends from the Cloud Builders program to highlight the latest in Reference Architectures and talked a bit about the Open Data Center Alliance's vision for the cloud. In all, it was one of the most engaged crowds I've been around in a long time, and learning flowed to both ends of the conversation as customers shared their unique challenges and we shared the latest innovations in cloud.

 

Over the next few days I'll be providing some specific thoughts on trends and opportunities observed from the event...in the meantime, let me know if you've got any specific input or questions on how the cloud looks to play out in Asia.

A few months ago, I attempted to scare you with information on why data security is important for small business. Well, I'm back here today to try again. Did you know that...

 

 

Of course, as a small business owner you are well aware that information is the lifeblood of your company. But what you may not know is that investing in a real server to run as the backbone of your IT infrastructure is one of the best ways to keep your data safe and secure. You may ask, "But why should I spend money on a new Intel Xeon processor-based server when my desktop system's holding down the fort just fine?"

 

Well, there are some excellent reasons why Intel Xeon processor-based servers are the best choice to handle your company's data:

  • Servers are built to run 24/7, so you have access to data 24/7. They have more robust cooling systems, and may have an uninterruptible power supply (UPS) or dual power supplies, both features you won't find on a desktop.
  • They are validated on server operating systems like Microsoft Windows SBS. Why is that important? Well, if something goes wrong, you're guaranteed to get support from the vendor - but if you're running their software on a system that's not validated, you may be out of luck.
  • They support Error Correcting Code (ECC) memory which automatically checks and corrects memory errors - 99.9% of all memory errors, in fact.
  • They also support Intel Rapid Storage Technology and an array (no pun intended) of RAID configurations, so you can seamlessly store copies of data on additional hard drives. If a hard drive fails, you won't suffer data loss or system downtime.

 

Plus, they aren't as large an investment as you might think - often only a little more than a new desktop system. Still not convinced? Check out this video and watch Cori and me try to explain in lighthearted terms why a "real server" is the best choice to keep your small business data secure.

 

Download Now


ncsa.jpg

As one of the premier high-performance computing (HPC) research institutions in the U.S., the National Center for Supercomputing Applications (NCSA) provides HPC resources for a wide range of scientific applications, including many whose performance needs require highly parallel shared-memory architectures. NCSA recently replaced its previous shared-memory system, an SGI Altix* supercomputer based on the Intel® Itanium® 2 processor, with an SGI Altix UV supercomputer powered by the Intel® Xeon® processor 7500 series. NCSA says the new system, which it calls Ember, consumes half the power while delivering double the performance and nearly triple the memory capacity.


“Our expectation was that we’d see a baseline of maybe a 20 to 50 or 70 percent increase in application performance over the previous system,” explained John Towns, director of persistent infrastructure for NCSA. “What we found is that many applications are seeing a factor of two increase in performance, and sometimes much more.”


To read all about it, download our new NCSA business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

A couple of weeks ago, I had a chance to engage with cloud computing service providers, industry analysts, a few enterprise customers, and other cloud computing thought leaders at GigaOM’s Structure. The event was a golden opportunity to get a snapshot of the cloud computing industry. Here are a few “cloudy” themes that emerged from the event:

 

  • Cloud service providers aren’t basing their entire portfolio on selling services built on a multi-tenant public cloud infrastructure. In fact, it seems that most of the smaller cloud providers specialize in variations of helping enterprise IT with the management of their private cloud (such as managing various aspects of data center real estate, or, if outside the enterprise firewall, managing the enterprise’s cloud in a single-tenant environment). While you do hear cases of enterprises using the multi-tenant public cloud (such as CRM with salesforce.com), it seems that many cloud providers are essentially helping enterprise IT be more efficient with their own private cloud.

 

 

  • Many of these cloud providers face challenges similar to IT@Intel's private cloud challenges. During the event, Das Kamhout, our lead architect of the Intel IT cloud, led a session and addressed the challenge of Intel becoming an IT service provider for its many business units. Afterwards, many service providers commented that they face challenges similar to Intel IT’s. A few of the facts and figures he mentioned:
    • In 2010, Intel IT more than tripled its rate of virtualization in the Office and Enterprise environment from 12% to 42% and is on track to achieve its goal of 75%.
    • Intel IT reduced the time to provision new infrastructure services to 3 hours from 90 days by implementing an On Demand Self Service portal as part of its enterprise private cloud.
    • Intel IT learned how to predict future capacity for their IAAS and PAAS services by analyzing the ratio of compute resources that end-users asked for versus what they actually consumed.

  • Are we nearing an age where trusted computing for the cloud starts with the hardware? Perhaps one of the more interesting keynotes at the event was given by Simon Crosby, formerly CTO at Citrix and now start-up entrepreneur at Bromium.  One of his topics was how the majority of enterprise attacks occur via compromised enterprise clients. From the Intel point of view, what was particularly interesting was his discussion about how his start-up will potentially be using “hardware assist” technology, such as Intel TXT, that enhances virtualization on client devices to offer continuous endpoint protection.  For those who aren’t up to speed on Intel TXT - learn more from the video below (and note that many of the same principles in the data center with Intel TXT can also apply to Intel TXT on the enterprise client).



  • The cloud still needs transparency. At the event, Jason Waxman, Intel’s GM of High-Density Computing, engaged in a fireside keynote chat with Jason Hoffman, CTO of Joyent. One of the more interesting topics was how there is still a lack of transparency with the cloud around a bevy of important features and metrics, such as monitoring cost or security. In order for the industry to solve these problems, Jason Waxman talked about how the Open Data Center Alliance is spelling out requirements for cloud computing service providers so that users will be able to select and access service offerings based on standard, industry-accepted definitions. (To learn more, my colleague Raejeanne Skillern has an interesting article here on the importance of ODCA.) One potential output could be that there will eventually be different tiers of security that service providers can design to (such as bronze, silver, and gold security tiers) that end-users could specify when buying cloud services. To see this fireside chat with the two Jasons, please go here.

 

 

 

 

 

  • SSDs are a "no-brainer" for cloud infrastructures. One last trend at the event was the discussion around the performance and operational cost benefits of SSDs vs traditional hard drive disks. Many cloud service providers at the event talked about how it was a no-brainer to deploy SSDs in their infrastructure, even given the higher up-front costs for SSDs. In the video below (taken while at Structure) – I provided a little flavor about this emerging trend:

 

 

 

 

Thanks for reading. Comments or questions encouraged.

 

 

Justin Van Buren

Intel Marketing

Twitter: @jlvb2006

Last week from July 11th to July 14th, I attended my first Cisco Live event, and it was a great show!

 

Here are some of the highlights:

 

1. A flash mob kicked off Cisco Systems CEO John Chambers’ keynote, and this was surely a sign of the times. Enterprise IT is going social! In his keynote, John explained Cisco’s top 5 priorities: core, collaboration, data center/virtualization/cloud computing, video, and architectures. Of these priorities, Chambers zoomed in on video, highlighting that he expects video to make up 91 percent of all internet traffic in 2014. He anticipates video as the new platform for communication and IT. What pillars will dominate the way everyone will work? Chambers believes mobile, social, video and virtual.

 

2. Intel’s booth was well attended. Traffic to Intel’s speaking theatre doubled last year’s traffic! We invested in a professional emcee, and it paid off! We were very pleased with the turnout this year.

Bill presenting.jpg

3. Intel and Cisco hosted a party to celebrate the Cisco Unified Computing System’s 2nd birthday. The party featured a cake shaped like a server (and it was tasty, too!).

 

4. CEO of Cisco Systems, John Chambers, came to the Intel booth to present Allyson Tirado, one of Intel’s Business Development Managers, with an award for shattering sales records!  Allyson was both surprised and thrilled. It was a great way to send her off for sabbatical! Congrats Ally!

 

Award Presentation.JPG

5. I enjoyed Carlos Dominguez’s closing chat with William Shatner! Shatner joked about his career path to stardom, retaining creativity, distancing oneself from peer pressure, failure, and managing change. There were significant doses of colorful language and sarcasm throughout. It was hilarious, especially if that is your kind of humor!

 

I had a great time at Cisco Live this year and I learned a lot. I look forward to next year’s show!

Start your week right with the Data Center Download here in the Server Room! This post wraps up recent updates from the Server Room, the Cloud Builder Forum, and our Data Center Experts around the web. This is your chance to catch up on all of the blogs, podcasts, webcasts, and interesting items shared via Twitter from the previous week.

 

Here’s our wrap-up of the 1st  half of July:

In the blogs:

 

Bruno Domingues discussed Cloud  Computing and Capacity Planning for SaaS

 

Brian Yoshinaka gave us  the overview of Data  Center Management with the Cisco Nexus 2232TM & 10GBASE-T: Reporting from  Cisco Live 2011

 

Wally Pereira explained IT  Modernization & Data Migration: A Spectrum of Options

 

Brian Yoshinaka gave us  his notes Reporting  from Cisco Live! 2011: Energy Efficient Ethernet

 

Pauline Nist compared Benchmarks  vs. the Real World: Reality Check

 

Kibibi Moseley shared  with us Intel  Mission Critical Solutions @ Cisco Live 2011: World of Solutions

 

Bob Deutsche compared Cloud Lessons & LeMans Racing on Data  Center Knowledge

 

Mitchell Shults discussed Big  Memory, Big Data, and the Semantic Web

 

Raejeanne  Skillern shared The Service Catalog: Demystifying  Cloud on Data Center Knowledge

 

Broadcasted across the web:

 

In the latest Intel Chip Chat, Intel Fellow and Director of Interaction and Experience Research at Intel Labs, Genevieve Bell, discusses the connection we have with our devices and our interactions with them. Genevieve explains to Allyson Klein that the user experience creates a strong or weak bond with technology, from the smart TV to the smart car.

 

In the latest Conversations in the Cloud, Allyson Klein discusses NetApp’s new Ethernet Advantage program and two new reference architectures focused on unified networking for 10 GbE with NFS and iSCSI. Check out the Intel Cloud Builders site and the podcast on Unified Networking with NetApp’s Mike McNamara to learn more.

 

Last week in the Intel Cloud  Builders program, we saw a great webcast on Cloud  On-boarding from Citrix.  In this webcast, they discussed how to ease your transition to the cloud by using cloud computing technologies like cloud on-boarding. 

 

Cloud on-boarding addresses business needs, such as a spike in demand, business continuity, and capacity optimization. Enterprises can use  on-boarding to address capacity demands without the need to deploy additional infrastructure. Check out the webcast or head over to the Intel Cloud Builders Forum to discuss and learn more!

 

In the social stream:

 

Winston  Saunders shared:

 

Solid PUE, consolidation, and #datacenter efficiency  story from NREL bit.ly/r2yYHW

 

Intel "Cloud in a box" zd.net/r9cYxW

 

Intel Dalian: An Environmental Benchmark for China bit.ly/qkgfGX

 

Australia's Carbon Tax & Data Centers: Penalties likely  to boost focus on data center efficiency. bit.ly/pvTgIA  from Data center Knowledge

 

Raejeanne Skillern shared:

 

Exciting day for us DC folks at Intel - Intel buys networking chipmaker because the data center is now the computer gigaom.com/cloud/intel-bu…

 

IDC: #Cloud Server Revenue to Reach $9.4 Billion by 2015 http://bit.ly/p0x4eg

 

SeaMicro Packs 768 Cores Into its Atom #Server http://ow.ly/5H0jv from cmiller237

 

Good point -"isn’t the promise of choice, portability  & flexibility a big part of the attraction to cloud?" via @reillyusa bit.ly/rrQGKS

 

12 Top Thinkers in Cloud http://bit.ly/qhKTkX via @Apptio

 

In case you missed it... VMware intros vSphere 5 and cloud  infrastructure suite http://zd.net/q8kiNm via @zdnet

 

Allyson Klein shared:

 

Germany's leading #HPC lab, FZ Julich talks about the future of performance and early promise for #Intel MIC arch http://t.co/oREBK98

 

Part #3 of e-week labs review of #ODCA cloud  requirements - this 1 is carbon footprint http://t.co/tlkXigA

 

Part #2 of e-week labs review of #ODCA cloud requirements - this one is VM Interoperability and use of #DMTF's OVF spec http://t.co/Ax2G890

 

E-week Labs writing profiles of all 8 #ODCA usage  models. Great stuff @csturdevant Part  #1 is here http://t.co/qRkhKc6

 

The Cloud is About Usage Models, Not Technology: Industry  Perspective from Billy Cox of Intel on ODCA. http://t.co/jYvUWm3 from Data Center Knowledge

Cloud computing is a game changer for capacity planning. You will find several differences from traditional planning, and one of cloud’s core principles makes the task even harder: elasticity.

 

Some may argue that, in order to respect elasticity in a multi-tenancy environment, you should sum the peaks of your applications. However, this approach can erode the economic gains provided by cloud infrastructure. The idea that the cloud simply absorbs unpredictable loads is a user’s perception, not the reality that cloud IT architects work with.

 

Usually, for a cloud computing capacity plan, you must deal with three macro variables: user, application, and infrastructure. In a cloud environment, you have different applications and different user behaviors sharing the same infrastructure. There is no golden rule for characterizing each variable, but I personally adopt a strategy that starts from the least mutable (the user) and moves toward the cheapest/fastest to change (the infrastructure). If this is a brand new environment and you don’t know anything about your users, you can follow the reverse order.

 

Escher.png

 

Understanding the User Behavior


The first method to understand the user’s behavior for a defined application is to observe and try to find patterns, such as when usage is at its peak (i.e. day, week, month, etc.), how long a user spends in a given transaction, etc.

 

To illustrate the method, let’s use the application log to estimate user behavior, as described in the following table:

 

table.png

table - 01

 

In this hypothetical case, we have 40,000 user requests in a given day, which can be viewed graphically in Figure 01:

 

Gaussian.png

Figure 01 – Users requests in a given day

 

Using some math tools to make this usable for a capacity plan, we can describe this behavior with the Gaussian function (a.k.a. the Normal distribution), expressed as follows:

 

GaussianEq.png

 

In this equation, σ is the standard deviation (6,829.69) and µ is the arithmetic mean (4,444.44), which gives a Normal (y) value of 0.98863. With this equation, we can identify the expected number of user requests at any given time of day, as well as the absolute peak, where the first derivative is zero (the maximum value).
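To make this step concrete, here is a minimal Python sketch that estimates µ and σ from a hypothetical list of request timestamps and then evaluates the fitted Gaussian to predict the request rate at a given hour. The timestamps and the 40,000-request daily volume are assumed values for illustration only; they are not the figures behind table 01.

```python
import math

# Hypothetical request timestamps, expressed as hour-of-day (0-23).
request_hours = [9, 9, 10, 10, 10, 11, 11, 12, 14, 14, 15, 16, 16, 16, 17]

mu = sum(request_hours) / len(request_hours)              # arithmetic mean
sigma = math.sqrt(sum((h - mu) ** 2 for h in request_hours)
                  / len(request_hours))                   # standard deviation

def gaussian(x, mu, sigma):
    """Normal (Gaussian) probability density function."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

total_requests = 40_000  # assumed daily volume, as in the example above
# Expected requests arriving in the hour around 10:00.
print(round(total_requests * gaussian(10, mu, sigma)))
```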

 

Diving deeply into these numbers helps us estimate how many users we should architect the system to handle simultaneously. In order to measure it, let’s use the Poisson theorem:

 

PoissonEq.png

 

We can now define how many users the system should be designed for based on probability rather than guesswork.

 

Poisson.png

Figure 02 – Poisson distribution

 

In this example, the probability of having to handle more than 20 simultaneous users is less than 2%.
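For reference, the same calculation can be sketched in a few lines of Python; the average of 12 concurrent users is an assumed rate, and in practice the arrival rate would come from your own measurements:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson distribution with rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam = 12        # assumed average number of simultaneous users
threshold = 20  # capacity we are considering designing for

# P(X > threshold) = 1 - P(X <= threshold)
p_exceed = 1 - sum(poisson_pmf(k, lam) for k in range(threshold + 1))
print(f"Probability of more than {threshold} simultaneous users: {p_exceed:.4f}")
```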

 

The big picture

 

At this point, we can mathematically express the user’s behavior with precision. For each service hosted together, these equations can offer insight into the overall user demand on the entire cloud environment.
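As a rough sketch of that aggregation, the snippet below sums assumed per-service Gaussian demand curves hour by hour to approximate the total load on the shared infrastructure; the service volumes, peak hours, and spreads are invented for illustration only:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Assumed values: (daily requests, peak hour mu, spread sigma) for each service.
services = [(40_000, 11, 2.5), (25_000, 15, 3.0), (10_000, 20, 1.5)]

for hour in range(24):
    total = sum(volume * gaussian(hour, mu, sigma) for volume, mu, sigma in services)
    print(f"{hour:02d}:00  expected requests/hour ~ {total:,.0f}")
```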

 

In the next post, I’ll show how to use this to measure the application impact and how to deal with it.

 

 

Best Regards!

Download Now


Attorneys strive to reduce risk for their clients, so it makes sense that Paul, Hastings, Janofsky & Walker LLP (Paul Hastings), a top global law firm, would make risk mitigation a cornerstone of its IT strategy. Searl Tate, director of engineering at Paul Hastings, says simplifying the firm’s server and storage infrastructure with the Intel® Xeon® processor 5600 series helps avoid service interruptions and safeguard revenue streams while contributing to a 57 percent reduction in energy consumption.


“By giving us 50 percent more physical cores than the previous generation, along with the increased memory capacity and bandwidth, the six-core Intel Xeon processor has made a big difference in our ability to deliver higher virtual machine density per blade and higher performance, particularly for our I/O-challenged applications,” explained Searl Tate, director of engineering at Paul Hastings.


To learn more, download our new Paul Hastings business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

Download Now


elte.jpg

At Hungary’s ELTE University, the IT Services Department operates the IT infrastructure, coordinates application development, and takes responsibility for thousands of PCs and hundreds of servers. It also had pressing issues to solve:

  • Ensure the IT infrastructure allowed for international collaboration with other universities.
  • Consolidate and optimize resources, since each faculty had established and maintained its own IT infrastructure, creating a heterogeneous environment that was both complex and expensive to support.


With help from the Intel® Cloud Builders Program and Intel® Xeon® processors 5600 series, IT Services got the hardware support it needed for virtualization and central management capabilities for cloud-based services. The new cloud infrastructure helped to accelerate the integration of disparate IT systems across the university, eliminating the need for standalone servers and significantly reducing the number of physical machines. IT and storage virtualization helped to simplify data center management and enabled quick and safe system testing and upgrades. IT Services was able to establish 99.99 percent availability for IT management systems and student IT services.


“With the Intel Xeon processor 5600 series and the help of Intel Cloud Builder Program reference architectures, we were able to create a completely new IT infrastructure for both general IT services and HPC needs in one step,” explained  Dávid Ritter, CIO of the Department of IT Services for ELTE University.

To learn more, download our new ELTE University business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

Hello again. Welcome to my second post from Cisco Live! 2011. It has been a great show so far, loaded with activity at the Intel booth and a lot of exciting showcases of new technologies. With today’s post, I’d like to talk about a particular technology that seems to come up at every show: the 10GBASE-T Ethernet standard.

 

In previous posts, I’ve mentioned that Intel’s “Twinville” 10GBASE-T controller will be the industry’s first single-chip 10GBASE-T controller and will power the 10GBASE-T LAN on motherboard (LOM) connections for mainstream servers later this year. This integration, along with 10GBASE-T’s backwards compatibility with Gigabit Ethernet and support for already deployed copper cabling, leads us to believe that 10GBASE-T will ultimately be the dominant 10GbE interface in terms of ports shipped.

 

On Tuesday, Cisco nudged all of us a bit closer to that reality by announcing the Nexus 2232TM Fabric Extender, its first Nexus platform that supports 10GBASE-T. Let’s take a closer look.

 

Cisco Nexus 2000 Fabric Extenders behave as remote line cards for Nexus parent switches. They connect to the parent switch via 10GbE fiber uplinks and are centrally managed by that parent, creating a distributed modular switch – distributed because the parent switch and fabric extenders are not physically constrained by a chassis, and modular because additional fabric extenders can be added to increase switch capacity.

 

The Nexus 2232TM has 32 10GBASE-T ports as well as eight SFP+ ports for connecting to its parent, in this case a Nexus 5000 series switch. With that many 10GBASE-T ports, the Nexus 2232TM can connect to every server in a typical rack. Integration of 10GBASE-T LAN on motherboard (LOM) ports on these servers will drive adoption of 10GbE and 10GBASE-T over the next few years. Since 10GBASE-T is backwards-compatible with existing Gigabit Ethernet (GbE) equipment, IT departments can upgrade to 10GBASE-T all at once or even server-by-server and use the same fabric extender for all of the servers.

 

Here at Cisco Live, the Nexus 2232TM and Twinville-based 10GBASE-T adapters are key ingredients in a joint demo from Intel, Cisco, and Panduit. There’s quite a bit more to the demo (iSCSI, FCoE, live migration), but I don’t really have space to cover it here. I’ll see if I can post a short video of the demo in the near future.

 

Earlier this week, I spoke with Aurelie Fonteny, product manager for the Nexus 2232TM, who kindly answered a handful of questions.

 

BY: Aurelie, how does Cisco see 10GBASE-T growing over the next few years?


AF: The past few years have been marked with a trend towards 10 Gigabit Ethernet at the server access. Virtualization, consolidation of multiple 1Gig cables, price, and higher performance CPUs have all been drivers towards that trend. 10GBASE-T will accelerate that trend with the additional flexibility of connectivity options. Ultimately, LOM integration will drive the exponential volumes of 10GBASE-T platforms.

 

BY: What benefits does the Nexus 2232TM offer to your customers?


AF: Cisco is excited to introduce the Nexus 2232TM, the first 10GBASE-T product in the Nexus Family of Data Center switches. In total, 768 1/10GBASE-T ports can be managed from one single point of management. As such, the Nexus 2232TM combines the benefits of the FEX architecture together with the benefits of 10GBASE-T: 10G consolidation, 1G to 10G migration simplicity, and cabling simplicity.

 

 

BY: How do you anticipate customers deploying this product vs. the SFP+ version of the Nexus 2232?


AF: The Nexus 2232PP (the fiber version with Direct attach copper options) and the Nexus 2232TM share the same architecture and have the same number of host interfaces and network interfaces. The choice between the two platforms will be a trade-off between power, latency, cabling type, price, and FCoE support (not supported on Nexus 2232TM at FCS).

 

BY: We’ve heard folks say that 10GbE is too expensive. Can you talk about pricing for the Nexus 2232TM?


AF: The Nexus 2232TM is priced at a small premium over the Nexus 2232PP (fiber version with Direct Attach copper options). Total Cost of Ownership includes not only a point product but also cabling, server adapter, and power. As such, both solutions with 10G servers attached via direct attach copper (Twinax) from server to the Fabric Extender or 10G servers attached via 10GBASE-T from server to the Fabric Extender are about the same price today. Some of the decision factors would be requirements for FCoE consolidation, distances between servers and network access, cabling preference and existing cabling structure, and mix of 1G and 10G ports required at Top of Rack.

 

BY: How have Cisco and Intel’s work together advanced 10GBASE-T?


AF: Our strong collaboration with Intel from the early stages of 10GBASE-T on Catalyst and Nexus platforms has been critical to the ecosystem interoperability at the server access and to the overall high level quality achieved in 10GBASE-T product integration.

 

We are working with Cisco and Panduit on a white paper that includes key deployment models for 10GBASE-T in the data center. I will post a short blog when the paper is available.

 

Follow us on Twitter for the latest updates: @IntelEthernet

I’ve been writing about executing the Proof of Concept for a RISC to IA migration. I want to step back with this post and present a context for this activity. The POC is best for migrating an application running in production, and for a one-off migration at that.

 

But what if that’s not what you’re facing? What if you are converting your firewall servers from an old RISC-based platform to a modern IA server? The POC seems like a lot of unnecessary work; you just need to prove out that an IA server in place of the old RISC server works and meets the SLA requirements.

 

Then again, what if your IT Modernization efforts entail moving your old applications from the old mainframe to a new IA-based server? The POC in this case is a significantly more involved effort, what with code conversions and possibly application replacement on top of porting data.

 

I’ve developed a spectrum of the types of migrations. It doesn’t cover every migration type, but I’d bet that your IT Modernization effort falls somewhere on this spectrum. Here’s my graphical representation of it:

 

R2IA Continuum.jpg

 

The more replicable migration types include the migration of infrastructure applications from RISC to IA. These applications are ones with a native IA port from the vendor. They are diverse and can be backup and restore applications, firewalls, web servers, file and print servers, etc. This migration consists of testing the application on the new target and measuring performance. If it measures up, then develop a plan to replicate the steps and begin migrating the applications, somewhat like a cookie cutter. The next type of light-lifting migration is to move an application that is more complex but is commonly found throughout the enterprise. Applications in bank branches or retail outlets come to mind. Migrate one of these in test, document the steps, plan the logistics, and train a cadre to go forth and migrate. This too is similar to a cookie-cutter operation. The following flow chart captures the tasks:

 

Pilot Flow.jpg

 

Next in the spectrum are the more unique migrations that get more complex as the details grow. From here on out you’ll need to execute a PoC, and the migrations tend to be ‘one-offs’: each migration is a discrete event that frequently doesn’t inform any other migration. My forthcoming posts to this blog will cover the steps in these migrations in greater depth, but right now I just want to briefly touch on the general types that occur.

 

  • Migrate using the same software stack and same versions.  The software stack varies only to the extent that Linux varies from UNIX.  The other software has ports for RISC and IA and the software on each platform is nearly identical.  Just the data unique to the enterprise needs to be transferred.

 

  • Migrate using the same software stack but different versions.  The software stack variance is marked by different versions of the underlying software, for instance Oracle 9i on the legacy server migrating to Oracle 11g. While the software has ports for RISC and IA, the software on each platform now differs enough to add additional complications; migration tools available in Oracle 11g aren’t available for Oracle 9i. In addition to transferring the data unique to the enterprise, the application code in the database may need conversion to address the differences introduced by the upgrade.

 

  • Migrations get a lot more difficult with the porting of code written specifically for the enterprise.  This can be C++, C, or even Java code that needs to be converted. Some OEM vendors, such as HP and IBM, have tool sets for assisting in the migration from Solaris to Linux. These tools read the source code (you do have the source code, don’t you?) and provide in-line notations where changes need to be made in library calls or syntax. Often this code conversion effort is long and tedious and is outsourced to a third party. How many lines of code is the application you want to migrate: 100,000, 400,000, or 1,000,000?

 

  • Suppose you want to migrate your application from running on DB2 on AIX to running on SQL Server and Windows 2008 R2.  You are changing the platform and the operating system, you are changing the database, and you have to modify the program code that runs in the database, like stored procedures. Whew! There are a lot of variables here, and each needs to be carefully monitored and managed. This is a complex process, and I would recommend breaking the overall plan into discrete steps: for instance, migrate platforms first and ensure that everything works correctly, then move to the new operating system and database.

 

  • The far right of the spectrum is mainframe migration to IA.  This doesn’t necessarily mean that it is the most difficult or complicated process. It could be a migration of Oracle on the mainframe to Oracle on an IA platform, which would follow the process outlined above. Other applications on mainframes can be 50 years old or more. These are the applications that bring the greatest challenges. They would likely involve a complete application rewrite, and then we’re into application development, which is a topic for another series of blogs.

 

Overall, the migration processes are all challenging in their own way, but the rewards of lower costs and better utilization of data center resources (electricity, cooling, and floor space) make the journey worth the adventure.

Greetings from Cisco Live! 2011!

 

 

I’m in Las Vegas at another one of the IT industry’s big shows. Here, I will meet with customers and partners to talk about Intel Ethernet products and some of the new technologies that are changing the data center. Over the next few days, I’ll bring you updates on some of the significant network technology announcements taking place here, and I will explain why they are important.

 

Today’s topic: Energy Efficient Ethernet (also known as IEEE 802.3az, EEE, or triple-E).

 

The name is fairly self-explanatory, but I’ll give you a bit of detail. The EEE standard allows an Ethernet device to transition to and from a low power state in response to the changes in network demand. This means an Ethernet port that supports EEE can drop into its low power state (known as Low Power Idle or “LPI”) during periods of low activity, and then return to normal when conditions require it to do so. That is as deep as I’m going to go, but if you want nuts and bolts, check out “Energy Efficient Ethernet: Technology, Application, and Why You Should Care” from Intel’s Jordan Rodgers.

 

Intel supports EEE across our Intel® Ethernet Gigabit product family for both client and server connections, including the recently launched Intel® Ethernet Controller I350 and the Intel® 82579 Gigabit Network Connection:

 

  • The Intel Ethernet Controller I350 supports EEE across four Gigabit Ethernet ports integrated onto a single chip. In LPI state, power consumption drops by 50 percent. This controller powers the Intel Ethernet Server Adapter I350 family, and all of those adapters enjoy the same EEE benefits.

 

  • The Intel® 82579 Gigabit Network Connection is designed for client systems and in LPI state, its power consumption drops by nearly 90 percent. It’s also a key power-saving component of second generation Intel® Core™ vPro™ processor family systems, which also include power-saving enhancements in the CPU and chipset. For a real world example of how these features help save power and money, check out this case study: Pioneering Public Sector IT.

 

That takes care of the client side, but what about the other end of the wire? EEE only works when devices on both sides are EEE-compliant. That’s where Cisco comes in.

 

Earlier today, Cisco announced a number of enhancements (including EEE) to its Catalyst 4500E switch family. These switches are designed for campus deployments, which means they connect primarily to client systems – desktops and laptops. When you consider that many companies deploy thousands of client systems, the benefits of EEE are obvious: big energy savings.

 

Earlier this week I asked Anoop Vetteth, senior product manager for the Catalyst 4000 switch family, a few questions about Cisco’s support for EEE.

 

BY: Anoop, why is EEE support in the Catalyst 4500E line important to customers?


AV: Energy efficiency and minimizing power consumption to meet corporate sustainability goals seem to be top of mind for most of Cisco’s enterprise customers. To this effect, Cisco has delivered energy-efficient platforms targeted for both Campus and Data Center. Moreover, through applications like EnergyWise, Cisco has also enabled its customers to look beyond networking equipment to monitor and regulate the power consumed by the entire campus. The piece that was missing was a mechanism to dynamically reduce the network power consumption based on link utilization. Energy Efficient Ethernet, or IEEE 802.3az, addresses this and is probably the only green standard in the industry today. The EEE standard was ratified late last year, and we expect to see market leaders for end devices like Intel offer EEE as a standard feature starting mid to late 2011. Moreover, we also expect EEE to fast become a requirement from certification agencies for corporate compliance.

 

Catalyst 4500E is the world’s most widely deployed modular access switch and Cisco’s leading Campus access layer switch. This platform has leapfrogged the industry time and again in terms of being first to deliver many industry standard and pre-standard technologies. With EEE support, the Cisco Catalyst 4500E platform delivers the most energy-efficient platform in its class in the Campus and future proofs customer deployments for compliance with emerging regulatory requirements.

 

BY: Can you tell me how the collaboration between Cisco and Intel is making a difference for energy efficiency across the network?


AV: From the get-go, Cisco and Intel have been working closely together to deliver a solution that is compliant with the IEEE standard and to weed out any deployment-impacting issues. The EEE end-to-end solution will first be offered for the enterprise campus due to the nature of the traffic profile and the high impact, in terms of power savings, that we expect in this environment. The sheer volume of end devices coupled with the low link utilization in campus environments makes it ideal for the introduction of EEE technology. Testing with real-life traffic profiles on Cisco Catalyst 4500E switches and Intel EEE-capable network controllers reveals that EEE can help save on average as much as 1W per link. EEE in conjunction with Cisco EnergyWise translates to considerable savings in campus environments with tens of thousands of end devices.

 

BY: Can you tell me how Intel and Cisco have worked together to support EEE?


AV:  Cisco and Intel have been big proponents of the EEE standard at the IEEE 802.3 working group, and our representatives contributed collaboratively towards the successful culmination and ratification of this standard. The collaboration did not stop there. The EEE standard defines a new signaling mechanism between the host and end device to communicate EEE capability and negotiate precise timing parameters, including when to enter into LPI state and the corresponding duration. With no precedence and no governing body to check compliance, it became necessary to form an alliance to test and validate each other’s implementation. Cisco and Intel have been in lock step during this validation process to ensure that implementation is in compliance with the IEEE 802.3 standard. Finally, both companies have also come together to engage top customers collaboratively as part of Early Field Trial (EFT) or beta program.

 

BY: What results have you seen from your early field trial customers?


AV: We are collaboratively running EFT programs with some of our key customers from North America and Europe. This program started in mid-June and is well underway. Cisco and Intel provided the technical support required to get the setup up and running so that customers can use it to run traffic patterns/profiles and measure the power savings with and without enabling EEE. The feedback from customers has been overwhelming in terms of interest in this technology as well as the power savings they are seeing by using this technology. There have been some enhancements that both Cisco and Intel have incorporated into our products based on some of the valuable suggestions that we have received from our EFT customers.

 

BY: Will Cisco support EEE in other switches?


AV: Cisco considers EEE to be a strategic technology and will extend EEE support beyond the Catalyst 4500E platform. Next generation stackable Catalyst switches are expected to support EEE, and this will extend EEE support across all the Cisco campus access platforms. The relevance of EEE in the data center is expected to be more prominent and pronounced to customers as they transition to 10GBase-T links for server access.

 

For more information, see this white paper from Intel and Cisco: IEEE 802.3az Energy Efficient Ethernet: Build Greener Networks.

 

Watch for another update from Cisco Live! 2011 later this week.

 

Follow us on Twitter for the latest updates: @IntelEthernet

I had a chance to spend a day in Chandler, Arizona (not a  boondoggle if you go in July!) with David Baker and his Enterprise Server  Engineering team, which is part of Intel’s Developer Relations Division.


In layman’s terms, these are the guys that do all the work with our software partners to optimize performance on Xeon servers. We drive this team crazy every time we launch a new Xeon server chip, because all of the OEM and ISV partners look for benchmarks to show off their respective hardware and software performance. Internally, even the Intel server group is equally guilty, because they want to feature multi-core and scaling performance, along with neat new features like our AES-NI encryption instructions. But, as we all know, benchmarks are benchmarks. Every partner looks for the one that will highlight some unique feature of their implementation, and that’s all well and good. However, while customers may view benchmarks as necessary, rarely are they sufficient to demonstrate the actual deployed real-world workloads (I know, you are shocked!).

 

As a result, the months in between Intel Xeon chip launches are actually just as busy for our team in Chandler. That’s when they essentially work on customer workloads and/or interesting emerging technologies like the Franz Semantic Database. Do you know about Triples? While there is still pressure, it’s a much more creative environment, as they get the variety of challenges introduced by new technology like Franz’s AllegroGraph.

 

Lately, the team has had the chance to work with a lot of healthcare partners. For me, this is ultimately the “most real” application area, where you get to show end users visible technology improvements, such as faster diagnostic scan results. Whether it is delivered from dedicated systems or as Software as a Service, applications don’t get more mission critical than healthcare.

 

“In critical patient care situations like a stroke,  time is essential. Significant technology  advancements like the Intel® Xeon® processor 5500 series processor combined  with our Vitrea fX brain perfusion application enable the fast processing of  large amounts of image data to provide doctors with quantitative results  related to patients’ regional cerebral blood volume (rCBV), mean transit time  (MTT) and regional cerebral blood flow (rCBF).” Vikram Simha, Chief Technical Officer, Vital  Images

 

Hopefully, you also caught my  recent blog about how Intel Xeons help deliver digital mammogram results even faster and more  efficiently.

 

There are a lot of yet-to-be-announced efforts underway in additional healthcare workloads, BI, drug discovery, and other areas. If you’re a partner and you’ve worked with this talented team, or you work with them now, feel free to send along a thank-you! Keep watching for future results, and keep sending us challenges!

Download Now


itc_cs_donders_xeon_library_preview.jpg

To both meet the high demand for data processing and stay on its tight budget, the Netherlands’ Donders Centre for Cognitive Neuroimaging hoped to create a scalable high-performance computing (HPC) environment. Instead of taking the traditional route—deploying racks of servers to provide the environment’s computing power—Donders looked for a more cost-effective alternative. It chose Dell Precision* T3500 workstations with Intel® Xeon® processors 5600 series.


“We can easily provide researchers with the computing performance they need to continue their pioneering work, thanks to the scalability of the Dell [and Intel] solution,” explained Erik van den Boogert, head of the Technical Group at Donders Centre for Cognitive Neuroimaging.


To read all about it, download our new Donders business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

 

 

*Other names and brands may be claimed as the property of others.

I am excited to go to my first Cisco Live in a couple of days. The show is at the Mandalay Bay Conference Center in Las Vegas from July 11 to July 14, and it will draw their largest crowd so far: 15,000 IT professionals and innovators are expected to attend. The Intel booth will have a Mission Critical Computing theme and will be full of activity, with 8 demos and a completely booked speaking theatre featuring a plethora of topics such as the Intel Xeon processor E7 family, Intel’s tablet roadmap, Cisco CIUS, and more. Much of Cisco Live’s audience pursues new solutions and products or plans to increase their purchases. We hope to learn about their pain points and concerns so that we can address them as we move forward.

 

 

 

I look forward to meeting a ton of people and learning more about the end customer’s perspective! Also, it will be great to hang out with the various Intel folks around that will speak, meet with customers, and man demos and education pods.

 

I know that these shows can be overwhelming and that I will be busy as a floater for our demos and staffing the speaking theatre. I decided to plan ahead and come up with a list of the top three things that I MUST do at Cisco Live.

 

  1. Attend Healthcare Industry day. I work on vertical co-marketing programs, and healthcare is one of our major pushes. We will start a government health IT cloud campaign at the end of the summer and we will put the building blocks in place to jointly produce a white paper with Cisco. I am particularly excited to attend the Cisco Healthcare Solutions & Cloud Services session, because I will pick up a better understanding of the use of Cisco cloud services in the healthcare space.
  2. Take in all of the keynotes. There is a formidable keynote line-up that I do not want to miss. John Chambers, Chairman and CEO of Cisco; Cisco CTO Padmasree Warrior; and CIO Rebecca Jacoby will provide insight into the Cisco perspective. As a member of the account team, it is imperative that I understand Cisco’s interests, and this is a great way to better understand the big picture. Also, the closing keynote features William Shatner (maybe better known as Captain James T. Kirk in the Star Trek television series or Denny Crane on The Practice!), which provides a playful end to a jam-packed week.
  3. Attend the Data Center Townhall. Much of Intel’s business with Cisco is with their servers. The smarter I can get on server technology, the better!
  4. See either Cirque du Soleil or O. Okay, this is number 4 out of 3 and I won’t do this during Cisco Live hours, but I have heard so much about Cirque du Soleil that I‘d love to take the opportunity to see it.

Perhaps you've noticed that there are some fairly large systems being built using the Intel E7 Xeon processor.  HP, Fujitsu, Supermicro, NEC, Bull, SGI, and many others have either announced new 8-socket (or larger) designs, or described their intentions to do so.  You may be asking yourself, "Are there really enough customers for such systems to make the market interesting?"

 

After all, some would have you believe that there is no large data-analysis problem that can't best be solved with a pile of inexpensive dual-socket servers, some Ethernet, and an army of Hadoop programmers.

 

And, there is certainly a class of problems for which this approach is just fine. These problems are perhaps best described as the 'small number of needles in a very large haystack' sort.  Yet, there is another class of problems that is simply not well-suited to Hadoop-style processing.

 

As Franz Technology recently demonstrated at the Semtech conference, at least one of those problems appears to be the management of very large-scale, complex and rapidly-changing semantic database contents.  I've only known about Franz for about 9 months now. They were first brought to my attention by colleagues at Amdocs, who were using Franz' technology to explore some really cool ideas (more on that shortly). Less than a month before the event, Franz approached Intel with a proposal.

 

"We'd like to show the world that the loading and querying of a trillion triples is possible on a large-scale Intel server platform - can Intel help?" – asked Franz.

 

If you're anything like me, then you're probably wondering what the heck a 'triple' is in that sentence, and why you should care about the ability to load and query a trillion of them at once. I just had to satisfy my curiosity about this, so we found a way to get Franz some time on our big machine.

 

The term 'triple' is shorthand for the lowest level of data representation specified by RDF - the Resource Description Framework definition of the W3C. There's more to it, but the basic idea is to express any and all information in the form of subject->verb->object representation. An example of a single triple would be "Mitch", "Graduated From", and "Rice University".

 

My entire educational and work history could be expressed as a series of such triples. A Web-accessible database, such as a 'semantic' version of LinkedIn, could use a triplestore to support queries of work, educational history, geographic location, etc. that are far more accurate and sophisticated than mere full-text or keyword searches, which often produce nonsensical results and always require time-consuming human interpretation.
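To make the idea concrete, here is a toy Python sketch of a triplestore: facts stored as (subject, predicate, object) tuples and a simple wildcard pattern query. This is purely illustrative; it is not how AllegroGraph or any production RDF engine works, and the facts beyond the graduation example are made up.

```python
# Triples stored as (subject, predicate, object) tuples.
triples = {
    ("Mitch", "graduatedFrom", "Rice University"),
    ("Mitch", "worksFor", "Intel"),
    ("Rice University", "locatedIn", "Houston"),
}

def match(pattern, store):
    """Return all triples matching a pattern; None acts as a wildcard."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Where did Mitch graduate from?"
print(match(("Mitch", "graduatedFrom", None), triples))
```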

 

A relational database could be used to store my name, address, work history, educational history, etc. in a very storage and query-efficient form. However, someone has to design a relational database and someone has to maintain a relational database as the represented subject area undergoes change.

 

Relational databases are exactly the right way to deal with classic business structures and relationships - sales catalog items, warehouse inventory, shipping logistics, customers, orders, etc... And I'm not suggesting here that that's likely to change.  But relational databases are very difficult to press into service for 'fuzzier' applications, such as characterizing the linkages in a social network or predicting why a caller on a customer service line might be calling before the service representative picks up the phone.

 

Consider the case of a social network - the sort of things LinkedIn, Facebook, and others are doing every day, at massive scale. Here's a representative social network, one that should be fairly recognizable:

 

Advanced Social Network Example

 

Just for this simple example, representing all of the relationships in this social network in a way that allows calculation of 'social distance' and gauging of relationship strength (two things that turn out to be important) requires many thousands of triples. Real-world examples rapidly become vastly more complex.
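For a flavor of the graph computation involved, here is a small Python sketch that computes 'social distance' as the shortest path over hypothetical "knows" triples using a breadth-first search; the names and relationships are invented for illustration:

```python
from collections import deque

# Hypothetical "knows" triples.
knows = [("Alice", "knows", "Bob"), ("Bob", "knows", "Carol"),
         ("Carol", "knows", "Dave"), ("Alice", "knows", "Eve")]

def social_distance(start, goal, triples):
    """Shortest number of 'knows' hops between two people (None if unconnected)."""
    neighbors = {}
    for s, _, o in triples:              # treat "knows" as symmetric
        neighbors.setdefault(s, set()).add(o)
        neighbors.setdefault(o, set()).add(s)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for nxt in neighbors.get(person, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

print(social_distance("Alice", "Dave", knows))  # prints 3
```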

 

But Franz and their customers are demonstrating that triplestore approaches are in fact practical (and cost-effective), thanks to the enormous power and capacity of large-scale, enterprise-class Intel Xeon E7 server platforms.

 

The history of computing is basically the story of finding creative ways to burn ever-less-expensive compute time in order to save ever-more-expensive programmer time. Semantic database techniques are the latest in a long list of innovations to advance generality and flexibility that are only practically possible thanks to Moore’s Law. At Intel, our job is to keep Moore’s Law on track so those innovations can keep happening.

 

Check back next week to find out if Franz Technology was able to pull it off!

Download Now 


itc_cs_bccancer_xeon_library_preview.jpg

Who gets cancer and why? Scientists are using one of the world’s most energy-efficient high-performance computing systems—installed at the BC Cancer Agency’s Michael Smith Genome Sciences Centre (GSC) in Vancouver, British Columbia—to help answer those questions. GSC uses the Intel® Xeon® processor 5600 family for computational performance and the Intel Xeon processor 7520 for high-speed storage. The solution provides eight times the processing power and 10 times the I/O performance of GSC’s previous infrastructure, and researchers using it say their work can help lead to breakthroughs in understanding, preventing, and treating cancer and other genetic diseases.


“We were looking for a processor that had good characteristics in both memory-intensive and processing-intensive workload environments,” explained Greg Stazyk, senior manager of research systems for GSC, “and we wanted to get as many cores in an energy-efficient package as we could. The six-core Intel Xeon processor 5670 fit our needs very well.”


To learn more, download our new GSC business success story. As always, you can find this one, and many more, in the Intel.com Reference Room and IT Center.
