
So I recently did a great podcast where I talked about Citrix OpenCloud, Citrix OpenCloud Bridge, and the on-boarding whitepaper I did with Intel. Check out the podcast here. From this podcast, the following transpired…


So I went to the store the other day and bought some good old “Ready to Bake” cookie mix. The great thing about this is you always know what you are going to get. You follow the directions, turn on the oven, and place the dough on the pan. The only mistake you can make is baking them too long! Twelve to fifteen minutes later you have a nice batch of warm cookies.


Kind of boring, right? So then I got to thinking: what if I used a cookie cutter to make some cookies? That would change things a bit, right? Here is what I ended up with:




Still pretty bland…


So I got to thinking about how Citrix OpenCloud is changing the game with cloud computing, taking clouds that are “cookie cutter” and giving them the opportunity to spice it up a bit. Sure, I could add frosting (a logo) to these cookies, but underneath they are still the same ingredients and brand. They will have a little sugar taste, but that is about it! Your neighbor (cloud providers) can make the same cookies, and I bet they will taste the same! Heck, your kids (private clouds) can even make these cookies (with adult supervision, of course), and I bet they will taste the same! But what if you want brownies, cupcakes, cake, pie, pizza dough, or even spaghetti? What are your options? Sure, you can go with “Ready to Bake” or store-bought options, but what if you prefer wheat flour, organic products, or a specific brand of ingredient?


How can we address these desires and at the same time kick it up a notch (differentiation)?  Well, I went out and bought one of these:




Not only can I make cookies (adding as many chocolate chips as I want), but I have also become quite the homemade pasta maker:



So think of IT administrators or cloud providers as the bakers, and Citrix OpenCloud as your high-end mixer: you can add any ingredients to it in order to make whatever it is you want to make. Citrix OpenCloud provides a few key ingredients to choose from when building out a cloud: XenServer, NetScaler, XenApp, and XenDesktop. Yet Citrix OpenCloud also lets you bring a wide variety of ingredients to the table in order to build YOUR cloud. Citrix OpenCloud does not lock you into any particular hypervisor, web portal, or API (as examples). Citrix OpenCloud allows you to build a cloud that can compete, perform, differentiate, and meet your needs or your customers' needs.

One could argue you can go from package to cookie in little time with a "Ready to Bake" option; a very valid point. Citrix OpenCloud will allow for any combination of ingredients but at the same time, we offer up our own recipes for those who want a cookie cutter scenario. In the end, Citrix OpenCloud is truly open, allowing you to build a cloud that meets the needs of your organization or cloud offering and when needed, we can provide you with a "Ready to Bake" option.

… So again, check out the podcast!


Disclaimer: I actually did make pasta and no one died from eating it!


Now to think up more recipes…


Want to dig into this topic and more? Come to Citrix Synergy 2011 (May 25–27), where Intel is a Platinum Sponsor!


Be sure to check out my sessions on Citrix OpenCloud and Citrix OpenCloud Bridge: SYN208 and SYN213.


You can register for Synergy here.


Details on this session are available here.

If you’re a data center or network professional, you’ve probably heard of unified networking. If you’re not familiar with it, the concept of unified networking is pretty simple: combine the traffic of multiple data center networks (LAN, storage, etc.) onto a single network or a single network fabric – in this case 10 Gigabit Ethernet. The benefits are just as simple as the concept: simpler network infrastructures, lower equipment and power costs, and a trusted, familiar fabric as its base.


The idea of Ethernet incorporating different types of network traffic isn’t new; it’s been happening for years. VOIP, streaming video, storage traffic – Ethernet has shown that it’s flexible and scalable enough to incorporate all of them and more. What’s been missing until recently was the dominant data center storage fabric: Fibre Channel.


Today, nearly every enterprise IT department maintains separate LANs and Storage Area Networks (SANs), the latter of which is often Fibre Channel, though iSCSI is growing rapidly. A separate SAN infrastructure requires storage-specific server adapters, switches, cabling, and support staff. Fibre Channel over Ethernet (FCoE) allows Fibre Channel frames to be encapsulated in Ethernet packets and travel across a 10GbE infrastructure. This convergence can lead to big reductions in storage network equipment and its associated costs. There’s more to it than that, but that’s enough Unified Networking 101 for now.
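To make the encapsulation idea concrete, here's a minimal Python sketch of wrapping a Fibre Channel frame inside an Ethernet frame. The header layout is simplified from the FC-BB-5 spec (the FCoE Ethertype is 0x8906), and the MAC addresses and payload below are placeholders, not real traffic:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # Ethertype assigned to FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in a simplified FCoE Ethernet frame."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + b"\x36"   # version/reserved bits, then a start-of-frame marker
    eof_trailer = b"\x41" + bytes(3)    # end-of-frame marker plus reserved padding
    return eth_header + fcoe_header + fc_frame + eof_trailer

# Placeholder MACs and payload, purely for illustration.
frame = encapsulate_fc_frame(b"\x01" * 6, b"\x02" * 6, b"fc-frame-bytes")
```

The point is simply that the FC frame travels untouched inside a standard Ethernet payload, which is why the storage stack above the encapsulation layer can keep working as if it were on a native FC fabric.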


So with that out of the way, let’s take a look at some recent developments in the world of unified networking and see where things are headed.



In late January, Intel announced the availability of Open FCoE on the Intel Ethernet Server Adapter X520 family and qualification by EMC and NetApp. Why was that important? Because it means standard, trusted Intel Ethernet adapters now deliver complete LAN and SAN unification with FCoE and iSCSI support at no additional charge. Until then, FCoE support required expensive and complicated converged network adapters (CNAs). Open FCoE incorporates non-proprietary FCoE support into the operating system, similar to what’s done for iSCSI today, for performance that is optimized for multicore processors and will scale as processors and platforms get faster. It’s also a simpler solution that allows IT to save money and simplify management by standardizing on a single adapter or adapter family for all LAN and SAN connectivity.


And earlier today, Cisco announced several product and family updates that provide further evidence of Ethernet’s expanding reach in the data center:


  • FCoE support on the Nexus 7000 series switches and MDS fabric switches is important for a few reasons. First, it shows that FCoE and unified networking in general are pushing deeper into the network, beyond top-of-rack switches to the director-class switches that IT depends on to meet the high-availability requirements of mission-critical networks. Second, customers now have more choices for deploying unified networking. Today, many 10GbE unified networking deployments rely on a top-of-rack switch that connects to the servers in the rack. The port density of the Nexus 7000 means they can bypass the top-of-rack switch altogether, simplifying their switching infrastructure by removing a layer of switches. And finally, these product updates bring the same functionality and scalability of Fibre Channel SANs to FCoE, which should help allay fears that FCoE isn’t ready for prime time.


  • Cisco has updated the Nexus 5000 switch family with fixed ports that support 10 Gigabit LAN traffic, 10GbE SAN traffic (iSCSI and FCoE), and native Fibre Channel connections. These “unified ports” will make it easier for IT to connect to FC SANs, as previous switches in this family required add-in modules to connect to those networks. These ports also provide an easy upgrade path for folks who are deploying 10GbE and plan to enable FCoE in the future.


  • The Nexus 3000 is a new family of switches aimed at environments where low-latency performance is critical, such as financial services and high-performance computing clusters. InfiniBand networks are often used for clustering, but with the rise of technologies such as iWARP (Internet Wide Area RDMA Protocol), Ethernet can deliver the same performance while providing a more flexible and better-understood network fabric.


It’s pretty clear that Intel, Cisco, and others are headed down the same “Everything Ethernet” path. And why wouldn’t we be? Time after time, Ethernet has shown that it can expand to incorporate new types of traffic. And with the roadmap to 40GbE and 100GbE already sketched out, Ethernet has plenty of headroom for growth. So stay tuned. We’ll have more to talk about in the coming months.

There are moments when the light bulb just goes on and an awareness passes over me that says “innovation is here.” The first time I played Zork on an Apple IIe…my first Prodigy…my first BlackBerry (and the love of mobile that ensued). Moments that said the way my world worked was going to change because of technology…in a good way. I had one of these at IDF last fall when I saw a demonstration of a tool that allowed data center managers to dynamically access compute power in an open market environment…and other data centers to sell excess capacity in the same market. Imagine a world where workloads could be dynamically allocated to data centers across the globe based on a variety of attributes including bandwidth capability, cost, and platform requirements. Pretty cool, huh? I walked away impressed, wondering who came up with this interesting idea.


Well, wonder no more, my friends. The guy behind this great idea and many others is none other than Enomaly’s Reuven Cohen (his Twitter followers call him @ruv). Reuven is the brains behind Enomaly’s recent rise to cloud start-up darling. I got a chance to chat with Reuven about his idea of an open cloud market, now called Enomaly’s SpotCloud solution, as well as their more established Elastic Computing Platform. Check out the latest Conversations in the Cloud.

From aerospace to plastics, engineers need greater scalability and performance to obtain faster turnaround times on higher-fidelity simulations. Engineering software leader SIMULIA has invested in an HP ProLiant* DL1000 cluster based on the Intel® Xeon® processor 5500 series to help meet those needs. SIMULIA uses the 512-core cluster to optimize and test its codes for scalability and performance, and to improve post-sales support for its fast-growing high-performance computing (HPC) customer base. Now SIMULIA plans to acquire a cluster based on the Intel® Xeon® processor 5600 series and expects to see another 15 to 20 percent increase in performance.


“If you’re looking at the performance per unit of space and the amount of power you’re consuming, benchmarks will demonstrate the obvious benefits of Intel® processors,” said Matt Dunbar, chief architect at SIMULIA.

The new cluster has let SIMULIA strengthen its HPC leadership by demonstrating that its realistic simulation solutions scale across large clusters and can be supported effectively. Customers can now improve their engineering productivity with rapid turnaround times—even as they run larger, more detailed models. Customers can also control costs by running workloads on cost-effective, Intel® technology-based clusters that previously required expensive RISC or single-node, large-memory platforms. The cluster also doubles SIMULIA’s computing capacity, allowing its developers to deliver new capabilities more quickly.


To learn more, read our new SIMULIA business success story.  As always, you can find this one, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.

A couple of weeks ago, I presented on the work we have been driving around new security capabilities at the RSA Conference. Not surprising, I’m sure, but one of the hottest topics circling the conference was security in the cloud. Walking off the plane, I was greeted by massive signs espousing the latest cloud security innovations from industry leaders like Symantec and EMC. OK, maybe I was just a little excited by the hype, but I couldn’t help but feel like I had arrived at some kind of techy paradise.





Even before the first official keynote of the show, the Cloud Security Alliance kicked off with a robust discussion on the issues and threats facing the cloud. Even Marc Benioff and his posse showed up to join the debate. There seems to be little debate that cloud computing is reshaping the operational practices and user expectations of the way IT services are delivered. At the same time, there is clearly an aura of concern and a desire to understand the implications of the cloud on data security and privacy.


One of the most valuable (and expected) elements of cloud is its capacity for self-service—users can create an email account or a fully functional virtual machine in seconds. Many of the enterprise IT organizations I’ve talked to are looking to adopt cloud-based models because of the self-service capabilities they bring. The control and flexibility of self-service is so compelling that several CIOs have mentioned the emergence of “rogue IT” factions in their organizations that are turning to external clouds to deploy the services they demand. On the upside, the agility virtualization enables and the emergence of innovative cloud operating environments like vCloud and OpenStack are putting self-service within reach.


Whether businesses rein in rogue IT or embrace it as an accepted process, one fact remains: where there is data, there is opportunity for theft, misuse, and compliance violation. The focus on cloud at RSA this year made it clear that the industry, governments, and businesses are looking at ways to ensure they protect their infrastructures from potential threats. Mitigating threats and improving data security is a multidimensional challenge—it is about layers.


At Intel, we have been working to improve assurance of the infrastructure at some of the lowest layers—the foundation of the data center infrastructure. We are working to improve assurance of server platforms through a trustworthy boot process we call Trusted Execution Technology (TXT). When data center servers boot, the system can go through a routine of measuring the low-level firmware and verifying the hypervisor prior to launch. A hash of this measurement is stored in a tamper-resistant place on the platform called the Trusted Platform Module. This measurement becomes the basis for establishing trustworthiness of the platform foundation. Deviations from expected measurements can invoke an exception and enforce appropriate launch control policies to keep potentially compromised platforms from joining the infrastructure resource pool.
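The measure-extend-compare idea can be sketched in a few lines. This is an illustrative model of the chained-hash concept, not the actual TXT or TPM interface, and the component blobs are placeholders:

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: the register can only accumulate, never be rewritten.
    return hashlib.sha256(register + measurement).digest()

def measured_launch(components, golden: bytes) -> bool:
    """Hash each boot component in order; launch is trusted only if the
    chained result matches the known-good ("golden") value."""
    pcr = bytes(32)  # the measurement register starts zeroed at platform reset
    for blob in components:
        pcr = extend(pcr, hashlib.sha256(blob).digest())
    return pcr == golden
```

A deviation anywhere in the chain (tampered firmware, a modified hypervisor) changes every subsequent value, which is what lets launch control policy keep an untrusted platform out of the pool.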


Launch measurement and control is just the beginning. Working with ecosystem partners like RSA and HyTrust, we are exploring a series of new usage models that use the measurement of infrastructure assurance as a basis for isolation and migration of virtualized workloads—a concept we call Trusted Compute Pools. In these models, assurance measurement becomes a core element of virtualization migration policies. Security management consoles (e.g., GRC tools) can incorporate these measurements into the way cloud-based workloads are managed and deployed.
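In policy terms, a trusted compute pool is essentially a filter over the resource pool. A toy sketch (the host names and measurement values here are made up for illustration):

```python
def trusted_hosts(hosts, known_good):
    """Only hosts whose launch measurement matches the known-good value
    are eligible targets for placing or migrating a sensitive workload."""
    return [h["name"] for h in hosts if h["measurement"] == known_good]

pool = [
    {"name": "host-a", "measurement": "abc123"},
    {"name": "host-b", "measurement": "f00dd00d"},  # unexpected measurement
    {"name": "host-c", "measurement": "abc123"},
]
eligible = trusted_hosts(pool, "abc123")  # host-b is excluded from the pool
```

A migration policy built on this filter never even considers a platform that failed its launch measurement, which is the core of the Trusted Compute Pools concept described above.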


This is just one step in what we can expect will be a long journey in the cloud computing evolution, but establishing trust in the infrastructure is poised to become a critical foundation. In subsequent posts I will share more details on some new usage models, like geotagging, and the evolution of the solutions we are working on with our partners. I will also be digging into a number of other data center business and technology topics.


Sparc Arrest

Posted by K_Lloyd Mar 21, 2011

Pun intended.


SPARC migration has been the “topic du jour,” or more accurately, if less cliché, the “topic de l’année,” with many of my customers. I am a solution engineer covering the Northwest and Canada. In this region there is a substantial installed base of SPARC systems. Many of them are getting a bit old. Virtually every user I have spoken with would like to migrate these systems to x86.


Their reasons for migration are varied, but they generally hit on some common themes.

  • Reduce the number of hardware architectures supported
  • Reduce the number of operating systems supported
  • Reduce maintenance contracts
  • Address licensing concerns
  • Move to supported (or better-supported) platforms
  • Address performance gaps
  • Concerns about ecosystem


In general, SPARC has not kept up with Moore’s Law. I do not mean to imply that there have not been advances and some great products, but if we compare performance and price/performance of the silicon, Intel Xeon is a strong leader.


This performance gap is especially apparent for older systems. For example, if we take SPECint_rate_base2006 as a pseudo-indicator of “general enterprise workload performance” (I hate benchmarks, but you have to use something), we see that a single four-socket Xeon 7560-based system delivers about the same performance as a 2004-vintage 72-socket Sun Fire E25K UltraSPARC IV system.


In other words, the 72-processor system that seven years ago was sized to run your “large” ERP, decision support, or CRM systems can be replaced by a single compact blade or rack Xeon server.


Using this benchmark, Xeon beats even the latest SPARC T3-4 system socket for socket. Price/performance is even better.
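The socket arithmetic behind that comparison is worth spelling out. Taking the benchmark equivalence above at face value (roughly equal total throughput from the two systems), the per-socket advantage is simply the ratio of socket counts:

```python
# Socket counts from the comparison above; performance equivalence is the
# benchmark claim, so this is illustrative arithmetic only.
e25k_sockets = 72      # 2004-vintage Sun Fire E25K
xeon_7560_sockets = 4  # single four-socket Xeon 7560 system

per_socket_advantage = e25k_sockets / xeon_7560_sockets  # 18x per socket
```

An 18:1 socket consolidation is before you even count the savings in floor space, power, and maintenance contracts that motivated the migration list above.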


I get that migration is hard, and a bit scary. It may be better to stay on SPARC than risk the company’s uptime… but the risk can be minimized. There are many companies that have made the move. The Xeon architecture, especially in the EX class, is very robust. High-availability configurations are available. Virtualization provides the lubrication for easy and dynamic scaling across machines and sizes.


The time is right to make the move.

Comparisons sourced from

Thailand-based technology services provider TCC Technology (TCCT) needed to consolidate its server infrastructure and embrace an open, standards-based technology environment. The solution it found was to move its RISC-based applications to run on the Intel® Xeon® processor 5600 series platform. The move enabled TCCT to consolidate on an open platform with the flexible resources, top performance, and agile services its private and public cloud customers need.

The new platform incorporates high performance with the energy-efficient benefits of 32nm silicon technology and advanced power state management. This enables TCCT to boost its total computing performance with no significant increase in power and cooling requirements. The open, standards-based architecture of the new platform freed TCCT from its dependency on proprietary hardware systems. This meant TCCT had greater freedom of choice and could benefit from the wide range of configurations and vendors.

“We have made a decision to walk away from a proprietary to an open system in order to reduce costs and mitigate dependency risk,” explained Kosit Suksingha, managing director of TCC Technology.

To learn more, read our new TCC Technology business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.

Airline communications and IT provider SITA wanted to enhance business flexibility by building a cloud computing infrastructure with the scalability to meet growing demands and the flexibility to accommodate the fast-changing air transport industry. It also wanted to define a reference architecture for running mission-critical solutions that could deliver the performance to process complex transactions with low latency. Finally, it needed to make sure its infrastructure was efficient enough to reduce operating expenses and keep transaction costs low.


The solution was IBM BladeCenter* servers with the Intel® Xeon® processor 5600 series, which SITA chose as the reference architecture for its cloud computing environment and its grid-based international fares pricing solution.


“Using IBM BladeCenter servers with the Intel Xeon processor 5600 series enables us to provide a tremendous amount of processing performance in a highly compressed footprint,” explained Chris Lofton, head of technology planning for SITA. “We are delivering better performance for mission-critical applications while substantially reducing our infrastructure.”


For details, read our new SITA business success story. As always, you can find this one, and many more, in the Reference Room and IT Center.



*Other names and brands may be claimed as the property of others.


I just got back from Austin after spending 5 days at the SXSW Interactive Conference. Imagine thousands of social-media-happy geeks standing together tweeting, texting, and talking. Let’s just say that interactive conference goers know how to INTERACT…and they do it at warp speed. A topic that seemed to come up in every session (other than the gaming era and the Thank You Economy) was cloud computing. And every time it was brought up, it seemed folks were talking about something slightly unique in definition from everyone else. In fact, many conversations were “clouded” due to the lack of a crisp picture of what cloud actually means (more on this definition problem from me in the following weeks).





Luckily for us, along with the happy conference goers, Austin also features a sizeable leader in cloud computing and Intel Cloud Builders partner: Dell. I was happy to get some time with Dell’s Barton George to help work through the definition of cloud…evolution or revolution? Migration to public data centers or new uses of enterprise data centers? Overhyped same old, same old, or exciting breakthrough in capability and efficiency? Barton is Dell’s evangelist on cloud computing, and you just have to love that title. The good news is that he lives up to the title’s billing. His enthusiasm for cloud would match any white-shoed televangelist, but he backs up this enthusiasm with a deep understanding of what cloud means to Dell…and perhaps what it should mean for you and me as well. Barton also told me what Dell engineers are working on with other leaders in the industry (including my Intel co-workers) to bring secure, efficient cloud solutions to the marketplace. Check out our Conversations in the Cloud podcast. If you like what you hear, please subscribe to our RSS feed or connect with us on iTunes. If you would like to learn more about Dell’s cloud reference architecture and solutions, register for our upcoming Intel Cloud Builders webcast. And if you want to keep up with our musings on cloud and other data center innovations, follow Barton at @Barton808 and me at @techallyson. I’d love to hear from you about the podcast, cloud computing, or whatever else you’d like to share.

Today Intel refreshed its product lineup with new processors targeted at the micro server category. With processors ranging from 45 watts down to sub-10 watts, optimized for dense server solutions, the news of these efficient little brothers to Intel's mainstream Xeon products addresses a unique market opportunity emerging in the web hosting and IPDC arena.


While most customers will continue to require the performance, efficiency and technology capability of our current 2+ socket Xeon platforms, some customers supporting simple workloads like static web hosting see benefit in trading capability for density and lower power.  As micro servers address these market desires and mature, we expect the category could comprise up to 10% of the server market over the next 5 years.


Seeing Intel talk about the Atom microarchitecture for use in servers may cause some to wonder if data centers will switch completely to aisles of "wimpy" nodes in the future. Big data center providers like Google will tell you that wimpy nodes are good for wimpy tasks, which are not very common in today's data centers (PDF). But are these little guys wimpy, or just designed for the task required? Just as Toyota delivers vehicles from a Yaris to a Camry to a Land Cruiser to meet different customer requirements, Intel delivers a wide array of technology solutions to meet our customers' workload and data center requirements as well. We believe all server processors, small to large, need to have baseline server capabilities such as ECC, 64-bit support, and virtualization technologies to ensure they are data center worthy – just like all cars provide the basic safety features required to be on the road today. The result of this belief? Even the least of these processors can stand proud next to its more powerful Xeon brothers. Not to mention, they all deliver the benefit of software compatibility across the entire Intel architecture family.


To learn more, check out my Chip Chat with Raejeanne Skillern on the topic.  I'd love to hear from you on your company's interest in this new segment...and if you think that these platforms may solve some problems for you or aren't the right fit.

Cloud Connect 2011 was a success for all, especially for Intel, which got to show and tell people where it is in the conversation about the cloud. According to attendees, attendance this year was 68 percent greater than last year. Guess that clears up the question of whether the cloud is just buzz or truly forward-looking. But questions still filled the air at the cloud events that led up to Cloud Connect…




When people think “cloud,” they think Google or Amazon. What about Intel? When people hear, say, or see “Intel,” they think of the microprocessor company. Intel – just the processor company? At Cloud Connect, we were able to help tell the story of where Intel fits in the conversation about the cloud. Along with the AES-NI encryption demo showing the performance difference between our Xeon 5500 and Xeon 5600, and the very popular Nordic Edge One Time Password demo, Intel was able to help answer that question with two other demos – Intel® Expressway Cloud Access 360 and the Intel Cloud Builders demo.


Many people asked what Intel has to do with the cloud. The Intel Cloud Builders demo of the eBook, or guide, was a great way to tell the story of how we have worked with many of our ecosystem partners to develop the first 20 of our reference architectures and pull together all the case studies to help future Intel customers and partners find the perfect recipes for building their cloud. Intel will be their trusted advisor, and as we continue to build out more RAs, we will also build on the Intel Cloud Builders Guide (eBook), providing continued references within the guide in the form of videos, webcasts, podcasts, whitepapers, and more to help our customers and partners build the cloud that perfectly fits their business. Intel in the cloud?


Wait…there’s more. The Intel® Expressway Cloud Access 360 demo (also available on YouTube with Vikas Jain) focuses on the biggest topic around cloud – SECURITY IN THE CLOUD. Cloud Access 360 secures the client to the cloud. It is the first solution suite designed to control the entire lifecycle of cloud access by providing SSO, provisioning, strong authorization, and audit. So again, Intel in the cloud?



Outside of the demos, Intel was also there with a camera and questions. Interviews were being conducted left and right with attendees to find out their definition of the cloud, how their business fits in with the cloud, what brought them to Cloud Connect this year, and many other interesting questions that brought the conversation about cloud to a whole new level of learning and excitement. Overall, attendees are starting to build on the hybrid cloud, they do see a future for cloud computing, and many different verticals are using or providing the cloud today – education, worldwide businesses, technology, financial businesses, and more. Best of all, people are understanding not only Intel’s security benefits in the cloud (and, for some, Intel® TXT), but also that Intel, while still the microprocessor company, serves as a trusted advisor in the conversation about cloud computing.


Intel was also able to sit down for interviews with some of the partners and customers that helped fill out the Intel Cloud Builders reference architecture library, including Cisco. We also talked with Terremark, one of the Steering Committee members of the Open Data Center Alliance (ODCA), and spent some quality time with McAfee, HyTrust, RSA, and a few others. Along with the interviews at the Intel booths and on the show floor, stay tuned for videos from the successful Cloud Connect 2011!

Last week I wrote about Intel's Day in the Cloud event. I've still got a lot to cover from that content-rich day and will be back later this week to discuss the additional reference architecture solutions showcased by Intel Cloud Builders ecosystem partners.


This week, however, is Cloud Connect, an event that is becoming a central hub of information on the latest in cloud solutions. Intel is there showing off our platforms and announcing a cool new technology called Intel Expressway Cloud Access 360... a software solution that enables much tighter control of password authentication as well as the ability to unify passwords across cloud apps. This seems like a simple concept in theory, but if you think of the implications of multiple services served up across the cloud in coming years, password administration and secure access control could quickly become a nightmare for IT organizations without this type of innovation in place.




I recently learned a lot from Intel's Vikas Jain (@VikasJainTweet) about the innovation. Check out our conversation in our latest Conversations in the Cloud podcast.

We always like to feature Xeon server customer success stories at Intel, in order to help us make our point about total cost of ownership, refresh cycles, ROI and power savings.  They are always credible and interesting examples of the various ways our technology is actually deployed by creative customers.



But, I recently happened across Intel IT’s annual performance report for 2010-2011. I’m going to spend some time on it, as it is a shining example of what IT can accomplish, even during times of restricted budgets, and pressure to respond to requests for business agility.  And, it’s an example of us “eating our own dog food.”



Let’s face it, most of us only reflect on IT when we’re cursing it because something has gone wrong. But, I urge you to do something different.  Go check out this great case on Delivering Competitive Advantage through IT.


There are two things that strike me:


  • First, Intel is doing exactly what it recommends to customers:
    • A 40% reduction in ERP servers while increasing capacity by 260%
    • A power load reduction of 7.35 million kWh
    • $47.6M in savings from eliminating all single-core processors in the design infrastructure
    • A 3.5X increase in the use of server virtualization, with consolidation ratios of 20:1
    • 3-hour automated self-service deployment of infrastructure services in our private cloud

I’m sure this sort of improvement is what every IT manager wants to deliver!


  • Second, and perhaps more important, are the business improvements enabled by these changes, which are both breathtaking and essential in today’s competitive environment:
    • 65% decrease in manufacturing cycle time
    • 65% reduction in order-fulfillment time
    • $26M in travel avoidance due to video conferencing
    • 8X the number of laptops with solid state drives (my personal favorite!)
    • 32% reduction in inventory
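The first two server numbers compound nicely. A quick back-of-the-envelope calculation, using only the figures quoted from the report:

```python
# 40% fewer ERP servers, 260% more capacity (figures from the IT report above).
servers_remaining = 1 - 0.40   # fraction of the original server count left
total_capacity = 1 + 2.60      # capacity relative to the original baseline

capacity_per_server = total_capacity / servers_remaining
# Each remaining server delivers roughly 6x the capacity it replaced.
```

That 6x per-server gain is the refresh-cycle argument in miniature: fewer boxes, more work done.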


It’s this kind of quick impact on business results that every company, big or small, needs these days, and it comes from IT investment. It’s also the kind of track record that can help IT organizations when it’s time to plan next year’s IT budget!


If this all seems too simple, check out all the details in Intel IT’s annual performance report for 2010-2011.   We’ve also got white papers, videos and blogs by checking out IT@Intel  and you can engage with our Open Port IT Community & Blogs.



Charité Universitätsmedizin Berlin is the oldest hospital in Europe, dating to 1710. With the help of SAP In-Memory Appliance Software* (SAP HANA*) running on the Intel® Xeon® processor 7500 series, it plans to complement that long history with cutting-edge business intelligence. The hospital will use the solution to generate rich, ad hoc reporting from business data that isn't possible with its current systems. As a result, the hospital is poised to take better advantage of its data resources in real time, generating better patient outcomes and greater operational efficiency.

“Our implementation of SAP HANA running on the Intel Xeon processor 7500 series will enable us to generate robust cost and revenue trending in near-real time and with greater detail than before,” explained Martin Peuker, deputy CIO of Charité Universitätsmedizin Berlin.

For all the details, download our new Charité Universitätsmedizin Berlin business success story.  As always, you can find many more like this on the Business Success Stories for IT Managers page.  And to keep up to date on all the latest business success stories, follow ReferenceRoom on Twitter.

The word comes down from on high: “I just talked to the rep of our favorite OEM. She told me that we can replace our mainframes and older RISC systems and get great performance and reliability. So let's set an objective to get rid of our old mainframe and RISC servers and move everything over to Intel-based servers! The ROI is compelling. I want this done by end of year.”


The folks on the ground hear these words with dread, some excitement, and frequent rolling of the eyes. They are the staff who have to implement this migration. Lots of questions arise.


  • What are the steps to moving applications from a RISC system or a mainframe to an x86 system?
  • What do we buy to replace our big servers?
    • What is the architecture of the new system like?
    • What performance requirements do we need to hit? 
      • I/O?
      • Memory?
      • CPU?
  • How do we measure our existing application requirements and map that to Intel-based servers?
  • How do our old systems compare to the newest Xeon servers?
  • What’s the solution stack to manage and maintain our new systems?
  • How can I get the best performance out of my new Xeon system?
    • What is NUMA (Non-Uniform Memory Access)?
    • How do I configure the new system?
    • How do I manage the new systems?
    • Can we do this without service disruptions?
    • What are the migration options available to me?
      • How do I migrate the database?
      • How do I migrate the front end servers?
      • Do I have to re-compile everything?
        • Why?
        • Won’t the users get confused about running on a new system?
        • Do my old skills in UNIX handicap me; will I lose my job?
          • How does Linux differ from UNIX?
          • Really, how reliable are the new Intel-based servers?
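Questions like “how do we map our measured requirements onto new servers” can at least be framed with simple arithmetic. The sketch below is a back-of-the-envelope sizing calculation, not any official methodology; the throughput ratings, utilization figure, and headroom value are made-up placeholders you would replace with your own benchmark data.

```python
import math

def servers_needed(legacy_rating, peak_utilization, target_rating, headroom=0.30):
    """Estimate how many target x86 servers cover a legacy system's peak load.

    legacy_rating / target_rating: relative throughput scores from whatever
    benchmark correlates with your workload; peak_utilization: measured peak
    fraction (0..1) on the legacy system; headroom: spare capacity reserved
    on each new server.
    """
    demand = legacy_rating * peak_utilization   # work the old box actually does
    usable = target_rating * (1.0 - headroom)   # usable capacity of one new server
    return max(1, math.ceil(demand / usable))

# Made-up example: a legacy box rated 400 running at 85% peak, replaced by
# servers rated 1200 each, keeping 30% headroom in reserve.
print(servers_needed(400, 0.85, 1200))  # → 1
```

The real work, of course, is picking a benchmark that actually reflects your workload’s I/O, memory, and CPU profile before plugging in the numbers.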


Sound familiar? Yeah, for me too. My name is Wally Pereira, and I’ve been around microcomputers and Intel-based servers for thirty years. I’ve made presentations at Oracle Open World and to Intel customers across the US.


My first migration from RISC architecture to an Intel-based server was in 1992, when I was part of a team tasked with moving a manufacturing system from Sun to a Sequent server running on Intel 486 processors. (Mission-critical applications on 486 processors, you bet!) The target application was Oracle’s Manufacturing Application suite, and I was the ETL expert, getting the inventory to reconcile between the systems to a $0.15 discrepancy. I still can’t figure out where that 15¢ went. But I was hooked. I worked for Sequent Computer Systems, where I leveraged my experience with Oracle’s new Financial and Manufacturing Application suites to assist Sequent customers who wanted to move from expensive RISC systems to Intel-based Sequent hardware.


Since 1992 I’ve worked for Sequent, IBM, and Intel Corporation, mostly as a captive consultant, helping our customers utilize the latest hardware and get the best performance from it. This has entailed a considerable number of migrations from older legacy platforms. I’ve developed quite an arsenal of tools to accomplish these migrations, and this blog will share a lot of that experience with you. I’ve given talks at Oracle Open World in 2005 and 2010 on these topics. Maybe you saw me at Open World 2010?


I hope you’ll find my posts useful, and I invite you to contribute by asking questions and submitting your own experiences, in the hope that sharing your knowledge and experience will help us all. For instance, in future posts I’ll cover the practical issues faced when deploying the latest Xeon EX processors and the changes to expect when Intel releases new generations of Xeons, along with the issues listed above.

I had a great time at the Green Grid Technical Forum this week and I wanted to share some of the highlights. In the spirit of the “did you know?” theme of the event:





In the keynote by Skip Laitner on the first day of the meeting, I learned about the role efficiency plays in economic growth. According to his statistics, did you know that approximately 75% of the demand for new energy since 1970 has been met through technology efficiency?


In the keynote on the public day, Rob Atkinson of ITIF explained that efficiency alone will not get us where we need to be in energy supply. Rob made a strong argument that government focus on regulating energy use in data centers is misguided. Although IT equipment accounts for about 2% of world energy use, the world needs more IT, not less of it. Of course we need to make IT equipment into more efficient GreenIT equipment, but Rob went through several examples where the innovation of efficient IT has reduced energy consumption in the "other 98%." We cannot rely on policymakers to come up with a solution. We need innovation to “de-carbonize” the world’s energy system and deliver the 84% efficiency needed to meet future demand.


Dean Nelson of eBay told us in an unpublished panel discussion that the important performance metric of the future is not ops per second; it’s now transactions per watt, listings per watt, and dollars per watt. To keep pace with a business that doubles every 18-24 months, eBay had to break the relationship between growth and cost. They did it through some very unconventional (but solidly financial) thinking, including refreshing servers every two years. In three years eBay has reduced its watts per listing by ~70%! This kind of thinking has helped eBay avoid the significant cost of building new data centers. In the Data Center Pulse top 10 we also learned about the need to move from availability to resiliency in the data center.


John Tuccillo, the Green Grid President and Chairman of the Board, and Mark Monroe, the Executive Director of the Green Grid, told us about the State of the Green Grid and the new mission “To become the global authority on resource efficient data centers and business computing ecosystems.” With the release of the WUE and CUE metrics, the Green Grid is well on its way.


Harqs Singh presented the data center maturity model, and John Haas, also newly named as the Technical Committee Chair, reviewed progress on the data center design guide. These two documents compile a huge breadth of industry knowledge and will set the stage for data centers of the future. A very nice paper presenting the differences between cooling architectures in the data center was also presented.


I attended many other interesting sessions and had some fascinating hallway conversations with both colleagues in the industry and some analysts. Did you know that the vision of sustainable business in France includes human rights conditions of workers? Did you know that an Energy Checker SDK to monitor the efficiency of software has been developed and is available for use, and that DCeP productivity proxy work is beginning? Did you know that the first EPA Energy Star certification based on PUE was delivered in June 2010? Did you know that the Green Grid is looking at developing a “siting guide” that will pull from databases to help data center operators choose data center locations against PUE, WUE, and CUE criteria? Here is a link to all the Technical Forum content.


Overall the event was extraordinary. As John Tuccillo stated, it's very exciting to imagine what the event will become by next year!

For many small business owners, “security” means locking up their front doors and setting their alarm system before going home for the night. But to keep your small business secure, it’s vital to protect your data, not just your physical assets.



No small business owner wants to tell her customers that she “lost” their credit card numbers or other personal information. Quite simply, security disruptions cause lost data, which causes lost trust. And in this competitive marketplace, you don’t want to give your customers any reason to take their business elsewhere.


So, have I scared you enough already? The good news is that there are some easy and inexpensive ways to make sure your valuable data is secure and protected. Of course, we all know that it’s important to have up-to-date virus protection on your laptop or desktop system. But how do you best protect the data on your server?


Luckily, entry-level servers based on the new Intel® Xeon® E3 family of processors, available in the next 60 days, have several built-in features that keep your data safe and secure. Intel® Active Management Technology helps you update anti-virus software across your entire network – even if you’re working remotely. Intel® Advanced Encryption Standard New Instructions (AES-NI) enable fast and secure data encryption and decryption for increased security that won’t impact your system’s performance.
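As one concrete illustration, on Linux you can check whether a processor advertises the AES-NI instructions by looking for the aes flag in /proc/cpuinfo. This is a minimal sketch that simply parses a flags line; the sample text is a trimmed, invented excerpt, not real cpuinfo output.

```python
def has_aes_ni(cpuinfo_text):
    """Return True if any 'flags' line in the cpuinfo text lists 'aes'."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "aes" in line.split(":", 1)[1].split()
    return False

# Invented cpuinfo excerpt for illustration; on a real Linux system you
# would pass open("/proc/cpuinfo").read() instead.
sample = "model name\t: Intel(R) Xeon(R) CPU\nflags\t\t: fpu sse2 aes avx"
print(has_aes_ni(sample))  # → True
```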


And of course, it’s just as important to backup your data as it is to keep your data from being stolen. All Intel® Xeon® processor E3-based servers support ECC memory, which corrects the vast majority of memory errors to keep your system online and your data intact. And they include Intel® Rapid Storage Technology, which protects your business from a hard drive failure by enabling you to seamlessly backup and restore data on an additional drive.



In short, Intel® Xeon® processor E3-based servers are a smart investment that helps keep your business up and running, 24x7. And they are priced affordably, so that businesses of all sizes can afford the latest and greatest server security solutions from Intel.


Have any security or data loss horror stories to share? We’d love to hear them! Here’s one that recently found its way to us via our IT Tuneup Contest - watch if you dare!


I got to spend the day yesterday chatting with some of the leaders in cloud computing as they showcased their reference architectures (the recipes for building cloud computing solutions I wrote about yesterday) up and running in our Cloud Builders lab. The showcase was part of our Day in the Cloud event, a day featuring our Cloud Builders partners sharing their technology solutions with select editors and bloggers to highlight how the industry is working together to tackle some of the toughest technical challenges facing deployment of solutions in both enterprise and public cloud environments. The reasons for such a day are vast but based on a simple premise: to truly understand the power of a reference architecture, you need to see it live. It’s kind of like reading a recipe vs. tasting the end result. Based on our guests’ interest in the reference architecture walkthroughs, I think this premise was correct.


For those of you who weren’t lucky enough to participate yesterday, here’s an initial recap of some of the reference architectures that were featured. I will be recapping additional reference architectures over the next few days. Click the titles to learn more at the Intel Cloud Builders site.


Gproxy: Client Aware Computing


One of the most distinctive and compelling usage models we saw yesterday was brought by e-commerce software leader Gproxy. Gproxy and the Intel Cloud Builders team highlighted the concept of client-aware computing…in other words, an optimized user experience based on the unique nature of the client requesting information from the cloud. The reference architecture demonstration featured two PCs, one an old Centrino machine and one a brand new Intel Core machine loaded up with all the bells and whistles. In the demonstration, Gproxy showed us how their solution utilized APIs supplied by Intel to “score” machine capabilities and send optimized content based on network, processor, and graphics capability, among other factors. The result was a rich 3D and video experience for the new PC, and a flat, simpler experience for the old PC. When you extend the notion of client to all of the myriad devices we expect to connect to the cloud in a few years, you can see how this is a powerful concept. And it seems ZDNet agrees.
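To make the idea concrete, here is a toy sketch of client-aware scoring. The capability fields, weights, and thresholds are entirely invented for illustration; Gproxy’s actual APIs and scoring model are not described in this post.

```python
def score_client(caps):
    """Combine device capabilities into a single score, capped at 100."""
    weights = {"cpu_ghz": 10, "gpu_tier": 20, "bandwidth_mbps": 2}
    score = sum(caps.get(name, 0) * w for name, w in weights.items())
    return min(100, score)

def content_tier(caps):
    """Map the capability score to a content experience tier."""
    s = score_client(caps)
    if s >= 70:
        return "rich-3d"   # e.g. the new Core machine: full 3D/video experience
    elif s >= 40:
        return "standard"
    return "basic"         # e.g. the old Centrino machine: flat, simpler pages

# Two hypothetical clients, loosely modeled on the demo's two PCs.
old_pc = {"cpu_ghz": 1.6, "gpu_tier": 0, "bandwidth_mbps": 5}
new_pc = {"cpu_ghz": 3.4, "gpu_tier": 2, "bandwidth_mbps": 50}
print(content_tier(old_pc), content_tier(new_pc))  # → basic rich-3d
```

The point is simply that the server decides what to send based on who is asking; the richer the client, the richer the payload.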


Parallels & Microsoft: Trusted Computing via Parallels Automation for Cloud Infrastructure featuring Microsoft Hyper-V Cloud


This interesting demonstration of a reference architecture featuring Parallels trusted cloud technology and Microsoft Hyper-V highlighted the unique challenges of ensuring secure delivery of data between enterprise and public cloud environments. Utilizing Intel’s TXT environment to deliver hardware-enabled data encryption, the demonstration highlighted how IT managers could migrate workloads in a heterogeneous environment while ensuring that compliance policies were maintained regardless of data location. With security top of mind for IT when it comes to cloud adoption, it’s easy to see why this technology is critical to a complete cloud environment.


Dell & VMware: Efficient Cloud Implementation Through Optimized, Policy-Driven Power Management


One critical aspect of cloud deployments is efficiency. Efficiency keeps data center costs low, and whether you’re managing your internal electrical bill or paying for your service provider’s data center costs, improved efficiency helps the bottom line. In this reference architecture, VMware’s vSphere technology takes advantage of Dell’s C2100, C6100, and C1100 server platforms featuring Intel’s Intelligent Node Manager technology to enable IT to control power delivery to servers based on workload requirements…in real time. This tight instrumentation of power delivery enables acute control of power costs based on what is required, enabling IT managers to set power policies for their data centers that drive down costs.
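The policy idea can be sketched in a few lines. This is a toy model of a power-capping policy loop, not the actual Node Manager interface; the wattage thresholds, utilization cutoffs, and step size are all invented for illustration.

```python
def next_power_cap(current_cap_w, utilization, min_cap_w=120, max_cap_w=350, step_w=10):
    """Raise a node's power cap when it runs hot on work, lower it when idle."""
    if utilization > 0.80 and current_cap_w < max_cap_w:
        return min(max_cap_w, current_cap_w + step_w)   # give busy nodes headroom
    if utilization < 0.30 and current_cap_w > min_cap_w:
        return max(min_cap_w, current_cap_w - step_w)   # reclaim power from idle nodes
    return current_cap_w                                # steady state: leave cap alone

# Simulated readings for one node: busy, busy, idle, moderate.
cap = 200
for util in (0.9, 0.9, 0.2, 0.5):
    cap = next_power_cap(cap, util)
print(cap)  # → 210
```

A real deployment would read utilization from management instrumentation and push the cap back to the hardware, but the shape of the policy is the same: power follows workload.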


I’ll be back next week with more highlights from Intel’s Day in the Cloud.  In the meantime, check out my new Conversations in the Cloud podcast to learn more.

So we're in Las Vegas for the Autodesk One Team Conference.  I had great conversations with lots of people in this industry.  The conversations with Autodesk folks about their vision of "Suites" (with every possible pun on that word used).. were really rewarding to hear.  Intel workstations make the suite experience.. satisfying.. we demonstrated some technology previews of our HD Graphics P3000 capability.  Our team showed a great demo of AutoCAD and Inventor running on multiple screens using a next generation Xeon-based entry-level workstation.  It looked great.. the team here was rather proud.


We also discussed visions for cloud computing with people.  People see a future for cloud computing in technical compute, though how to get there remains an ongoing issue for the industry.


In what seems like a digression.. but isn't... I had sushi at a restaurant here where the chef tried to mix Peruvian and Japanese concepts.. The original intent of each dish was lost.. and what we got was muddled and unpleasant... by the well-intended mixing of ideas.... combination and editing.. doesn't always return a positive result.


So I went to see a great play in Portland called "Futura" about a future of cloud computing gone very wrong.  In this vision, in the future... not only has paper been eliminated (along with pencils and pens and all books.. gone are the paper editions of Tolstoy and Twain).. but the control of content... has been centralized.  The "corporation," in the spirit of political correctness, edits the great works of literary history (as recently happened to Huck Finn) to match the whims and will of its leaders.  Some great works simply become unavailable.


The Intel Cloud 2015 vision of federated, automated, and secure cloud computing is great.. but fidelity needs to show up someplace.  In a world of blogs and mashups.. our nuevo content chefs enable content to be seen and modified.. leaving open the possibility that the original Huck Finn will be lost to us...  maybe not a big loss perhaps..  but certainly a moment to give us pause.



Is that a bad taste in my mouth?

Do you remember the day when you first became aware that everywhere you went, people were talking about the Internet?  Every start-up was telling you how the Internet was going to change the world, everyone you knew was talking about how they were using the web, and every bit of marketing flowing from the tech industry had to feature an Internet angle.  It’s like someone had turned a switch connecting people in a global zeitgeist that said if you wanted to move forward in life you needed to travel onto the net.


A similar phenomenon has struck the tech world for the past couple of years with the word cloud.  If you’re a computing industry player you know your products have to have a cloud angle, if you’re an IT manager you know you’ve got to have a plan to use cloud, and if you’re a user you might have even heard about this thing called cloud.  You may not agree with anyone else on the definition of cloud, but you know it’s important and you know you can’t get left behind.  Which brings me to a comment a friend of mine made back during the boom: “at some point people have to stop talking about this thing and really start delivering it.”  Wise words that plenty of companies should have listened to.


Over the past six months or so, the vaporous nature of cloud has been dissipating: customers are beginning to implement solutions and take advantage of public cloud resources for targeted workloads, and technology has come to market that offers real, differentiated innovation.  Today at Intel is all about that delivery…our event called “Day in the Cloud” has brought together some of the leaders of cloud computing to show what we techies at Intel call reference architectures: proven technology solutions that address some of the critical user requirements for cloud deployments...and all based on the industry standard solutions that our collective customers desire.  Our Intel Cloud Builders lab is opening up to show these reference architectures to some leading computing editors and bloggers to highlight the incredible progress the industry has made in shifting from talk into action…into delivery of real solutions ready for customer deployment in both public and enterprise cloud environments.


The concept is simple.  A reference architecture is designed by a team of engineers to be a step-by-step recipe for getting a solution up and running.  It lists the ingredients (detailed lists of hardware and software that comprise the solution) and details the process of building the solution (think scripts, BIOS settings, etc.…kind of like Julia Child for the cloud).  With hardware vendors, software vendors, and Intel engineers working together to develop and test each reference architecture, the end result is something that IT managers can safely use to stand up their own cloud solutions in test beds and production environments.  And interestingly enough, along the way this collaboration leads to additional insight for the engineering teams that helps improve the quality of the overall solution.


The Intel Cloud Builders program has delivered 25 of these reference architecture “recipes,” but today was the first day we actually showed a number of them (eight to be specific) up and running to a public audience.  These are available to folks who are getting their feet wet with these new technologies or for folks who have already begun deploying cloud solutions and are looking to improve critical capabilities like end-to-end security or on-boarding from the enterprise to the public cloud.  With ecosystem partners including Cisco, Citrix, Dell, EMC, Gproxy, HyTrust, Microsoft, NetApp, Parallels, and VMware on hand to showcase their cloud solutions, the picture became clear that the cloud is swiftly moving from a bunch of hot air to smooth sailing solutions.  Check out our Day in the Cloud web page to keep pace with the day’s events and see what our guests have to say about the industry innovation on display today.



If you want to hear more from these industry experts and other cloud computing leaders check out my new podcast program: Conversations in the Cloud


To meet the growing demand for hosted desktop services at a competitive price, the Netherlands’ IS Enterprise recently expanded its IS HyperGrid* cloud infrastructure platform based on an Intel® Cloud Builders reference architecture and powered by the Intel® Xeon® processor E5560.  By creating virtual containers on the end-user’s computer, the IS Enterprise hosted desktop service enables customers to maintain security while also benefitting from the scalability, flexibility, and predictable costs associated with a cloud service. It's now evaluating the additional benefits of upgrading to the Intel Xeon processor 5600 series.

“By comparing the specifications of the Intel hardware against competitors’ offerings, it was obvious the Intel Xeon processor E5560 delivered superior performance for the cloud,” said Mike Janssen, technical director for IS Enterprise.

For all the details, download our new IS Enterprise business success story. As always, you can find many more like this on the Business Success Stories for IT Managers page. And for instant updates on the latest business success stories, follow ReferenceRoom on Twitter.


*Other names and brands may be claimed as the property of others.
