
The Data Stack

9 Posts authored by: Shannon Poulin

The science-fiction writer William Gibson once observed, “The future is already here — it’s just not very evenly distributed.” The same could be said of today’s data centers.


On one hand, we have amazing new data centers being built by cloud service providers and the powerhouses of search, ecommerce and social media. These hyperscale data center operators are poised to deploy new services in minutes and quickly scale up to handle enormous compute demands. They are living in the future.


And then on the other hand, we have enterprises that are living with the data center architectures of an earlier era, a time when every application required its own dedicated stack of manually provisioned resources. These traditional enterprise data centers were built with a focus on stability rather than agility, scalability and efficiency—the things that drive cloud data centers.


Today, the weaknesses of legacy approaches are a growing source of pain for enterprises. While cloud providers enjoy the benefits that come with pooled and shared resources, traditional enterprises wrestle with siloed architectures that are resistant to change.


But there’s good news on the horizon. Today, advances in data center technologies and the rise of more standardized cloud services are allowing enterprise IT organizations to move toward a more agile future based on software-defined infrastructure (SDI) and hybrid clouds.


With SDI and the hybrid cloud approach, enterprise IT can now be managed independently of where the physical hardware resides. This fundamental transformation of the data center will enable enterprises to achieve the on-demand agility and operational efficiencies that have long belonged to large cloud service providers.


At Intel, we are working actively to deliver the technologies that will allow data centers to move seamlessly into the era of SDI and hybrid clouds. Here’s one example: The new Intel® Xeon® Processor E5 v3 family exposes a wide range of information on hardware attributes—such as security, power, thermals, trust and utilization—to the orchestration layer. With access to this information, the orchestration engine can make informed decisions on the best placement for workloads within a software-defined or cloud environment.
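The placement decision described above can be sketched in a few lines. This is a simplified illustration, not the actual interface the processor or any orchestration engine exposes; the host names, attribute fields and thresholds are all made up for the example.

```python
# Hypothetical per-host telemetry snapshot. Field names are illustrative,
# not the actual attributes exposed by the processor or orchestration layer.
hosts = {
    "host-a": {"utilization": 0.85, "temp_c": 78, "trusted_boot": True},
    "host-b": {"utilization": 0.40, "temp_c": 62, "trusted_boot": True},
    "host-c": {"utilization": 0.20, "temp_c": 55, "trusted_boot": False},
}

def place_workload(hosts, require_trust=True, max_util=0.75):
    """Pick the least-utilized host that satisfies the workload's constraints."""
    candidates = [
        (name, t) for name, t in hosts.items()
        if (not require_trust or t["trusted_boot"]) and t["utilization"] <= max_util
    ]
    if not candidates:
        return None
    # Prefer low utilization, then low temperature, for the placement.
    return min(candidates, key=lambda c: (c[1]["utilization"], c[1]["temp_c"]))[0]

print(place_workload(hosts))  # host-b: trusted and under the utilization cap
```

A real orchestration engine would weigh many more attributes, but the shape of the decision — filter on hard constraints, then rank on soft ones — is the same.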


And here’s another of many potential examples: The new Intel Xeon processors incorporate a Cache QoS Monitoring feature. This innovation helps system administrators gain the utilization insights they need to ward off resource-contention issues in cloud environments. Specifically, Cache QoS Monitoring identifies “noisy neighbors,” or virtual machines that consume a large amount of the shared resources within a system and cause the performance of other VMs to suffer.
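The "noisy neighbor" idea reduces to a simple share calculation over per-VM cache occupancy samples. The sketch below uses made-up numbers, not real Cache QoS Monitoring output, and a deliberately simple threshold rule.

```python
# Illustrative per-VM last-level-cache occupancy samples in KB.
# Values are invented for the example, not real monitoring output.
occupancy_kb = {"vm1": 512, "vm2": 18432, "vm3": 1024, "vm4": 2048}

def noisy_neighbors(samples, share_threshold=0.5):
    """Flag VMs consuming more than share_threshold of total observed cache."""
    total = sum(samples.values())
    return [vm for vm, kb in samples.items() if kb / total > share_threshold]

print(noisy_neighbors(occupancy_kb))  # ['vm2'] dominates the shared cache
```

With that list in hand, an administrator or scheduler can cap, migrate or isolate the offending VM before its neighbors' performance degrades further.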


And that’s just the start. If space allowed, we could walk through a long list of examples of Intel technologies that are helping enterprise IT organizations move toward software-defined data centers and take advantage of hybrid cloud approaches.


This transformation, of course, takes more than new technologies. Bringing SDI and hybrid clouds to the enterprise requires extensive collaboration among technology vendors, cloud service providers and enterprises. With that thought in mind, Intel is working to enable a broad set of ecosystem players, both commercial and open source, to make the SDI vision real.


One of the key mechanisms for bringing this vast ecosystem together is the Open Data Center Alliance (ODCA), which is working to shape the future of cloud computing around open, interoperable standards. With more than 300 member companies spanning multiple continents and industries, the ODCA is uniquely positioned to drive the shift to SDI and seamless, secure cloud computing. There is no equivalent organization on the planet that can offer the value and engagement opportunity of ODCA.


Intel has been a part of the ODCA from the beginning. As an ODCA technology advisor, we gathered valuable inputs from the ecosystem regarding challenges, usage models and value propositions. And now we are pleased to move from an advisory role to that of a board member. In this new role, we will continue to work actively to advance the ODCA vision.


Our work with the ecosystem doesn’t stop there. Among other efforts, we’re collaborating on the development of Redfish, a specification for data center and systems management that delivers comprehensive functionality, scalability and security. The Redfish effort is focused on driving interoperability across multiple server environments and simplifying management, to allow administrators to speak one language and be more productive.
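To make the "one language" point concrete: Redfish models a server as a tree of JSON resources linked by `@odata.id` paths under `/redfish/v1/`. The payload below is a simplified, illustrative sample — real responses carry fuller OData metadata and many more properties — but the traversal pattern is the same on any conforming server.

```python
import json

# A simplified, illustrative Redfish-style collection payload.
systems_response = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems",
  "Members": [
    {"@odata.id": "/redfish/v1/Systems/1"},
    {"@odata.id": "/redfish/v1/Systems/2"}
  ]
}
""")

def member_paths(collection):
    """Extract the resource paths from a Redfish collection response."""
    return [m["@odata.id"] for m in collection["Members"]]

# A management tool would GET each path over HTTPS to read power, thermal,
# or health state in the same schema regardless of server vendor.
print(member_paths(systems_response))
```

Because every vendor exposes the same resource model, a script written against one server's Redfish service works unchanged against another's — which is exactly the interoperability the effort is after.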


Efforts like this push us ever closer to next-generation data centers — and a future that is more evenly distributed.



For more follow me @PoulinPDX on Twitter.

The role of Intel architecture within the data center has grown tremendously since the introduction of the first IA server processor, the Pentium Pro, in 1995. Within the last two decades our data center product lineup has grown from two processor configurations to over a hundred iterations, addressing an equally expansive array of data center workloads. Almost as rapidly as we can develop new processor variants, creative developers ask for another special optimization for their workload. The move to mobility, and the massive apps-ecosystem mentality it has spawned, has caused companies to rethink how they deliver services and thus how they construct their data centers to meet ever-growing demands. Our mainstream Xeon processor line is fine-tuned for what we believe is >90% of data center workloads, with special attention paid to compute, storage and network requirements, all delivered with a common instruction set architecture. This foundation has enabled incredible industry innovation that has in turn enabled the next generation of supercomputing, the birth of the cloud, and the transformation of enterprise IT from supporting the business to becoming a strategic business asset.


In recent years we've identified adjacent market opportunities for data center workloads and have enhanced our product portfolio accordingly. The introduction of the Xeon Phi and Atom product lines gives us the ability to meet emerging needs in highly parallel technical computing and extreme low-power workloads, respectively. This reflects our goal of delivering the best silicon products available for every data center workload that customers require.


With advancements in silicon manufacturing and design, our ability to augment our roadmap of product offerings and respond nimbly to market changes has grown. With this design capability we're better able to deliver products tailored to unique workload requirements when we see a potential improvement in solution capability that merits a unique design. We have over a hundred projects underway to deliver specialized SKUs today, most notably projects we've discussed with customers such as Facebook and eBay.


We have taken this approach even further with the planned delivery of an FPGA integrated into the same package as the Intel® Xeon® processor. The FPGA can be programmed to accelerate specific functions (e.g. video codecs or search algorithms) or re-programmed as workload patterns change. Based on our internal estimates, we believe packaging the FPGA with Intel Xeon processors using both coherent and non-coherent links will allow us to deliver more than 10x the performance efficiency of discrete FPGAs. We see the FPGA integrated with Xeon as a major step forward in delivering customer value, initially targeted at hyperscale and telco environments. While we don’t see FPGA-based solutions as optimal for broad data center requirements in the near term, we see this as an excellent opportunity for a unique set of customers…and that’s pretty much what our customization efforts are all about.

My grandfather was born in the early 1900s. By all accounts he was a hardworking man with a strong degree of curiosity. He passed away in his late 80s, and before he died I remember talking to him about my pursuit of an Electrical Engineering degree. He nodded politely, asked a few questions, and when I helped to fix the electrical outlet in his garage I got the sense that he thought I was heading down the path to be an electrician. I believe that thought pleased him. Several years ago I was explaining to my five-year-old daughter in layman’s terms what I did for a living and what my company made. I said things like “We make tiny engines that run computers” or “I work with computers that run websites like Webkinz® and Disney®”. She seemed impressed. Months later, when she was asked by a parent of her friend what her dad did for a living, I was a combination of proud and surprised to hear that she replied “They make chips…” (proud moment) “…and salsa!” (um, OK. I still have work to do).






Now the other day she walked up to me and said something like “Dad, I am having trouble getting the Slingbox to work on mom’s iPod Touch. It is connected to the Internet but the remote does not seem to be changing the channel. Can you help me?” Clearly she has made some progress up the technology curve, and it struck me how far she has come. Kids these days are surrounded by technology. In our house alone there are at least the following electronic devices: oven, microwave, AppleTV, refrigerator, smoke detector (3), carbon monoxide detector, programmable thermostat, furnace, radio, garage door opener (2), wireless speakers, televisions (3), set top boxes (3), ceiling fans with remotes (3), netbook, Slingbox, Clear wireless router, remote outlet, sprinkler control box, iPod Touch, desktop computer, Wii, iPod shuffle (2), alarm clocks (3), oven timer, electronic light dimmer, cordless phones (4), AV receiver, DVD players (3), VCR, iPod docking station, security system, motion sensor, camcorder, camera (2), USB hub, music keyboard, AV switch, computer keyboard, battery chargers (4), Wii remotes (4), Wii Fit Pad, Wii drums, copier/fax/scanner, computer monitor, AC, power supplies (4), RFID credit cards (2), washer, dryer, noise canceling headphones, answering machine, internet modem, cell phones (2), handheld GPS, auto GPS and electronic battleship.






I am sure I have forgotten several things, and I did not count cars or anything at my children’s school. I am also sure each of the electronic devices in our house has either a processor, microcontroller, ASIC or multiples of each. Admittedly, the silicon content in our house is probably above average given where I work and the personalities my wife and I have. But when I think back to my grandfather, he had none of these silicon-laden items. I am sure he didn’t care, since it is hard to miss something you never knew. Of the hundreds of pieces of silicon in our house, about a dozen or so are smart enough to connect to each other or to “the cloud” in some way. I put “the cloud” in quotes because it is not only the most over-hyped word of its time, it is also the best way to articulate what I suspect my children and many others think of the services they get when all of this stuff gets connected.






I can safely say two things are fact. First, my grandchildren will have in their house many more pieces of silicon than I do. Second, they will have more pieces of silicon that can connect to each other and communicate with “the cloud”. There are many billions of devices connected to the Internet today, and that number will grow. At Intel we are building silicon, and increasingly software assets, that facilitate the processing and movement of data both on those devices and between them. Servers are increasingly becoming an important part of that over-hyped cloud word. My cable company has a cloud delivering my on-demand video content; a social media site allows me to upload pictures into its cloud to share with my friends; someone just used a cloud architecture to develop a perpetual motion machine. OK, one of those things was false.






My grandfather thought a cloud was something in the sky.  My children think it streams video to their handheld device.  What will our great-grandchildren think?

OK, so we launched the Xeon 5500 processor-based servers and workstations a couple of weeks ago. While I don’t have direct quotes of support from Brit, Miley, Susan or any country presidents who have signed economic stimulus into law, I am pretty confident that if they were ever actually considering purchasing a server or workstation, they would conclude that the new Xeon 5500 platforms would be their best choice.


I had the privilege of being at one of the thirty-seven worldwide Xeon 5500 launch events. I was on Wall Street and attended the NASDAQ launch event on March 31st. Depending on which data source estimate you look at, financial services as a whole represents about 20% of the worldwide market for servers. It was also evident when meeting with customers in the NYC area that they are passionate about performance and power consumption. Most of them had received pre-production seed systems and had already done extensive testing prior to this launch event. I have been in Intel’s Server Platform Group for over a decade now, and I have never seen so much enthusiasm for a product launch.


I won’t rehash the performance benchmarks and performance-per-watt data. There are many benchmarks, blogs and press articles doing that. What I took away from the conversations was a feeling of optimism from the end users I spoke to. Some felt that these new products would be what it takes for them to deliver solutions that would give them a performance advantage over their competition. In few markets does that pay off more, and translate almost directly to the bottom line, than in financial services. Others felt that these systems would help them continue to add to their existing datacenters without the need to build a new one, thanks to the performance-per-watt improvements and the ability to replace many old servers and workstations with a few new ones.
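The replace-many-with-few arithmetic is worth making explicit. The numbers below — server counts, a performance ratio, and per-box wattages — are invented for illustration, not measured figures for any platform.

```python
import math

def consolidation(old_servers, perf_ratio, old_watts, new_watts):
    """How many new servers cover the old capacity, and the power saved.

    perf_ratio is the (assumed) throughput of one new server expressed
    in units of one old server.
    """
    new_servers = math.ceil(old_servers / perf_ratio)
    saved_watts = old_servers * old_watts - new_servers * new_watts
    return new_servers, saved_watts

# e.g. 90 legacy servers, each new box doing the work of ~6 old ones:
print(consolidation(90, 6, 400, 350))  # (15, 30750)
```

Even with modest assumptions, the freed rack space and power budget is what lets a datacenter keep growing without breaking ground on a new one.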


Lastly, human nature being what it is, IT professionals want to work on cool new projects. These Xeon 5500 servers and workstations represent a shiny new toy that IT professionals can use to have a material impact on the bottom lines of their companies. To some degree the same applies to virtualization, in that it is disruptive, provides a new cost-effective way to deliver legacy solutions and enables flexibility for future growth. The IT folks I have met who familiarize themselves with virtualization, new hardware and advanced management techniques (power, systems, virtualization) are generally viewed within their companies as leaders with visionary capabilities.


As we all work through this economic morass I am hopeful that with new technology introductions, and a relentless focus on efficiency, we will all emerge with a greater level of capability and a higher degree of flexibility. I also believe IT will emerge as a key asset of differentiation for companies from Wall Street to Main Street and this will place an even greater burden on delivering solutions to meet those unique needs.


What do you think?



Having bounced from Engineering to Sales to Marketing in my career, I have found some unique interactions between those organizations along the way. But I have recently come across something for the first time that seems particularly noteworthy. Many of the internal discussions I am having about our upcoming products are largely devoid of the usual marketing fluff. You could argue that this blog and my previous statement are themselves marketing, but oh well. I am also not saying that I no longer visit end users who are having trouble picking out a server topology or an infrastructure to virtualize on, or who face datacenter challenges or power constraints, and provide them with advanced product info. All of that still happens regularly, and I expect it will continue for a long time. Rather, I am referring to the solutions we are starting to propose for those problems.


I am sure everyone in marketing can remember some product they were responsible for that kept them up nights. The feature set wasn’t quite right, the price was out of whack, competition was breathing down their necks or competition was the incumbent in a certain area. Those are tough days, and you only hope that the future products in the hopper are leadership products and that there is balance to your present-day effort. For a while I have seen segments where products are “unmarketable”. You can pretty much leave the marketing guys at the door when you walk into a high performance computing account, financial services account or Internet portal datacenter. They want hardware, and you can take your PowerPoint slides and “shove them $#@^%.” That may be a direct quote :)


Still, those were certain segments. They did their own benchmarking and made their decisions based on the exact workloads and configurations they run. Many enterprises, datacenters and small/medium businesses rely on third-party data, benchmarks or word of mouth to make their purchase decisions. We have been talking to them under non-disclosure lately about our next-generation Nehalem-based products, and the responses have been rather unique. In short, Nehalem appears to be “unmarketable”. I find myself pretty much trying not to mess things up when talking about the product. There have been some early public discussions about the performance, and the message boards seem to be taking a keen interest in how the platform looks. The launch will happen later in Q1, and I for one am looking forward to seeing what exciting new things companies will do with these products.

Shannon Poulin

When to Buy?

Posted by Shannon Poulin Oct 4, 2008

I was on a plane flying somewhere the other day, and I happened to be seated next to someone who ran consumer sales for a large multinational corporation. We had a great conversation about technology and discussed his specific focus on client computing. During the course of the conversation we talked about what computers we carried around, what we had at home and some of the exciting things happening in the mobile space. To keep a long story short, we debated the best time to buy something. One of the dangers of being an Intel employee is you always know there is something great coming right around the corner. It can create paralysis when deciding to buy that next computer for my wife or that next mobile device for one of my two daughters. Buy today and Nehalem is coming tomorrow. Buy tomorrow and 32 nm products are coming soon after. When I apply this thinking to my position in the Server group, I realize that system admins and IT professionals are making the same sorts of decisions every day. The difference is that their penalties for waiting are much more severe. They could lose profit, lose share or put their existence in jeopardy if they decide to wait and fall behind their competitors. Likewise, if they are on the leading edge with their technology purchases and cannot extract value from that, they have wasted opportunity cost. Now, if I decide not to buy my wife and kids a new computer, the consequences are severe but not quite visible on the bottom line of a balance sheet. I have also not seen a downside to buying them a new computer ahead of their normal replacement cycle. I'm sure there is a lesson in there somewhere, but I don't have time to dig for it.


When we looked at this phenomenon in the enterprise, we wanted to minimize the risk of being a leading-edge technology adopter. That meant finding a way for our customers to adopt server technology today and extend and blend the use of that technology in the future with their next-generation hardware. One example is what we have done for years with Intel Architecture: the very nature of the instruction sets we develop allows old and new software alike to run on next-generation hardware. As enterprises evolve and virtualization grows in adoption, we developed another feature, called FlexMigration, that allows someone to start virtualization pools with today’s hardware and grow the size of the pool with the next-generation hardware we will be delivering soon. It is amazing the positive feedback we have received for a feature that in essence isn’t about a performance enhancement (Moore's Law) but is rather about giving customers better investment protection. Look for more of these types of advancements from Intel in the future, because while we recognize the need for absolute performance leadership in all segments, we also know there are features just as important to an IT professional when it comes to the bottom line.

I have visited a number of customers recently. The discussions are usually straightforward: I provide them with a download of our current products, I tell them about things we are doing in the future, and along the way I ask them some questions about trends they are seeing in their businesses. It will come as no surprise that enterprises are trying to keep up with their current requirements while also squeezing flat or dwindling budgets to do something new. Many are turning to virtualization as a way to do more.


So who cares? CFOs care. I went out to visit a leading Fortune 500 company based on the West Coast of the US. Keep in mind I was planning to discuss our server platforms, why I believe they lead on performance and power, and all of the great new virtualization features we have recently introduced or will introduce in the future. Before we got started, they proudly walked me through their new datacenter, and I stopped in front of a rack that had two servers in it. Two 2U two-processor servers. It was right next to another rack that had four servers in it. I inquired as to why both racks were only partially full and was told that one is owned by Finance, one is owned by a business unit, and IT just manages them. You can look at this two ways. The glass-half-empty way would be that they are wasting an incredible amount of datacenter space and they are hopeless. The glass-half-full way would be that this is a great opportunity to really deliver value to this company's bottom line: first by convincing them that physical consolidation (filling up their racks) is important, then by showing them a path toward application consolidation, and finally by sharing a vision of datacenter virtualization that includes compute, storage and networking. Their CFO will care.


IT employees care. One theme that comes through loud and clear is that people who drive some form of virtualization are usually considered innovators or leading-edge thinkers within their company. I have heard the term "IT hero" used for someone who has delivered on a high-ROI project, usually these days through the use of virtualization. I have met a number of IT folks at conferences and during visits, and it is uncanny how many are digging for more product information and how eager they are to hear about the new features we're putting into CPUs, chipsets and networking devices. A quick search of YouTube found this case study (here) that sums up the sorts of things I have heard.


It is also increasingly important that all of this works well with the software, VMM and OS vendors' product offerings. We are working closely with all of the ecosystem players, because if we came out with an amazing new feature in our components, it would be wasted if the VMM, OS or software didn't take advantage of it. There is some interesting banter here (here) about some of the pros and cons of virtualization. We are busy working on features that improve the performance and simplify the experience end users have when they virtualize. Why do you care about virtualization? What are you doing today that you couldn't do a year or two ago, made possible by virtualization-related technology?

Every now and then a colleague, customer or acquaintance sends me a link to an article or blog that usually features either our products or those from one of our competitors. More often than not I get a lot of repeat sources (The Register, The Inquirer, CNET, etc…). The blog that comes my way most often is one from George Ou at ZDNet. One of his most recent posts (A comparison of quad-core server CPUs) shows how a bunch of our latest quad-core CPUs stack up against our previous versions as well as those from AMD. I won’t rehash the article here aside from saying it was positive for Intel, and AMD’s issues with their quad-core processors have been well documented.




Is Intel winning now because our products are superior? Are we winning because our competitor is struggling? Do the benchmarks mentioned in George’s blog tell the whole story? As you can imagine, we constantly ask ourselves these questions and many more internally. Our conclusion is that for processors and server platforms, as long as we provide leadership along several key vectors, our market share and overall market position will improve.



Manufacturing process, processor architecture, system architecture, cache size.  These are four critical vectors that we have direct control over when we are making design and enabling decisions.  At times in our past and in the present we have had leadership on all four.  In those times we have won hands down.  There have also been times where a competitor has chosen to focus on one or two vectors and that has led to their products being better for a specific area.  The four vectors above are things that Intel focuses on but we always have to keep an eye on what end user value they deliver. 



Our customers tell us they care about three main things: price, performance and power. The three P’s. George’s blog shows that for one of the P’s (performance) Intel has leadership, particularly on integer and floating point. There are similar examples for database, virtualization and pretty much any performance benchmark we have looked at recently. Thankfully for Intel, performance is the “P” with the strongest correlation to success in the server market from a market-segment-share perspective. We are also doing some amazing things with regard to power. Some have launched already, and some will come with new products in 2008. The market is segmenting, and we now make CPUs, chipsets and networking components that help OEMs build platforms targeted at high performance computing, mainstream enterprise, blades, workstations and emerging markets. Each has unique requirements with respect to the three P’s, and one size no longer fits all.



I believe that overall George’s blog highlights the success that we are having today.  I also think that there will be a steady stream of innovations that will be delivered in 2008 and beyond that will cause us to rethink how we deliver performance at the most efficient power level for the best possible price point.  Virtualization, utility computing and charge back models for datacenter environments are all stepping up to take center stage.  We all must innovate or become irrelevant…technological evolution waits for no one.




Leading up to the launch of our 45nm processors, I was often asked "what does this technology mean to my business?" or "what does it mean to me as a consumer?" My usual responses of improved performance, better performance/watt and better price/performance were all very true. But as I write this, I am challenged to find more depth in that response. The solutions that you, the technology industry, collectively deliver include software, hardware and, luckily for Intel, processors that are now based on 45nm technology. We are on a line sloping up and to the right with respect to being able to deliver more performance over time. But so what? How can we look at single points on that line and reflect on their significance?


There are a number of examples where things start out revolutionary and simply evolve from there: flying, combustion-engine automobile travel, the Internet. One day you walked, wagoned or rode a horse from place to place; the next day you drove. One day you drove; the next day you flew. One day you wrote a letter; the next day an email. All of these had some groundwork leading up to them, for sure, but the new normal existed the day they became ubiquitous. Writing a letter, putting a stamp on it and dropping it in a mailbox is now a lost art that we teach kids while we also explain to them what cassette tapes, rabbit ears and wired Ethernet are.


When was there enough performance, at low enough power and at a low enough price point, for me to buy a handheld global positioning system unit that I can use to go geocaching with my kids? Clearly it wasn't ten years ago, since I suspect the device may have existed for the military but wasn't quite portable enough for me or at a price point low enough to catch my eye. I am sure everyone can remember the first cell phones, which looked like a car battery with a phone stuck on top. There are countless examples of points on a price/performance/power curve that lead to evolutionary or revolutionary products that change the way people live, work or play.


These new 45nm components are compelling, and surely enterprise customers are going to find that they can run databases faster, develop software quicker and process transactions faster. Financial services companies will use these new products to execute faster trades. That in turn will allow them to win share against slower competitors, and it will show on their bottom line. Oil and gas companies will use these new products to more efficiently search for, locate and model the size of energy reserves. Search companies will use these products to rank pages, target online consumers and drive advertising-based commerce. Those things are evolutionary and allow companies to improve what they are already doing.


What are the revolutionary things that we will look back on and say "without the price/perf/watt that 45nm processors delivered in November 2007 xxx would not be possible?" Are you working on it? The technologies we develop are constantly looking to improve the present while also keeping an eye on the future. They are optimized for you, the developers and consumers, because quite frankly we are fascinated with what you are doing today and very interested in what you are going to do tomorrow with all of the high performing low power products that we are launching this month.


One last thing: if you're working on the next Google-like revolutionary online platform, drop me a note. I might want to alter my investment strategy :)
