Remember your first introduction to databases?  If you design and administer them, it was probably a system to organize your books or music.  For everyone else, it was probably much the same…. We live in a world with an expanding amount of information, and every day big data gets…bigger – exponentially at that.  Now is your chance to ask questions of the people who breathe big data and dream about mission-critical architectures.

 

Chip Chat Live On Air Sign

Source - Flickr User: katielips

 

 

Intel Chip Chat and host Allyson Klein will be broadcasting live from Oracle Open World with business data and intelligence experts from Intel and Oracle on Monday, October 3rd from 1:00 PM to 3:45 PM PDT and on Tuesday, October 4th from 9:00 AM to 11:45 AM PDT.  Who? To start we have Tim Shetler, a VP from Oracle, discussing Exadata, and Steve Shaw from Intel discussing database technologies; the following day we will have our very own resident data center expert on mission-critical technology, Pauline Nist.

 

 

Want to know more?  Follow @IntelChipChat for more details and to ask the experts your questions live at Oracle Open World 2011!

It’s nearly October, and that means another fantastic IDF has come and gone! Wait… how do I know, you ask? Aside from the great attendance in the data center technical sessions that covered everything from cloud computing to HPC, I can tell by all of the great blogs, podcasts, and videos that came out of IDF! We also received great questions from visitors to the Data Center Pavilion on Intel Cloud Builders. Here is your Data Center Download for IDF 2011:

 

Intel Developer Forum 2011 Event Photo

 

In the Blogs we read:

 

Data Center Technology & Intel® Ethernet at the Intel Developer's Forum by Douglas Boom

Below the Surface: My observations from the Road Less Travelled at IDF by Ajay Chandramouly

Open Data Center Usage Models: Solution Providers Respond by Allyson Klein

Exascale High Performance Computing Discussions at IDF 2011 by John Hengeveld

Cloud Computing, Data Center, HPC, and Itanium @ IDF 2011: Briefing from Kirk Skaugen by Cory Klatik

Facebook & Open Data Center Alliance: What’s not to love? Or should I say “Like”? by Raejeanne Skillern

Simplifying the Cloud On-Boarding Process… Securely, a Guest Contribution

Next-Gen Mission-Critical @ IDF: Oracle Exadata + Intel Xeon by Mitchell Shults

Cloud On-Boarding – How to Ease Your Transition to the Cloud: from Citrix & Intel, a Guest Contribution

Data Center Efficiency & Facebook @ IDF11 - Node Manager Innovations by David Jenkins

Mission Critical Solutions at IDF: Preview by Pauline Nist

 

On Intel Chip Chat, we listened to:

 

Cloud Computing & Storage Technology with Reuven Cohen @ IDF2011

Mission Critical Technology & Intel Live From IDF

IDF Livecast: Intel® AppUp Small Business Service Update

IDF Livecast: Storage Industry Update

IDF Livecast: The Architecture of Cloud Computing

IDF Livecast: Parallel Programming

 

On Intel.com and Channel Intel, we watched:

 

Driving towards Cloud 2015 – A Technology Vision to Meet the ... with Stephen S. Pawlowski

Intel Data Center & Cloud Computing Briefing with Kirk Skaugen

Exascale HPC Computing Discussion with John Hengeveld

Open Data Center Alliance Usage Models Panel with ODCA Solution Providers Representatives

Open Data Center Alliance and Open Compute Project Announce Collaboration with ODCA, Intel, and Facebook OCP Representatives

 

In the social stream:

 

There was great online conversation surrounding IDF 2011 that you should see in all of its glory on Twitter.

 

 

What’s next? Oracle Open World, of course! Did you know Allyson Klein will be there recording livecasts and taking your questions? Learn more by following @IntelChipChat

 

To hear more about cloud computing, data center, HPC, and mission critical technologies, remember to follow @IntelXeon and check out The Server Room on Facebook.

brunodom

Capacity Planning for PaaS

Posted by brunodom Sep 29, 2011

As I explained in my last two posts about Capacity Planning for SaaS, if you offer Software as a Service, the capacity plan should be conducted from the application/software layer down to the base hardware layers (i.e., servers, storage, network, data center facilities, etc.). For PaaS, however, the biggest challenge is the distribution of the workload. On top of what may be covered in IaaS, you can also offer databases, application servers, etc. for an unknown number of applications.

 


The hardest part of a PaaS is the design of the database layer. If you have a multi-tenant environment, what kind of isolation should you provide in your PaaS environment?

 

     There are two possible approaches:

 

    1. Use a full virtualization strategy, where the database is installed in a virtual machine.
    2. Use isolation by instances, where the database runs directly on the operating system and relies on the database’s own capability to isolate each instance.

 

For each approach, there are pros and cons, which are summarized below:

 

| Full virtualization | Isolated by instances |
| --- | --- |
| User has complete access to database configurations | User has access only to the tables and permissions associated with the instance |
| Database can be upgraded with no impact on neighbor applications | Upgrade possible only when all associated applications support the new version |
| Compute resources can be directly associated with the user’s database – no direct contention | Compute resources are shared among instances in the same database |
| Highest impact on latency – may be prohibitive for a highly OLTP environment | Lowest impact on latency – recommended for a highly OLTP environment |
| Backup/restore can be handled by VM snapshot | Backup/restore must be applied individually to each instance |
| High availability provided by the hypervisor | High availability provided by database cluster capability |

 

 

From a management standpoint, full virtualization brings a lot of benefits for an administrator. It is much easier to provision, upgrade, back up, and restore in a multi-tenant environment. However, as I explained in a previous post about database virtualization, the penalty for a highly OLTP application can destroy application performance. In some cases, it can even increase the number of locks in the database, especially in databases that are highly normalized.

 

Given the current state of performance for highly OLTP workloads in virtualized environments, it is a good idea for a PaaS to also offer the possibility of running databases isolated by instances, which creates a database grid layer. It can be costly for most applications, such as development, homologation, small OLAP databases, etc. However, so far it is the best approach for highly OLTP applications.

 

As in many other scenarios, there is no “one size fits all” model. But for a PaaS, you should at least deliver a scalable platform that can support a wide range of database requirements; you never know what kind of application will need to be supported in the environment.

 

A good way to identify whether the database is the bottleneck of application performance is to run the application under heavy load, either live in production or with a load generator. See if transactions and CPU consumption scale at the same pace; if the transaction load increases to a point where you still see low CPU usage, it is a good sign that something is blocking the transactions.
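
To make that check concrete, here is a minimal sketch in Python, assuming you have collected paired samples of offered load, transaction rate, and CPU utilization from your load test; the thresholds are illustrative, not a standard.

```python
# Sketch: flag a likely non-CPU bottleneck from load-test samples.
# Assumes you collected (offered_load, transactions_per_sec, cpu_percent)
# tuples at increasing load levels; thresholds are illustrative only.

def find_bottleneck(samples, cpu_headroom=40.0, growth_floor=1.05):
    """samples: list of (offered_load, tps, cpu_pct), ordered by load."""
    for (prev_load, prev_tps, _), (load, tps, cpu) in zip(samples, samples[1:]):
        load_growth = load / prev_load
        tps_growth = tps / prev_tps
        # Throughput stopped scaling with load while CPU is still mostly idle:
        # something other than CPU (often the database) is blocking transactions.
        if load_growth >= growth_floor and tps_growth < growth_floor and cpu < cpu_headroom:
            return f"Suspected external bottleneck at load {load} (tps {tps}, cpu {cpu}%)"
    return "Throughput and CPU scale together: no obvious external bottleneck"

samples = [(100, 950, 22), (200, 1900, 30), (400, 2050, 34), (800, 2100, 36)]
print(find_bottleneck(samples))
```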

 

In most cases, external calls, such as database calls, can be the problem. Start by looking for database optimizations: indexes, contention in queries, recompilation of stored procedures, the way the application makes external calls, etc. If nothing improves, put the database on a dedicated machine and see if that solves the problem. If it does, the PaaS architecture is not well suited to host OLTP applications, and you may run into trouble.

 

Best Regards!

-Bruno Domingues

It has been a while since my first blog post on Oracle database performance, so what better time to follow up again than my next appearance at Oracle Openworld 2011!

 

Where was I in 2010? I was presenting at Linuxcon in Boston. This year, I'm back at Openworld. Although the title is similar, so much has changed.

 

To start off, here are the logistics:

 

Oracle Database Performance and Scalability on Intel Xeon Platforms

Presenters:  Steve Shaw and Eric Wan
ID#: 31701
Track: Database
Date:  3 October 2011
Time: 11:00 - 12:00
Venue: Moscone South
Room: Room 300

 

We'll talk about defining both performance and scalability, and what is desirable. We'll also look at getting the basics right with system and OS settings. Then we'll take a walk through the platform: we'll look at the CPU and see measurements of the impact of features such as Turbo Boost, Hyper-Threading, and 4- and 8-socket scalability, tested against an Oracle database.

 

We will also cover memory performance, including options such as NUMA settings. At the I/O level, we'll look at SSDs. Finally, we will show an approach to holistic performance testing and engineering the system as a whole to work together for optimal performance and scalability.
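
If you want a quick sanity check of your own platform before the session, here is a small, Linux-only sketch of the kind of pre-flight look at NUMA and Hyper-Threading state we recommend; the sysfs paths are standard on recent kernels, and the SMT flag is simply reported as unknown where the kernel does not expose it.

```python
# Sketch: quick Linux pre-flight check of NUMA topology and SMT/Hyper-Threading
# state before database tuning. Paths are standard sysfs locations; the SMT
# file only exists on newer kernels, so its absence just means "unknown".
import glob
import os

def numa_nodes():
    return sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))

def smt_active():
    path = "/sys/devices/system/cpu/smt/active"
    if not os.path.exists(path):
        return None  # kernel does not report it; check BIOS or lscpu instead
    with open(path) as f:
        return f.read().strip() == "1"

if __name__ == "__main__":
    nodes = numa_nodes()
    print(f"NUMA nodes: {len(nodes)} ({', '.join(os.path.basename(n) for n in nodes)})")
    print(f"Online CPUs: {os.cpu_count()}")
    smt = smt_active()
    print("Hyper-Threading/SMT:", {True: "active", False: "inactive", None: "unknown"}[smt])
```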

 

If you don't get a chance to stop by the presentation, I'll be at the Intel booth with Eric at 4.30pm to go in-depth on some of the topics. Even if you can't make it to Openworld 2011, I plan to follow up on Openworld topics and some of the areas we will speak about on this blog.

Hello!  My name is Christine McMonigal.  I manage cloud marketing programs for the Storage Group at Intel, which is part of Intel’s Datacenter and Connected Systems Group.  This is my first blog.  I’ll be blogging on storage and the cloud, beginning today with Converged Storage Servers and why they are the foundation of storage for the cloud.

 

Traditionally, compute, storage and networking resources would reside in separate silos in an enterprise datacenter, where they were assigned to a particular business unit or division, or application, and provisioned based upon long-term needs.  However, that makes it difficult to reassign underused resources, which contributes to increasing space and power requirements, and rising costs.  Converged Storage Servers can help with these issues, but it requires IT managers to think about resources in a different way.

 

But before I get too far in, a definition:  Converged Storage Servers are solutions built from standard, high-volume server and storage HW components, as shown in the diagram below.  These converged storage servers would typically be racked and networked together as a pool of compute and storage resources, as part of what is often referred to as a unified or converged datacenter infrastructure.

 

[Diagram: Converged Storage Server hardware components]

 

Next, how would an IT manager or service provider use a converged storage server?  Because it is based on standard, widely available components, converged storage servers provide a flexible foundation for more efficient processing on a wide range of workloads.  This common component base can be tailored to meet differing requirements for everything from storing data for applications, to analyzing large data sets, to storing large objects like photos and videos.  Converged storage servers make it easier to locate storage in a shared virtual pool, where anyone in the organization can access it securely, and where additional resources can be automatically provisioned and easily scaled for better utilization.  These capabilities make converged storage servers a good fit for private, hybrid, or public cloud deployments.

 

Because of the intelligence built into these systems, more sophisticated technologies can be applied to optimize for the data being stored.  For example, when storing data for applications, the stored data could benefit from de-duplication to eliminate duplicate copies of an e-mail and attachment sent to a group of employees.  With the increased processing power of the latest Intel® Xeon® processors, it’s now possible to de-duplicate data on the fly, before it is stored, which reduces the amount of storage space required.
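
To illustrate the idea (and not any particular product's implementation), here is a minimal sketch of inline de-duplication using content hashing; the chunk size and in-memory index are simplifications for readability.

```python
# Sketch of inline (pre-write) de-duplication using content hashing.
# Fixed-size chunking and an in-memory index keep the idea visible;
# real storage systems use variable chunking and persistent indexes.
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KiB chunks (illustrative)

class DedupStore:
    def __init__(self):
        self.chunks = {}   # fingerprint -> chunk bytes (stored once)
        self.objects = {}  # object name -> list of fingerprints

    def put(self, name, data):
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            # Only new content consumes space; duplicates just add a reference.
            self.chunks.setdefault(fp, chunk)
            refs.append(fp)
        self.objects[name] = refs

    def get(self, name):
        return b"".join(self.chunks[fp] for fp in self.objects[name])

store = DedupStore()
attachment = b"Q3 report " * 100_000            # ~1 MB payload
for user in ("alice", "bob", "carol"):           # same mail sent to a group
    store.put(f"mailbox/{user}/q3-report", attachment)

logical = sum(len(store.get(n)) for n in store.objects)
physical = sum(len(c) for c in store.chunks.values())
print(f"logical {logical} bytes, physical {physical} bytes")
```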

 

Storage tiering is another technology to optimize data efficiency.  The most frequently accessed data can be stored on high-speed, high-reliability media such as flash or solid state drives, while seldom-accessed data is stored on slower hard drives.  In between, faster hard drives can fill the gap for occasionally accessed data.  Again, with the latest Xeon processors, tiering can be automated using algorithms defined by company policy to optimize cost and efficiency.
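
Here is a small sketch of what such a policy-driven tiering pass might look like; the tier names and access thresholds are illustrative policy knobs, not settings from any specific storage product.

```python
# Sketch of a policy-driven tiering pass: place each object on a tier
# based on how often it was accessed in the last period. Tier names and
# thresholds are illustrative policy knobs, not product settings.

TIERS = [                 # (tier, minimum accesses per day to qualify)
    ("ssd", 100),         # hot: flash / solid state
    ("fast_hdd", 10),     # warm: faster spinning drives
    ("capacity_hdd", 0),  # cold: high-capacity, low-cost drives
]

def choose_tier(accesses_per_day):
    for tier, threshold in TIERS:
        if accesses_per_day >= threshold:
            return tier
    return TIERS[-1][0]

def tiering_pass(catalog):
    """catalog: {object_id: accesses_per_day} -> list of planned placements."""
    return [(obj, choose_tier(rate)) for obj, rate in catalog.items()]

print(tiering_pass({"invoice_db/tbl1": 4200, "hr_scans/2009": 3, "mail_archive": 18}))
```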

 

Converged storage servers also help you achieve greater business value:

 

  • Performance & Efficiency –

        Through more efficient utilization of resources and standardized components to manage

  • Availability –

        By locating the storage in a shared virtual, not necessarily physical, pool where anyone in the organization can access it securely from different devices

  • Capacity

        By scaling-out to multiple nodes, co-located or dispersed geographically

  • Management

        As a single logical drive, regardless of the number of drives or nodes or their location

 

Overall, converged storage servers allow a datacenter to standardize their hardware with the flexibility to provision it as needed and when needed.  More sophisticated storage technologies, such as those described above, can be more easily driven across a datacenter on a standard architecture.  Further, total cost of ownership (TCO) can be reduced with simplified management as a single device, and with spare parts that can take advantage of a standard, interchangeable set of components.  It is time to evolve your storage architecture, retire legacy storage systems, and take advantage of the efficiency of converged storage servers and storage technologies to manage the growth of your data.



“The universe we live in is enormous,” explains Professor Stephen Hawking. “It is complicated and non-linear. We are trying to develop a seamless history, from the first reactions of the first seconds after the big bang through to the present day, 13.7 billion years later.”


Hawking is working with the COSMOS Consortium at Cambridge University, which is trying to understand the origins of the universe. Helping it is the COSMOS supercomputer, based on the Intel® Xeon® processor 7500 series.


“High-performance computing is so important in cosmology because of one word: data,” says Hawking. “In the past two decades, cosmology has emerged as a data-driven field, with many successful space- and ground-based experiments telling us more about the universe. The deluge of data has allowed cosmologists to construct increasingly sophisticated mathematical theories with sufficient precision to capture all that is being observed. We need to create realistic mini big bangs on the COSMOS supercomputer to fast-forward to today and then test if the predicted universe matches the latest observations. Without supercomputers like COSMOS, we would not be able to reach out and make contact between theory and the real universe to test whether our ideas are really right.”


For the whole story, watch our new COSMOS video. As always, you can find this one, and many others, in the Intel.com Reference Room.

I was very excited to host a “megabriefing” for about 100 international press and analysts at IDF 2011 to share some great steps forward we're making in HPC.

 

We were joined by HPC pioneer Prof. David Patterson of Berkeley and Rob Neely from LLNL. Professor Patterson (one of the minds behind the famous “seven dwarfs” of HPC) shared a vision for the use of computation to achieve a breakthrough in the understanding of the genetics of cancer. Dr. Neely shared his vision of far greater public access to computation at the LLNL open campus.

 

I had the pleasure of providing background for people on how Intel is making a difference in HPC with our Xeon processor E5 family and our future MIC co-processors.

 

We got some great questions from the audience around the programming model and architectures that will help make exascale a reality, and a few questions around how Intel collaborates with our partners.  All in all a very rewarding conversation… catch the video below.


 

Today the Texas  Advanced Computing Center (TACC), together with The University of Texas at  Austin, announced that they will deploy a 10 Petaflop HPC  Linux cluster called “Stampede.” When it is operational at the beginning of  2013, “Stampede” is expected to be among the most powerful computers in the  world. Normally, we’d celebrate this important design, but for all of the Intel  employees working on Intel® MIC Architecture for the past several years, this  announcement has a very special meaning.

 

For Intel this is an exceptional announcement, as those 10 petaflops will be delivered entirely by Intel technology. “Stampede” will be based on the future 8-core Intel® Xeon® processor E5 family (formerly known as Sandy Bridge-EP), which will deliver 2 petaflops of performance. But this is also the first announcement of a system that will include thousands of Intel® Many Integrated Core (Intel® MIC) architecture co-processors codenamed “Knights Corner,” which will provide an additional 8 petaflops. In all, “Stampede’s” 10 petaflops of performance will be achieved thanks to hundreds of thousands of Intel Xeon and Intel MIC cores – all based on Intel architecture.

 

The forthcoming Intel® Xeon® processor E5 family, aimed at numerous market segments ranging from enterprise data centers to cloud computing applications to workstations and small businesses, is shipping to customers for revenue now. Intel is seeing approximately 20x greater demand for initial production units and 2x more design wins for these processors compared to the launch of the Intel® Xeon® processor 5500 series in 2009. We expect many high performance computing (HPC) and cloud customers to deploy systems based on these new Intel Xeon processors this year, with broad system availability in early 2012.

 

“Knights Corner,” a  co-processor aimed at highly parallel workloads, will be the first commercially  available product featuring the Intel MIC architecture. “Knights Corner” is an  innovative design that includes more than 50 cores and will be built using Intel’s  leading edge 22nm 3D Tri-Gate transistor technology when in production.

 

It’s very important to understand why TACC chose to include the Intel MIC architecture in Stampede. Since Intel MIC was announced more than a year ago at the International Supercomputing Conference (ISC) in Hamburg, more than 100 Intel MIC partners have been evaluating its potential. Earlier this year, TACC joined other universities and research organizations around the world to build applications that take full advantage of the Intel MIC architecture. At this year’s ISC show in Hamburg, many of those MIC partners, including Forschungszentrum Juelich, Leibniz Supercomputing Centre (LRZ), CERN, and Korea Institute of Science and Technology Information (KISTI), shared their results, including how they were able to take advantage of the Intel MIC co-processor’s parallel processing capabilities while using the well-known IA instruction set. Using these widely available programming tools can help save time and money, as it negates the need to learn any proprietary languages.

 

We believe the decision to build “Stampede”  based on Intel Xeon processors E5 family and Intel MIC architecture based “Knights  Corner” is a recognition of the advantages  that standardized, high-level CPU programming  models bring to developers of highly-parallel computing workloads.  Being  able to run the same code on both Intel Xeon processors and “Knights Corner” co-processors  should allow developers to reuse their existing code and programming expertise  which leads to greater productivity. Also, since Knights Corner is based on  fully programmable Intel processors, it can run complex codes that are very  difficult to program on more restrictive accelerator technologies.

 

TACC  also announced that the current system is  only the beginning as they plan to expand ”Stampede” in the future and increase  the total system performance by more than 50 percent to 15 petaflops with the  help of future generations of Intel products.

 

What does all of this  mean for the future of HPC? Last week at the Intel Developer Forum, Kirk  Skaugen, VP and GM of Intel’s Data Center and Connected Systems Group, talked  about the huge growth that is expected in HPC in coming years. (For those who  didn’t have a chance to attend IDF you can see video of Kirk’s presentation  here -> part 1 & part 2).  Our estimations show that by 2015, the world’s top 100 supercomputers  will be powered by 2 million CPUs and by 2019 this number will reach 8 million  CPUs. To give you a perspective, in 2010, Intel shipped about 8 million server  processors in total.

 

This growth is fueled  by the constant need for performance to solve some of the world’s biggest  problems. Here’s one example: In 1997 the cost of sequencing a human genome was  about a million dollars – mostly due to the scarcity of sufficient computing  power. In 2009, the cost had dropped to $10,000. Due to continuing increases in  compute performance, we believe that in a year or two the price can drop to $1,000.  This relatively low cost might enable a patient to have his individual genome  analyzed and an assessment made of his likelihood of contracting diseases. From  there, preventative measures and treatments could be customized precisely for  this one person. One of the keys to this “personalized medicine” is providing sufficient  processing power to make the necessary calculations in as little time – and as  cheaply – as possible.

 

Intel and HPC are a great match. Today, Intel processors power nearly 80 percent of the Top500 list of supercomputers – which is great for our business. Overall, technical computing, including HPC, makes up about 30% of our data center business today. In response to the needs of technical computing developers and customers, Intel is committed to providing new technologies that will deliver even more performance to scientists to help fuel the next generation of scientific discovery. Again, for those who were not at IDF, I recommend viewing a great video of a speech given by John Hengeveld, Director of HPC strategy at Intel. John discusses the future of supercomputing and why it’s critical to enable broader access to huge amounts of processing power.

 

This announcement of the first supercomputer combining the benefits of microprocessors and co-processors – without sacrificing programming compatibility – is another step towards making access to HPC resources easier, more cost-effective, and more time-effective. It allows scientists to focus on their own field of science rather than on computer science.

Last week was a big week for technology news: Ultrabooks, Ivy Bridge, Memcached records broken, and the big Intel & Android announcement were all chart-toppers from IDF.  As we’ve pointed out before, the data center was not left out of the action!  Specifically, the Open Data Center Alliance made a splash at IDF with major announcements.

 

To start there is the collaboration between the ODCA and the Facebook-led Open Compute Project.  Second is the announcement of the "Conquer the Cloud" competition.  Have a cloud implementation you would like to share?  You could win $10,000… Lastly the ODCA shared the Solution Providers members' response to the ODCA usage models that launched not too long ago.

 

In this 25-minute video ODCA Solution Providers representatives from Citrix, Dell, EMC, Red Hat, and VMware present how they are working towards addressing the key requirements of ODCA usage models for automation, common management & policy, secure federation, and transparency.

 

Every month, I have the privilege of participating in Intel’s Enterprise Board of Advisors (EBOA) program. EBOA gives Intel a boots-on-the-ground perspective from the outside world, with feedback that helps the larger Intel make future product decisions. Think of it as an opportunity for our largest corporate clients to let us know that “this is what’s working,” or “this is what you need to tweak,” or even “what were you thinking?” (Obviously we much prefer to hear the former, but reality doesn’t always match the script).

 

In our last EBOA Data Center Working Group session, a member from a large European-based company gave an excellent presentation on their experiences implementing a cloud framework, which the member described as one part technology to nine parts process. I really liked that analogy. In another discussion I had today with a CIO of a Commonwealth government agency regarding the cloud recipe, I heard exactly the same thing, stated slightly differently.

 

Hmm… Maybe we’re on to something here.

 

Admittedly, and from a somewhat selfish perspective, the recipe couldn’t have been better timed to introduce my current industry perspective on Data Center Knowledge. This week’s post discusses my third fundamental truth of cloud computing strategy: your cloud ecosystem is only as robust and adaptable as the sum of its parts.

 

The article introduces the concept that your cloud framework will likely only be as mature (and by default, robust) as the organization it comes from. I expand on the topic by suggesting that the source of this maturity/robustness is your enterprise architecture (EA). In yet another example of perfect timing (maybe I should have gone to San Francisco this week), a presentation from the recent Intel Developers Forum (IDF) titled  Intel IT’s Journey to Cloud Computing also links maturity to successful cloud readiness.

 

I think you’ll find the article interesting. As always, I welcome your feedback. In particular, I’m interested in whether you’ve seen evidence in your own company or others that a cloud framework is constrained by the maturity of the organization it comes from. I believe there’s a related discussion to be had here based on other feedback I’m receiving: it’s also important to consider the maturity of the cloud service provider. Unfortunately, that discussion will need to wait for another time!

 

You can read the current industry perspective, and join in the discussion on Data Center Knowledge. For more information or answers to your questions, please feel free to contact me on LinkedIn.

It’s easy to understand why most of the media attention and news coming from IDF surrounded Android smart phones, Win8 tablets, Ultrabooks, and even the potential for a new solar-powered “postage stamp processor.”  But in my humble opinion, that was just the proverbial tip of the iceberg.  Below the surface was the massive innovation that Intel is bringing to the data center and cloud.

 

Kirk Skaugen said in his data center update at IDF this week that Xeon E5 will ship later this year, and we should start seeing it in servers at the beginning of 2012. The E5 is based on the Sandy Bridge microarchitecture, will come in 4-, 6-, or 8-core versions, runs up to 16 threads, and is aimed at mainstream cloud and HPC servers with 2 or 4 sockets.  It is the first chip to integrate the PCI Express bus into the microprocessor, which improves data throughput while saving power.  And Intel noted more than 400 design wins for the E5.


Beyond the impressive new performance the mainstream Xeon E5 brings to the data center, Intel announced at IDF that the new 710 Series SSDs, ranging from 100GB to 300GB, are poised to replace hard drives in enterprise servers.  And then there are the new 10GbE and Xeon-based storage solutions coming to market. This “converged IA” in the data center will tear down silos and make it much simpler for developers to program solutions across computing resources, storage, and networking in the data center.

 

And last but certainly not least, I learned a lot and had fun networking with some of the industry’s most influential media and bloggers.  It was great to connect personally with Matt Weinberger, Greg Pfister, Reuven Cohen, Alex Williams, Chris Evans, and John Furrier, among others.

 

You can find me on Twitter @ajayc47.

Intel Developer's Forum was a great place to see what we've been working on, and where the future of computing is heading.   Here is what went down at the Intel® Ethernet classes offered at this year's show.  There will be more details later here at the Server Room as well as at the Wired Blog.

 

  • NETS001 A Case Study for Deploying a Unified 10 Gigabit Ethernet Network: Ruiping Sun of Yahoo reported out on a POC (Proof of Concept) comparing the performance of FCoE vs. 8G Fibre Channel in a Yahoo environment.

 

  • NETS002 Best Practices for Deploying VMware* vSphere 5.0 Using 10Gb Ethernet: Brian Johnson presented best practices for deploying VMware vSphere 5.0 using 10Gb Ethernet.  We'll be posting the Best Known Methods (BKMs) that Brian talked about first, since there was such a groundswell of interest around this class.

 

  • NETS003 Using Industry Standards to Get the Most Out of 10 Gigabit Ethernet in Linux: Intel and IBM presented on using industry standard I/O virtualization technology to get the most out of 10GbE in Linux virtualization and cloud environments.

 

  • NETS004 Network Virtualization: Uri Cummings, Sr. Switch Architect, presented a session on network virtualization covering the latest trends in data center networks driven by 10G unified networking and virtualization in cloud computing infrastructures. Uri’s class included an overview of the latest 10G/40G switch from Fulcrum.

 

  • We had live demonstrations of Open FCoE in VMware, Windows and Linux environments in the Data Center Zone.

 

  • We also had a live demonstration of an Open FCoE target from QSAN in the Data Center Zone.

 

We had a great time at the show, and thanks to our partners that helped out.

Last week, the Intel Developer Forum (IDF) was in full force with a show packed with education, a bustling technology showcase, keynotes presenting various Intel technologies, and great experiences across the board.  While there were many impressive announcements regarding Ultrabooks and the next generation of processors codenamed Ivy Bridge, the data center got its share of attention as well.

 

In this two-part video series from the megabriefing presented by Kirk Skaugen, general manager of Intel’s Datacenter & Connected Systems Group, data center technologies from cloud computing to Itanium are highlighted.

 

 

Part 1

 

 

Part 2

 

 

Want more great content from IDF and all things Data Center & Intel?  Follow us on Facebook and Twitter.

We had some big news this week from the Open Data Center Alliance (ODCA) and the Open Compute Project (OCP) – two industry heavyweights that share a common goal to accelerate highly efficient DC environments and open systems management. The two organizations will work together to define open specifications for server, storage, and data center infrastructure that in turn will drive more efficient web-scale and enterprise infrastructure.

 

This collaboration is truly a winning combination for cloud providers, cloud subscribers, and OEMs. The Open Data Center Alliance recently published a series of usage models that outline critical IT requirements for cloud computing. The Open Compute Project, meanwhile, recently published a series of specifications for efficient motherboard, server, and data center design for mega scale environments. The joint work by the two organizations will enable input of the Alliance’s requirements into Open Compute’s specifications and will also enable ODCA innovation on top of OCP specifications to extend to broader enterprise and service provider IT.

 

At Intel, we support the Open Compute Project, an organization formed by Facebook. We see it as a way to encourage industry innovation by openly sharing specifications and best practices for high-efficiency board, system, storage, rack, and data center design elements. These specifications represent ingredients and not complete solutions and we will work closely with our OEM partners to deliver those solutions. They will then deliver the full support, validation, supply chain and enterprise capabilities to make new generations of technology ready for use in a wide range of data centers.

 

To sum it up, I like this take from Services Angle:  “This is a big win for ODCA members. The people at Facebook now become peers with ODCA members. That means a level of respect and a comfortable environment to interact and learn about how to manage significant data loads.”

 

The collaboration shines a spotlight on the momentum of the fast-growing Open Data Center Alliance, a consortium of top global businesses brought together by Intel. The organization now encompasses more than 300 industry leading enterprise and service provider IT end users who’ve come together to chart the requirements for next generation DC and cloud infrastructure.  This is yet another substantial industry collaboration that will enable real change and positive progress for standards in the cloud.

 

What’s not to love here? This collaboration is good news all the way around!  I am getting on Facebook right now to signal my “Like” of this announcement!

 

To read the ODCA’s news release on its strategic collaboration with the Open Compute Project, visit www.opendatacenteralliance.org.

 

Raejeanne Skillern is director of cloud computing marketing for Intel. Follow her on Twitter @RaejeanneS

Download Now


To help maintain a high quality of service, Paul Rooney Partnership (PRP) wanted to introduce disaster recovery (DR) to its IT infrastructure and virtualize its servers and storage.  PRP deployed a virtualized solution based on Dell PowerEdge* servers with Intel® Xeon® processors 5500 series and Dell EqualLogic* storage, and chose a hosted DR solution. The company also selected Dell ProSupport* to protect its IT investment. Now if there’s an emergency, the staff can regain IT services in less than an hour. The company was also able to consolidate its servers and cut storage management time in half.


“We’ve consolidated our IT by around 60 percent, while gaining enough capacity to support business growth over the next few years,” explained Carl Pywell, IT manager for PRP.


For the whole story, download our new Paul Rooney business success story.

 

 

*Other names and brands may be claimed as the property of others.

Download Now

 

The Georgia Institute of Technology (Georgia Tech) College of Engineering (COE) Virtual Lab (Vlab) project allows students to remotely access dozens of engineering applications from their personal computing devices. In the fall of 2010, COE upgraded its Vlab servers to the Intel® Xeon® processor X5650, gaining 50 percent more cores and twice as much memory per server as it had on its Intel Xeon processor E5520-based servers. The results: increased performance and density, a better experience for users, and cost savings for IT.


“We have been able to exceed our initial target of 36 virtual desktops per server,” explained Didier Contis, director of technology services for the Georgia Tech College of Engineering. “That target may seem very conservative compared to other virtual desktop interface (VDI) projects, but our virtual desktops are sized at a minimum with two virtual CPUs and 3 GB of memory per virtual machine to support a variety of engineering applications.”


To learn all about it, download our new Georgia Institute of Technology business success story. As always, you can find this one, and many others, in the Intel.com Reference Room and IT Center.

Brandon Draeger, Product  Planning Manager, Dell

 

Researchers are proving  that what is good for your data center and budget is also good for the  environment. Here’s a quick strategy guide on how to go green and positively  impact your bottom line.


Right-sizing servers, measuring/monitoring, and improving your  Power Usage Effectiveness (PUE) will  result in an improved bottom line and a greener environment.

 

  1. Right-sizing your hardware and monitoring  power consumption can deliver the same quality performance to your customers.

  2. You save on  space, power, cooling and maintenance—and you don’t get stuck paying for more  than what you need.

  3. The environment benefits from operational  efficiencies including reduced energy consumption and a smaller carbon  footprint; and supply chain efficiencies including smaller quantities of more  environmentally-friendly packaging.

 

Implement one or more of  the following 3 strategies:

 

  1. Right-size  servers to boost your hardware performance and efficiencies

    It’s no secret that the world’s top cloud providers and search engines use servers engineered for performance and efficiency at scale. These lean machines are stripped of superfluous components and utilize shared infrastructure to reduce space, power, and cooling demands. Dell’s PowerEdge C servers are specifically built for scale-out environments. With Dell Modular Data Centers, Microsoft Bing Maps reports 8x cost savings and 5x the density of traditional computing models, and achieved a PUE of 1.03!

     

  2. Measure  and monitor your power consumption at a more granular level

    To control your power and cooling costs, you need to measure at BOTH the server and data center levels. Why does this matter?

     

    Measure and monitor performance and efficiency by using your  server’s built-in power management capabilities. Intel®  Intelligent Power Node Manager provides power monitoring and policy-based  power management at the individual server level. By adding a power management  console like JouleX  Energy Manager (JEM), you can lower server power consumption by up to 25%  without impacting performance.

     

  3. Improve  your Power Usage Effectiveness (PUE)

     

    Implement efficient power and cooling best practices throughout your data center facility. For higher power density facilities, electricity costs can account for over 10% of the total cost of ownership. The Green Grid and EPA EnergyStar offer ways to benchmark the current performance of data centers, determine levels of maturity, and identify next steps to achieve greater energy efficiency.

     

    Some of the more  common techniques that have become popular include:

     

    • Fresh air technology allows servers, storage units, and network switches to run at more extreme temperatures (up to 113°F in excursion-based operation) to help save on cooling and, in some climates, eliminate chillers altogether.

    • Economizer cooling involves using only outside air to keep the data center cool. A test done by Intel proved this method effective in climates as hot as 92°F.

    • Data center containment, such as creating hot and cold aisles to prevent cool air from mixing with hot air, resulted in a 7.7% improvement in overall energy efficiency and 18.8 million kilowatt-hours (kWh) of annualized savings for Verizon.

 

Take the  next steps to running a greener and more cost-effective data center

 

Visit Dell’s green page to learn more about Dell’s approach to  green technology. Learn about Intel  Intelligent Power Node Manager on the Dell PowerEdge C Series Energy Efficient  Server. Dell’s Data Center Capacity Planner provides power, cooling and  airflow estimates for server centers. Request the JouleX Enterprise Energy  Management Buyers Guide to learn more about smarter power consumption management. Benchmark  your data center with tools and research from Green Grid and EPA EnergyStar.

 

Note: What is PUE? PUE = total  facility power divided by IT equipment power. PUE is the ratio of the total  amount of power used by a data center to the power delivered to the equipment  used to manage, process, store and route data. An ideal PUE is 1.0.
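
As a quick worked example with made-up numbers: a facility drawing 1,800 kW in total whose IT equipment draws 1,200 kW has a PUE of 1.5 – half a watt of overhead (cooling, power distribution, lighting) for every watt of IT load.

```python
# PUE = total facility power / IT equipment power (ideal is 1.0).
# Illustrative numbers only: 1,800 kW facility, 1,200 kW of IT load.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(1800, 1200))  # 1.5
```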

1. OSPC for OpenStack

 

We demonstrated OpenStack with Intel® TXT and Node Manager integration, along with a user interface and portal developed by Intel IT. We ultimately plan to offer the user interface and portal to the OpenStack Dashboard project.

 

The momentum behind OpenStack is growing with more and more contributors and end customer interest. The commercial cloud operating environments continue to increase in capability as VMware demonstrated 2 weeks ago at VMworld (very impressive). The open source communities continue to grow their capabilities as well, not just in Xen and KVM, but also in cloud operating environments such as OpenStack. We look forward to working with the community to significantly extend and mature the OpenStack capabilities.

 

In the context of the larger Intel Developer Forum, Matt Weinberger from Talking Cloud captured it quite well in his blog on IDF and Cloud Computing on Thursday by noting that much of the focus at IDF was on consumer innovations (some of which are really cool) with little attention being paid to the cloud. In my meetings with customers and partners, it is clear, however, that our efforts in advancing the state of the art in cloud are not going unnoticed, regardless of the broader marketing message.

 

This is my second OpenStack-related activity in a bit over a week. Last week I was in China helping kick off the China OpenStack User Group, where over 350 people attended the conference. It is really exciting to see so much energy being applied by such a diverse audience.

 

2. Memcached performance optimizations

 

In Justin’s keynote (where I had the pleasure of a short walk-on part <grin>), we demonstrated an optimized version of memcached delivering ~800k reads/sec compared to the previously published rate of ~560k reads/sec. Latency also decreased from ~1ms to ~450us. While the transaction rate increased significantly, the power per transaction also improved.

 

One of the tricks in this optimization was to stay “real world.” It is easy to get really big numbers if you create a lot of independent instances of memcached on a single server. For real-world applications, this is not an optimal solution, as it means the application would need to be modified to direct requests to many memcached services rather than just one.  Our optimization maintains a clear focus on performance, but for real-world applications.
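
For readers curious how numbers like these are gathered, here is a rough single-client sketch using the pymemcache package against a memcached server assumed to be running on localhost:11211; real benchmarks drive many clients and connections in parallel, so a single loop like this will land well below the figures quoted above.

```python
# Rough single-client sketch of measuring memcached read throughput and
# latency. Assumes `pip install pymemcache` and a memcached server on
# localhost:11211; a proper benchmark uses many parallel clients, so the
# absolute numbers here will be far below the multi-client results above.
import time
from pymemcache.client.base import Client

client = Client(("localhost", 11211))
client.set("key:0", b"x" * 100)

N = 100_000
latencies = []
start = time.perf_counter()
for _ in range(N):
    t0 = time.perf_counter()
    client.get("key:0")
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

latencies.sort()
print(f"{N / elapsed:,.0f} reads/sec")
print(f"median latency {latencies[N // 2] * 1e6:.0f} us, "
      f"p99 {latencies[int(N * 0.99)] * 1e6:.0f} us")
```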

 

From where I sit, this is further evidence that the cloud will drive innovation not just in new areas such as Hadoop and memcached, but also in optimizations that will improve our everyday experience using the cloud.

 

3. Solution Provider Innovation

 

I had a number of meetings with Solution Providers this week. There is clearly a transition happening from ‘hw focused’ to a broader base of consulting including things like connecting their customers to service providers. Any transition is challenging especially when it touches the basic business model. In this case, we are also seeing examples of innovation where these solution providers are being proactive in helping their customers effectively and materially use the cloud.

 

For example, I was pleased and somewhat surprised to hear that some of the solution providers are proactively refactoring some of their applications so that they can be more cleanly deployed in a cloud (private or public). They are eager to take advantage of the benefits this compute/storage model offers.

 

However, it is also clear that the impact of the move to more of a ‘devops’ model is still very early and not well understood.

 

4. Keynote == beret

 

We all learned from Justin that if you want to do a keynote at IDF, you need a beret. I reckon my cowboy hat will just have to do.

 

5. Solar-powered CPUs

 

The era of solar-powered computing may be upon us. With the use of Near Threshold Voltage designs, we can get the power level so low that you only need a solar cell. OK, maybe it was only a technology demonstration, but it works for me!

As we talk to customers, they typically look for  ways to simplify the Cloud On-boarding process as they migrate application  workloads between cloud environments.

 

In today’s virtualized data centers, migrating an entire application and its associated virtual machine(s) from site to site can be complex. Migrations of mission-critical applications need to happen quickly and seamlessly, with no impact to the underlying storage that supports your applications.

 

Storage federation enables IT to quickly and  efficiently support the business through pools of resources that can be  dynamically allocated. This flexibility elevates the value IT offers within the  business, as application and data movement is possible for better support of  services. To address the need for mobility and flexibility to support cloud  on-boarding and cloud bursting, EMC has developed federation-based storage  solutions to provide cooperating pools of storage resources.

 

EMC VPLEX Metro delivers a virtual storage solution that addresses the need for cloud on-boarding through federation and creates cooperating pools of storage resources that can be dynamically allocated.

 

As customers on-board and migrate applications between cloud environments, they must ensure that they migrate virtual machines to hosts that are secure. Intel® Trusted Execution Technology (Intel® TXT) allows you to validate the launch status of the host, enabling the on-boarding of virtual machines onto trusted hosts while preventing virtual machines from being migrated to untrusted hosts.

 

Intel TXT is a set of enhanced hardware components  designed to protect sensitive information from software-based attacks. Intel  TXT features include capabilities in the microprocessor, chipset, I/O  subsystems, and other platform components. When coupled with an enabled  operating system, hypervisor, and enabled applications, these capabilities  provide confidentiality and integrity of data in the face of increasingly  hostile environments.
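
As a sketch of how that policy can look from the orchestration side (illustrative only: the attestation lookup below is a placeholder for whatever attestation service verifies the Intel TXT measured launch of each host in a real deployment):

```python
# Sketch of a trusted-launch placement policy for VM on-boarding.
# `host_is_trusted` is a placeholder: in a real deployment it would query
# an attestation service that verifies the Intel TXT measured-launch state
# of the hypervisor host; here it just reads a pre-populated map.

ATTESTATION_RESULTS = {        # hypothetical attestation cache
    "esx-host-01": "trusted",
    "esx-host-02": "untrusted",
    "esx-host-03": "trusted",
}

def host_is_trusted(host):
    return ATTESTATION_RESULTS.get(host) == "trusted"

def place_vm(vm_name, candidate_hosts):
    trusted = [h for h in candidate_hosts if host_is_trusted(h)]
    if not trusted:
        raise RuntimeError(f"No trusted host available for {vm_name}; refusing migration")
    target = trusted[0]        # real schedulers also weigh capacity, affinity, etc.
    print(f"Migrating {vm_name} -> {target} (TXT launch verified)")
    return target

place_vm("payroll-db", ["esx-host-02", "esx-host-03"])
```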

 

Stop by the EMC booth at IDF this week to learn more about how technology from Intel and EMC can enable you  to securely on-board your mission critical applications.

 

Josh Mello, Solutions Technical Marketing, EMC

If you come to the Intel Developer Forum this week, be sure to attend the session I will host on the Oracle Exadata platform and the Xeon processor's role in it.  (Session DCPS005, Wednesday 9/15 at 10:00a.m.)

 

The second generation of the Oracle Exadata platform is now almost a year old.  You might not realize it from looking at the Oracle product descriptions, but from the beginning, Exadata has been constructed from server components built around the Intel Xeon processor.

 

The first generation of Exadata was built from a collection of two-socket Xeon® processor 5500-series server platforms. The central, unique innovation of Exadata (vs. conventional Oracle database deployments) is that it divides database processing into two elements – the database machine element handles the most complex aspects of data management, such as join processing and lock management, while the storage cell element divides the often time-consuming table scanning process across multiple 'storage cell' nodes running in parallel.
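
As a toy illustration of that division of labor (and emphatically not Exadata's implementation), the sketch below has each "storage cell" scan and filter its slice of a table in parallel and return only matching rows, leaving the much smaller result set for the "database" side to aggregate:

```python
# Toy illustration of the split described above (not Exadata's code):
# each "storage cell" scans and filters its own slice of a table in
# parallel and returns only matching rows; the "database machine" side
# then does the heavier relational work on the reduced result set.
from concurrent.futures import ProcessPoolExecutor

def storage_cell_scan(rows, min_amount):
    """Runs on a 'cell': full scan of the local slice with the filter pushed down."""
    return [r for r in rows if r["amount"] >= min_amount]

def parallel_query(table, num_cells, min_amount):
    slice_size = (len(table) + num_cells - 1) // num_cells
    slices = [table[i:i + slice_size] for i in range(0, len(table), slice_size)]
    with ProcessPoolExecutor(max_workers=num_cells) as pool:
        per_cell = pool.map(storage_cell_scan, slices, [min_amount] * len(slices))
    # "Database machine" side: aggregate the already-filtered rows.
    matches = [row for cell_rows in per_cell for row in cell_rows]
    return len(matches), sum(r["amount"] for r in matches)

if __name__ == "__main__":
    table = [{"id": i, "amount": i % 500} for i in range(200_000)]
    count, total = parallel_query(table, num_cells=4, min_amount=450)
    print(f"{count} matching rows, total amount {total}")
```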

 

Parallel processing is not new to relational databases.  Originally, Teradata came up with the idea in the 80's when they introduced their data warehousing product to accelerate largely mainframe-based database queries (Teradata has used Intel processors from the very beginning).

 

However, Exadata is the first example of parallel processing applied to Oracle databases, and it is done in a manner that isn't limited to just business intelligence or data warehousing functions.

 

This is because the database machine component of Exadata is running standard Oracle 11G R2 with Real Application Clusters (RAC), which are available if a database image needs to span multiple processing nodes or high availability is required.  The Smart Scan feature, provided by the parallelized storage cell component of Exadata, comes as an added bonus that enables parallel query acceleration (and eliminates the need for expensive enterprise storage arrays!).

 

So, what is the role of the Xeon processor in Exadata?

 

First of all, there's the usual - performance.  Exadata provides a new feature: hybrid columnar compression. That's very useful for query acceleration, but it chews a LOT of processing.  Xeon provides the raw horsepower needed to make HCC practically usable.

 

Second, as of Exadata V2, Xeon provides scalability in the form of the 8-socket Xeon 7560 processor-based database machine option that was introduced last year.  This year, look for that offering to be upgraded to the next-generation Xeon E7 platform, which offers still more headroom for even larger mission-critical workloads.

 

In these days of pervasive concern about security, the Xeon E5 processor delivers the encrypt/decrypt performance in the Exadata storage cell element that is required to make the Oracle Transparent Encryption feature practically usable.  Without the AES acceleration capability of the Xeon E5 processor, whole-database encryption is just too costly to be practical.  The Xeon E5 processor makes it deployable.
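
If you want a feel for raw AES cost on your own host, here is a rough sketch using the Python cryptography package, whose OpenSSL backend picks up AES-NI when the CPU exposes it; it measures bulk cipher throughput only and is not Oracle Transparent Data Encryption.

```python
# Rough AES-256 bulk-encryption throughput check. Assumes the `cryptography`
# package (pip install cryptography); its OpenSSL backend uses AES-NI when
# the CPU exposes it. This is raw cipher throughput, not Oracle Transparent
# Data Encryption -- it just illustrates why hardware AES matters for
# whole-database encryption.
import os
import time

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                  # AES-256 key
nonce = os.urandom(16)
block = os.urandom(8 * 1024 * 1024)   # 8 MiB buffer
iterations = 64                       # ~512 MiB encrypted in total

encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce), backend=default_backend()).encryptor()
start = time.perf_counter()
for _ in range(iterations):
    encryptor.update(block)
elapsed = time.perf_counter() - start

megabytes = iterations * len(block) / (1024 * 1024)
print(f"AES-256-CTR: {megabytes / elapsed:,.0f} MB/s on this host")
```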

 

Those are some of the highlights from the session.  I'll be joined by Sumanta Chatterjee, Oracle VP and a key leader of the Exadata development team, and Hubert Nueckel, lead optimization engineer from the Intel team that works with Oracle to optimize their system for the Xeon processor.  Hope to see you there!

Is your head in the “cloud” or at least thinking of going  there?

 

Moving applications to the cloud can be complex, but upfront planning, useful migration tools, and experience can help simplify the process. Many factors must be considered when moving an application to the cloud: application components, network stack, management, security, and orchestration. For enterprise IT professionals looking to move data, applications, or integrated solutions from the data center to the cloud, it is often useful to start with the knowledge and experience gained from previous work.

 

Cloud on-boarding is the deployment or migration of data, applications, or integrated solutions of compute, storage, and network resources to a public, private, or hybrid cloud. On-boarding addresses business needs such as a spike in demand, business continuity, and capacity optimization. Enterprises can use on-boarding to address capacity demands without the need to deploy additional infrastructure. Cloud on-boarding should be considered in the design of an overarching, enterprise-wide cloud infrastructure that supports internal, external, and federated clouds. It provides a very compelling usage model for enterprises that want to maximize the elastic capabilities of cloud computing.

 

The general premise of cloud on-boarding is to allow the  cloud to act as an additional resource or extension of the datacenter for the  following reasons:

 

  • For occasions when the datacenter becomes  overloaded by demand spikes
  • For cost-effective capacity management and  seamless load balancing
  • For disaster recovery and failure mitigation

 

Citrix and Intel have worked together to develop a reference  architecture focused on cloud on-boarding and have identified important  on-boarding considerations, as well as the key software tools and applications  to make the effort successful.

 

Are you attending IDF? If so, stop by and visit Citrix within the Cloud Zone area in the Intel Data Center Pavilion. See our work in action for yourself and learn what your organization should consider to ease its transition to the cloud.

We look forward to seeing you in San Francisco!

 

To learn more about the Citrix solution for Cloud  on-boarding visit www.citrix.com/cloud/cloudbridge.

 


Pete Downing is the Principal Product Manager in the Cloud Networking Group at Citrix Systems.

Pete joined Citrix in 2006 and is involved with Citrix’s cloud computing initiatives, which include the Citrix NetScaler Cloud Bridge, the Citrix OpenCloud On-Boarding Solution stack, and other key strategic cloud initiatives.

Listen live on The Intel Chip Chat Channel on Blogtalk Radio and ask your questions live via @intelchipchat!

Chip Chat Live On Air Sign

Source - Flickr User: katielips

 

 

Not at IDF in San Francisco this week? Intel® Chip Chat is  broadcasting live, bringing you interviews with the industry’s leading  technologists from 10:30 a.m. – 12:30 p.m. PT on Tuesday (9/13) and Wednesday  (9/14).  Questions on cloud? HPC or Intel  software? This is the time to get them answered.

 

Tuesday’s line up is: Bridget  Karlin (GM, Intel Hybrid Cloud)  and Jason Davidson (Strategic Solutions Director, Intel Hybrid Cloud)  from 10:30 - 11; Alex  Williams (editor, SiliconAngle)  from 11:15 – 11:45; and Reuven Cohen (Founder/CTO, Enomaly)  from 12:00 – 12:30.

 

Wednesday's guests are:  Pauline Nist (GM, Intel  Mission Critical Technologies) and James Reinders from the Intel Software  Group.

 

Questions for any of the  guests can be tweeted to @intelchipchat & listen live @ The Intel Chip Chat Channel on Blogtalk Radio!

Late-breaking news for the IDF class Improving Data Center Efficiency with Intel® Products, Technologies and Solutions (DCCS003) (today @ 4:25pm): as planned, Jay Kyathsandra and I will be covering how Intel products, technologies, and solutions – both existing and in development – provide opportunities for server builders & integrators, developers, and end users to innovate and drive improvements in data center efficiency that have graduated from "nice to have" to "requirement" for IT staff.

 

The big news is that we will be joined by special guest Alex Renzin of Facebook, who will talk about how Intel and Facebook are collaborating to deliver a manageability solution, based on an Intel technology implementation of the DCMI specification, to manage BMC-less servers in their data center environments.  Real-world issues, real technology, real innovation – we are definitely excited to hear what Alex has to say.

 

There will be plenty of other opportunities to talk with us about manageability and power management at IDF – stop by the Node Manager booth in the Data Center Zone in the Technology Showcase with your questions or comments about developer opportunities based on Intel technologies.

It’s that time of year again! Only one day until Intel’s annual developer forum (IDF) at Moscone Center in San Francisco, September 13-15.  While there is always a lot of focus on new Intel client products and technologies, we always fight to include server content. I am happy to report that we have again succeeded!

http://www.moscone.com/uploads/multimediaphoto/57/lowRes/MWExterCornerFourthSt-.jpg

Source: Moscone Center - SMG

 

 

 

If you are an Intel server customer, we love to focus on  solution stories with our many software partners.  Given the success of our Xeon E7 (Westmere)  launch earlier this year, we have had the opportunity to see most of our  partners demonstrate data center solutions and utilize key features. This is  your opportunity to come and learn firsthand from many of the experts.  I’ve listed key server sessions below,  highlighting Mission Critical features and Data Center solutions.

 

Additionally, there will be numerous demonstrations that  showcase Intel 10GbE network products, Intel Node Manager, software and  security, and new chip technologies such as our Many Integrated Core (MIC) chip  for scalable technical computing.  New  computing solutions such as Microserver, in addition to Cloud and Virtualization,  will also be in the Technology Showcase, along with partner solutions from  VMware, Microsoft, Oracle, Cisco, Dell, RedHat, Citrix, EMC and others.


Some key IDF Tech Sessions featuring Intel Xeon mission critical technologies are listed in the IDF Program.

These sessions will be delivered by many of our Key  Architects, Strategists, Software Engineers and performance experts.  Please check out the IDF  Program for detailed abstracts, speaker information and times and  locations. They provide an opportunity to see how Intel technology currently  delivers real world solutions, along with future product plans and emerging  technology trends.

 

You will also want to attend the Executive Keynote sessions to hear our executives discuss Intel’s future plans. Intel will also host bloggers and broadcast from IDF. I’ll be sure to share my impressions and reactions to the conference after IDF, so look out for another blog post soon! See you at IDF!

I think it was the mid-'80s when McDonald's advertised the "McDLT" (I loved that music). The claim to fame for this 'burger' was the packaging. It was all about separation by temperature: the hot meat separated from the cold and delicate lettuce, to be joined sometime later by the consumer. At that point, my burger purchase-to-consumption delta-t was about 5 seconds, and I didn't really benefit from the separation. I never bought one...

 

http://3.bp.blogspot.com/_f-W4oMS8xcI/SdA67AoAxLI/AAAAAAAAAAw/KjRt7hd6oog/s320/McDLT.JPG

 

Twenty-five years later, I look at a customer data center and say, "You need to keep your hot side hot and your cold side cold." Then I chuckle, inexplicably (to the customer). The stuff we remember...

 

But I was correct: you really do want to keep them separate. I started digging around on the Internet and found that hot/cold separation is a well-established method for improving data center efficiency, with solid intellectual discussions of its benefits.

 

I do not advocate anybody's solution, but the benefits of separation seem obvious.

 

Separating your hot and cold air streams optimizes your use of cooling and fan energy. Separation also makes it possible to adopt all kinds of cool (pun intended) energy saving alternatives.

With the hot and cold streams separated, it becomes possible to:

  • Inject cool outside ambient air into the hot stream (free cooling)
  • Completely vent the hot stream outside and pull outside air into the chillers (which may not need to do any chilling for much of the year)
  • Use that heat to warm living space, supplementing the heating plant for office space
  • Use available coolants like water to pre-chill the hot stream, and many more
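
To put a rough number on the fan-energy point above, here is a back-of-the-envelope sketch. It uses the well-known fan affinity approximation that fan power scales roughly with the cube of airflow; the airflow-reduction percentages are hypothetical examples, not measured data from any particular facility.

# Back-of-the-envelope sketch: fan power scales roughly with the cube of
# airflow (fan affinity laws), so trimming the bypass/recirculation airflow
# that hot/cold separation eliminates pays off more than linearly.
# The airflow reductions below are hypothetical, not measurements.

def fan_power_savings(airflow_reduction: float) -> float:
    """Fractional fan-power savings for a fractional airflow reduction,
    using the cube-law approximation P2/P1 ~ (Q2/Q1)**3."""
    remaining_flow = 1.0 - airflow_reduction
    return 1.0 - remaining_flow ** 3

if __name__ == "__main__":
    for cut in (0.10, 0.20, 0.30):
        print(f"{cut:.0%} less airflow -> roughly {fan_power_savings(cut):.0%} less fan power")

In this simple model, even a modest 20 percent reduction in required airflow works out to roughly half the fan power, which is why separation tends to pay for itself quickly.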

 

Lastly, it is relatively easy to achieve. The barrier need not be perfect; heavy plastic curtains can be a cheap way to isolate the air flows (think of the freezer sections in some grocery stores).

 

Virtually every customer I speak with knocks on the door of power, space, or cooling constraints.  Hot aisle/cold aisle separation can go a long way to reduce the cooling problem. Fortunately, I also have a solution to the power and space problem! I'll save that for my next post!

http://www.intel.com/idf/pix/marquee/IDF11_Sub-page_Banner_Quantum.jpg

 

 

The industry’s leading technologists will be at Intel Developer Forum in a couple of weeks, and we decided to grab them for two days of Chip Chat and Conversations in the Cloud livecasting!

 

On Tuesday, Conversations in the Cloud will have an excellent lineup of Intel and industry cloud computing experts, while our mission critical, storage, and technical computing gurus will stop by Chip Chat on Wednesday.

 

If you can’t attend IDF, why not listen to our livecasts from 10:30 a.m. to 12:30 p.m. each day?

 

As always, we’re interested in what you want to know from the experts. You can send in your questions for each guest via our @intelchipchat Twitter account and we’ll pass them along during the livecasts!

 

Stay tuned for more announcements on the guest lineup!

We're giving away two NetGear home servers at IDF 2011! Here are the official rules and information:

 

 

How the Contest works:

  • The giveaway will be held during IDF 2011 on September 13 and 14, 2011.
  • Follow @IntelXeon on Twitter to find out the exact location and time to visit for each prize.
  • The first person to visit the Intel Data Center Zone will be awarded a NetGear home server.
  • To be eligible to win, participants must be present.
  • A total of two NetGear home servers will be given away during IDF; however, only one NetGear home server will be awarded at each tweeted location. Winners must be present to win.
  • Winners will receive their prize by mail and must provide a valid address and contact information.

 

 

For full rules and eligibility requirements please review the official Intel IDF 2011 Twitter Contest Rules.

Download Now

 

České Radiokomunikace is a modern broadcasting and telecommunications company with nationwide operations across the Czech Republic. It was the first company in the country to offer digital TV broadcasts. Alongside its TV and radio broadcasting services, it also provides a full range of voice, data, and Internet services. Looking to develop and strengthen its market segment position by continually improving services, it recognized the value in offering cloud-based services to Czech companies and built a data center from which to launch them. To ensure top server performance, security, and energy efficiency, it implemented the Cisco Unified Computing System* powered by the Intel® Xeon® processor 5600 series.


“Entry into the information communications technologies services market segment was a big challenge for us,” explained Marcel Procházka, head of business development and strategy for České Radiokomunikace. “We carefully chose collaborators who could deliver cloud computing solutions. Intel, together with Cisco, offered not only an optimal technical solution in the form of the Cisco Unified Computing System equipped with Intel® processors, but also wider business cooperation and IT knowledge to help us succeed in a highly competitive market segment.”


To learn more, download our new České Radiokomunikace business success story. As always, you can find this one, and many more, in the Intel.com Reference Room and Survival Kit.

 

 

*Other names and brands may be claimed as the property of others.

Download Now

 

NaviSite, Inc., a Time Warner Cable Company, is a leading worldwide provider of enterprise-class, cloud-enabled hosting, managed applications, and services. When the company launched its cloud computing solutions, it used Intel® Xeon® processors as the foundation, building a dense cloud environment that delivers more than 10 times the revenue per rack of non-cloud services. By expanding the environment with the Intel Xeon processor 5600 series, the company has further increased server density and enabled customers to move more and larger workloads to the cloud.


“The Intel Xeon processor 5600 series enables us to capitalize on greater core counts and memory capacity per server,” says Chris Patterson, senior product manager of cloud and hosting services at NaviSite. “As a result, we can accommodate more and larger workloads without significantly increasing our footprint.”

 

To learn more, read our new NaviSite business success story.

Download Now

 

Joyent is a multi-tenant cloud innovator that serves the stars and rising stars of social network gaming, real-time mobile applications, and other dynamic Web segments. It meets their performance and scalability requirements with Intel® Xeon® processors and cloud software that Joyent developed over years of running a public cloud. Company executives say that moving from the Intel Xeon processor 5500 series to the 5600 series helped Joyent effectively double its capacity and revenues, and that Joyent’s ability to support 400 virtual machines on a server is the envy of its competitors.


“Density equals revenue for us,” says Jason Hoffman, Joyent’s co-founder and chief scientist. “By increasing our core density and memory density, we can put more tenants on a server, which means higher revenues.”
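
As a purely illustrative sketch of that “density equals revenue” arithmetic: only the 400-VMs-per-server figure comes from the post above; the servers-per-rack count and per-VM monthly price below are hypothetical placeholders, not Joyent’s numbers.

# Illustrative arithmetic only: how VM density per server compounds into
# revenue per rack. The 400 VMs/server figure is from the post; rack size
# and per-VM monthly price are made-up placeholders, not Joyent data.

def monthly_revenue_per_rack(vms_per_server: int, servers_per_rack: int,
                             revenue_per_vm: float) -> float:
    """Monthly revenue attributable to one rack of multi-tenant servers."""
    return vms_per_server * servers_per_rack * revenue_per_vm

if __name__ == "__main__":
    baseline = monthly_revenue_per_rack(200, 20, 30.0)   # hypothetical baseline density
    doubled = monthly_revenue_per_rack(400, 20, 30.0)    # density doubled on the same rack
    print(f"Baseline: ${baseline:,.0f} per rack per month")
    print(f"Doubled density: ${doubled:,.0f} per rack per month")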


To learn all about it, download our new Joyent business success story.

Download Now

 

Expedient Communications has added infrastructure as a service (IaaS) private cloud computing to its managed services portfolio, choosing the Intel® Xeon® processor 5600 series as a foundation for scalable, secure, and efficient cloud services. Upgrading from servers based on the Intel Xeon processor 5500 series, Expedient’s IT leaders say they gained a 100 percent increase in compute capacity and density. They also cut their watts consumed per GB of memory, a key indicator for energy efficiency in the cloud, by almost 50 percent.


“When we moved from the Intel Xeon processor 5500 series to the 5600 series, we essentially doubled our compute load and density per pod,” explained Alex Rodriguez, vice president of systems engineering and product development for Expedient Communications.  “When we bring in a pod based on the Intel Xeon processor E5 family, we expect to see a 130 percent increase.”


For the whole story, download our new Expedient Communications business success story.

From 1861 to 1865, the U.S. was embroiled in a civil war between the Confederate States of America (the Confederacy) and the federal government (known as the Union). The tactics employed during the conflict are best understood through a series of campaigns fought in places such as Manassas (Bull Run), Antietam, Fredericksburg, Chancellorsville, and Gettysburg.

 

According to author and historian Stephen W. Sears, the Gettysburg campaign was the largest of the war. The battle itself lasted about three days (July 1–3, 1863) and accounted for combined losses of approximately 57,000 men. One of the actions that defined this battle, and perhaps the one that ultimately determined the outcome of the entire Civil War, is known as Pickett’s Charge.

 

So, what exactly does this history lesson have to do with cloud computing?  My latest post on Data Center Knowledge draws parallels between Pickett’s Charge and my second fundamental truth of corporate cloud computing strategy:  Cloud is a top-down architectural framework that binds strategy with solutions.

 

After discussing the three levels of architecture we use at Intel, I try to impose some level of reality into the discussion by citing Pickett’s Charge as an example of a cause (think of it as a solution) that had an unforeseen effect on a broader strategy - something that’s essential to consider as you build your company’s cloud computing framework.

 

As always, I welcome your feedback and encourage you to share your company’s experiences about how you align top-down frameworks with cloud solutions.

 

For more information and any questions please feel free to contact me on LinkedIn.
