
IT Peer Network

22 Posts authored by: CATHERINE SPENCE

Back in the early 2000s, Amazon may have been the progenitor of today’s API economy, when the company CEO issued a mandate that all teams must expose their data and functionality through service interfaces. While Intel IT’s executives have not issued such a directive, we make sure that every layer in our enterprise private cloud exposes and consumes web services through APIs. In fact, this is a key component of our overall hybrid cloud strategy, as described in the enterprise private cloud white paper we recently published. As the IT Principal Engineer for Intel’s cloud efforts, I’ve taken APIs’ important role to heart and have integrated this concept into our architecture.



Why Is This so Important?


First, as Amazon foresaw, you cannot scale fast enough unless you automate, and to automate you need APIs. Intel IT, acting as its own cloud provider, requires automation and self-service. If all you have is a GUI with no scripting capabilities, you cannot automate. For example, you cannot scale out and add more applications, and—just as importantly—scale back as business needs change. Therefore, we strive to provide an API and a command-line interface for every layer in our private cloud—including IaaS, PaaS, and DBaaS.


Automation, enabled by APIs, also helps IT keep our costs down and do more with less—the IT mantra of the century. Current industry data suggests that in a highly automated environment, a single system admin can manage 1,000 servers or more. While we haven’t reached that point yet, we have made significant strides in increasing automation and providing more services without increasing cost. More importantly, automation is critical for business agility supported by self-service capabilities. As the speed of business increases, users need tools and automation to “do it themselves” as opposed to waiting for specialized personnel to act on their behalf.


Finally, exposing APIs is a critical part of our move to a hybrid cloud model, where workloads can be balanced among clouds by using policies. Without consistent API exposure, such a hybrid cloud model would be impossible.


How Have We Implemented API Exposure at Intel?


We strongly encourage our application developers to create cloud-aware applications—even the cloud itself should incorporate cloud-aware principles. Part of being cloud-aware is implementing small, stateless components designed to scale out and using web services to interact among components. We are heavily promoting the use of RESTful APIs for web services, for several reasons:


  • The RESTful model is easy for developers to use, with a small set of well-defined methods (GET, POST, PUT, and DELETE).
  • REST is based on HTTP, which is designed to scale and is tolerant of network latency—very important in the cloud.
  • Idempotent calls (calling an API multiple times with the same data returns the same result) simplify retry and error-handling scenarios.
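The idempotency property in that last bullet is what makes blind retry safe. As a minimal sketch (the `FlakyResourceStore` below is a hypothetical in-memory stand-in for a RESTful service, not anything from our environment), an idempotent PUT can simply be retried until it succeeds, with no risk of leaving the resource in an inconsistent state:

```python
class FlakyResourceStore:
    """Toy stand-in for a RESTful service that fails transiently."""

    def __init__(self, fail_first=2):
        self.resources = {}
        self._failures_left = fail_first

    def put(self, path, body):
        # Simulate a transient network error on the first few attempts.
        if self._failures_left > 0:
            self._failures_left -= 1
            raise ConnectionError("transient failure")
        # PUT is idempotent: repeating the call leaves the same state.
        self.resources[path] = dict(body)
        return 200


def put_with_retry(store, path, body, attempts=5):
    """Because PUT is idempotent, blind retry is safe."""
    for _ in range(attempts):
        try:
            return store.put(path, body)
        except ConnectionError:
            continue
    raise RuntimeError("gave up after %d attempts" % attempts)


store = FlakyResourceStore()
status = put_with_retry(store, "/vms/42", {"size": "large"})
```

The same pattern would not be safe for a non-idempotent POST, which is one reason the choice of method matters when designing for automation.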


We’re adamant about API exposure at every layer of our private cloud for one reason: automation supports a highly agile environment and is essential to the success of our hybrid cloud strategy. Please check out the paper and tell us what you are doing in this space. I would love to read about everyone’s ideas.

- Cathy

Catherine Spence is an Enterprise Architect and PaaS Lead for the Intel IT Cloud program.

Connect with Cathy on LinkedIn
Read more posts by Cathy on the ITPN

Intel IT is actively implementing PaaS as the next logical step for our enterprise private cloud, to accelerate custom application deployment and promote cloud-aware application design principles. Our PaaS environment will build on our already successful infrastructure as a service (IaaS) efforts, and will provide an environment featuring self-service, on-demand tools, resources, automation, and a hosted platform runtime container.


But, as many companies discovered earlier this year during a well-publicized IaaS failure, a platform is only as useful as it is available. Companies that designed for failure weathered the outage far better than companies that simply hoped nothing would ever break. For example, one company that used stateless services, graceful degradation methodologies, and multiple availability zones (AZs) experienced only minor errors and higher latency; less prepared companies were knocked out completely.


In our PaaS environment, we promote design for failure at both the platform and the application levels.


Platform Level: We want the underlying platform to do as much as it can to provide cloud capabilities for applications written to PaaS. For our PaaS pilot, we are providing high availability within the platform, but that alone is not enough. We strive to implement an active/active model, with PaaS instances running in multiple AZs. In an active/active model, applications are deployed to and synchronized between a primary and a secondary PaaS instance. If the primary PaaS fails, we seamlessly fail over to the secondary PaaS using global load balancing. The platform provides “eventual consistency” of data: over time, all updates propagate through the system, the data associated with the applications running on the various PaaS instances becomes consistent across all AZs, and uncommitted transactions are resubmitted by end users. An important technique for providing eventual consistency is sharding, in which data is horizontally partitioned across the database architecture.
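Hash-based key routing is one common way to implement the horizontal partitioning that sharding describes (this sketch illustrates the general technique under assumed names; it is not a description of our actual platform). Each key is hashed and mapped to one of a fixed number of shards, so the same key always lands on the same partition:

```python
import hashlib

NUM_SHARDS = 4
# Each dict stands in for one horizontal partition of the database.
shards = [dict() for _ in range(NUM_SHARDS)]


def shard_for(key):
    # Hash the key so rows are spread evenly across partitions,
    # and so every caller routes a given key to the same shard.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS


def put(key, value):
    shards[shard_for(key)][key] = value


def get(key):
    return shards[shard_for(key)].get(key)


put("app-123", {"az": "primary"})
```

Because routing is deterministic, reads and writes for one key never need to consult more than one shard, which keeps each partition small and independently replicable across AZs.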


Application Level: We are actively promoting the idea of cloud-aware applications. We want the application—that is, the application developers—to take more responsibility for designing for failure. Traditional applications take it for granted that a five-9s infrastructure is available, but in the cloud that is not necessarily true; cloud-aware applications should expect and design for infrastructure outages. Building cloud-aware applications is a new area for Intel developers, and we are helping build new skill sets in our development community so that developers can design simplified, fault-tolerant, modular services that run in a virtualized, elastic, multi-tenant environment.
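One concrete design-for-failure technique is graceful degradation: when a dependency is down, serve a cached (possibly stale) answer or a safe default instead of propagating the outage to the user. A minimal sketch, with hypothetical service callables standing in for real dependencies:

```python
def call_with_fallback(primary, fallback, cache):
    """Try the live service; on failure, degrade gracefully by
    serving the last known-good result, or a default."""
    try:
        result = primary()
        cache["last_good"] = result  # remember for future outages
        return result
    except ConnectionError:
        # Dependency is down: degrade rather than fail outright.
        if "last_good" in cache:
            return cache["last_good"]
        return fallback


cache = {}


def healthy():
    return "fresh data"


def broken():
    raise ConnectionError("AZ outage")


first = call_with_fallback(healthy, "default", cache)   # live answer
second = call_with_fallback(broken, "default", cache)   # served from cache
```

This is the behavior that let the better-prepared companies in the IaaS outage above limp along with higher latency rather than going dark.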


For more information on Intel IT's PaaS efforts, including a detailed discussion of our pilot project and key learnings, see “Extending Intel’s Enterprise Private Cloud with Platform as a Service.”

Intel’s team of collaboration architects won an IT industry award for the enterprise architecture (EA) deliverables which enable global collaboration solutions throughout Intel.




The basis for the award was comprehensive EA for the broad, integrated set of collaboration tools Intel IT provides on secure computing infrastructure.  The scope of collaboration includes videoconferencing, audio conferencing, shared content repositories and collaborative web sites, webcams, softphones and social media.  Specific examples of architecture called out in the nomination were unified collaboration & communications (UCC), presence-enabled applications and unified meeting resources (UMR), with excerpts of actual strategic, reference and solution architecture artifacts.


Other architecture nominations competing in the “Customer Oriented Business Models” category came from international organizations in sectors such as the US military, insurance, real estate, engineering/construction, and telecommunications.


The award ceremony was held on July 28, 2011 in Bangalore, India and was attended by more than 250 top IT and management professionals, including award recipients from over 10 countries. The architecture excellence competition was sponsored by iCMG, a business consulting firm that also hosts the annual Architecture World forum. The idea behind the yearly competition is to encourage the development of a wide range of architecting talent.


Congratulations to the core architecture team: Cathy Spence, Cindy Pickering, Scott Trevor, Rick Kraemer, Warija Adiga, Ganesh Narayanan, Omer Ben-Shalom, Scott McWilliams, and John Simpson.


For more information on Intel IT’s collaboration solutions read Enabling Global Collaboration with Intel®-based Infrastructure.

Platform as a Service (PaaS) is the delivery of a cloud computing platform on which developers create and deploy web applications and cloud services.  Industry analysts claim that PaaS is already a compelling alternative to traditional application development due to high availability, low cost of entry, elimination of infrastructure management, and ease of application deployment and support.


In the external cloud, PaaS has unique advantages for applications that serve many users outside the enterprise, require extensive integration with outside services, and/or provide a temporary or highly elastic capability. On the market today there are more than a dozen PaaS providers, each offering a cloud development environment that locks customers in to a particular development methodology. We’re exploring some of the options with an eye toward enterprise scale, security models, and programming methodologies. Each environment has different capabilities and limitations for developing and deploying applications.


As our enterprise environment evolves into a single internal cloud that scales based on demand, application developers will construct applications structured for virtualized, web-based environments that mesh seamlessly between internal and external clouds. Internally hosted PaaS is an opportunity to provide an integrated approach to the cloud application and services layers. With Service-Oriented Architecture (SOA) as the foundation, developers could incorporate standard underlying services such as security and manageability, and the new applications could themselves be designed as reusable services.

In a recent TechRepublic article, Jason Hiner asks: Are Netbooks quietly driving us to Thin Clients and Cloud Computing?


Of course, the article is primarily about netbooks and how wonderful they are. No argument here. But the question of thin versus cloud has popped up in an interesting way. Thin is important because of the nature of the netbook but what does that have to do with cloud computing? Not all cloud applications are thin.


Perhaps the logic is as follows:

  • If cloud, then we are delivering services over the internet
  • If internet, then we must be using a browser
  • If browser, then the computing must be taking place in the backend, with only the UI distributed to the client device
  • Therefore, all cloud devices must be thin


So what about rich clients? We happen to think that they are perfectly suited to cloud computing. Maybe our latest whitepaper on Better Together: Rich Clients and Cloud Computing can help set the record straight – or at least prompt some alternate thinking.

As we abstract IT services and increase the use of SaaS and cloud computing, we contend with the force of consumerization. We are seeing a fundamental shift in focus from the enterprise to the internet, and a shift from standardized offerings to user selection and personalization in the choice of applications and client devices.


How can we improve agility and manage emerging complexity while balancing quality of service with innovation for different types of applications? For some, large-scale abstraction leads to efficiencies for commoditized functions, whether they are hosted in house or outsourced. Other applications benefit from a more ad hoc approach and might be more easily sourced from the cloud. IT will not be successful if we control services too tightly. Instead, we must create a framework for our users to make smart choices.





Cloud Computing is getting a lot of press lately as a way to quickly add new computing capability and reduce costs.  Many enterprises, including Intel, are determining how best to extend to the cloud.


By "cloud" we are referring to a complex computer network, most often the internet.  Cloud computing consists of Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS).


The Intel strategy is to grow the cloud from the inside out.  We are taking advantage of SaaS and IaaS where possible and we are building a private, internal cloud (iCloud) computing environment.  Find out more by reading the following white paper:


Developing an Enterprise Cloud Computing Strategy

OS Streaming can deliver considerable manageability benefits to Intel IT training rooms with multi-user PCs.  To evaluate performance and utilization in a production environment, we conducted proof of concept (PoC) testing in two rooms located in different buildings on an Intel campus.  We found that OS streaming improved manageability and delivered fast client boot times with moderate server and network utilization, even during worst-case boot storms.


See full paper posted at:

Collaborative Argumentation Study

In this project, Intel Information Technology and MIT studied web-based social media as a tool for understanding collective intelligence and distributed decision-making. The useful question we posed was "What are good ways to balance the potential productivity advantages of open collaborative computing versus the data security needs of the organization?"






Over the three-week period we generated 73 author accounts, with 51 from outside Intel. The users contributed 64 certified posts, with 40 from outside Intel. Twenty-five ratings were collected. Our resulting deliberation map was well structured and remarkably complete.


What did we learn?





First and foremost, we validated that there is good potential in the combination of social media and argumentation. Social media gives us the ability to host large-scale discussions with a vast number of diverse users over the internet. It also enables us to readily combine discussions internal and external to Intel, if desired.




One of the biggest benefits of the Deliberatorium was the ease of generating the argument map. The "moderate-as-you-go" approach saved a great deal of time during post-processing of the collected data, which was especially important as the number of users and topics scaled. The compact format reduced complexity and helped make sense of the sprawling threaded discussions characteristic of other tools. It also produced an artifact that can be used for later data mining or as a historical record of the project.



Based on this effort, we have decided that the argumentation capability is an important overlay for social computing tools. In the future we want to find and link all related content regardless of the source: web, wiki, etc. The key is flexible input with robust analytics and reporting to get better output.


Thank you for participating

Thanks to everyone who participated in our study, especially our top contributors: Luca, Ultimo15, Adam, and Lfriedl. Inside Intel, thanks to our most active contributors: Chris Wisehart, Guillermo Rueda, and Matt Rosenquist.


For more information

Klein, Mark, and Iandoli, Luca. "Supporting Collaborative Deliberation Using a Large-Scale Argumentation System: The MIT Collaboratorium" (February 20, 2008). MIT Sloan Research Paper No. 4691-08. Available at SSRN:






Visit the MIT Deliberatorium Tool

Intel Information Technology and MIT are studying web-based social media as a tool for understanding collective intelligence and distributed decision-making. The useful question we have posed is "What are good ways to balance the potential productivity advantages of open collaborative computing versus the data security needs of the organization?"


In order to maximize our analysis of the discussion results and MIT Deliberatorium tool, we are extending the deadline to October 6, 2008.




Please take five minutes to add one new post to the discussion and to rate one other person's contribution. This will enable us to gather a wider range of opinions on the topic and further investigate the value of the approach.



Add a Post

Click on the "add" button located to the right of any item in the map. A dialog box will pop up. Enter your post and then click "submit" to save your entry.

Rate a Contribution

Click on any item in the map that you want to rate. A description of the item will appear in the right panel. Click on one of the stars to select your rating.




Top contributors to the discussion will be recognized when we post our final project results.



For More Information, watch the ten-minute video clip for a concise overview. Contact with any questions.



Thank you for your continued participation!



Traditional corporate information is stored in highly secure repositories within enterprise boundaries. New forms of data are being created in emerging mediums such as blogs and wikis. Cloud computing and network-based systems offer new venues for processing and storage.


The Big Question


What are good ways to balance the potential productivity advantages of open collaborative computing versus the data security needs of the organization? Consider several examples:


  • Does having access to social media make you more productive? Is it secure?

  • Is having access to raw corporate data more productive than secure, specialized tools?

  • Does IT need more control or less? What new tools and/or methods are required, supporting a more open environment?


These questions may never reach wide consensus, yet the decision is crucial to every CIO. We will use your feedback for IT analysis.





Make Your Opinion Heard



Become one of the first to test MIT's new collective intelligence tool called Deliberatorium. Join the fun in two simple steps:



1. Create your account and log in to the MIT Deliberatorium



2. Add your perspective to the discussion


    • View, rate, or comment on other authors' posts using pros & cons

    • Add your own ideas for new solutions and issues


This discussion topic will be available until September 29, 2008, and then results will be shared. Feel free to forward this invite to any interested parties.




It's Cool to Argue



Research shows that a large group of diverse individuals tends to get the right answer because they bring different perspectives into the discussion. Help Intel and MIT learn more by participating in this web-based argument.





For More Information

Watch the ten minute video clip for a concise overview.

If you are at the Intel Developer Forum in San Francisco this week you might want to visit the System-on-a-Chip Community for a very cool demonstration of streaming media over WiMax. Our Intel IT team put this demonstration together working with the Intel product developers and their ecosystem partners.


This demonstration shows a connection between a corporate office and a remote branch office via WiMax. The branch office uses an all-in-one appliance (a secure mesh router) containing a WiMax radio. The corporate office has a WiMax basestation and streams multimedia content over the network connection back to the branch office.


The secure mesh router in the demonstration is built using an Intel(R) EP80579 Integrated Processor, formerly known as Tolapai. The EP80579 is the first of these integrated processors that is a system-on-a-chip (SOC). This is important because it heralds a new generation of smart, flexible, light, and simple devices for the embedded internet. As an IT researcher I'm no product expert, so check out the full official SOC Press Kit for more details.


Here are some pictures from IDF:





Stop in and say 'hi' to Bruce!

In this brief video clip, Intel Systems Engineer Christian Black illustrates the concept of application streaming used with virtualization. He provides an overview of how it works and reviews the benefits for users and administrators.





Information Week recently released an excellent Special Report on Software as a Service (SaaS). A poll of 374 business technology professionals showed that 50% of organizations are considering or running one or more enterprise applications over the Internet as a service. I actually participated in the survey and you can probably guess which quote is mine in the “Our Readers Weigh In” section of the article.


The analysis goes on to conclude that SaaS is maturing and becoming part of enterprise IT strategy. The recommendation is that “SaaS should be looked at as just one more delivery method that may or may not fit your specific organization’s need.” How true!


If you take the standpoint of an individual client system, services can be delivered to it in an increasing number of ways. The service can come from the Internet cloud or from within the enterprise. The application processing can take place on the client or be hosted on a server somewhere. It might run within a virtual machine or natively within an OS. The client GUI might be installed locally, streamed, or hosted with a web interface. The service could be mashed up or self-contained. With all of these evolving service delivery mechanisms and options, it will be interesting to see how we arrive at the correct balance at the client.


One Size Fits All

Posted by CATHERINE SPENCE Mar 28, 2008

According to “one size fits all” is an adjective that means “acceptable or used for a wide variety of purposes or circumstances; appealing or suitable to a variety of tastes.”  In IT, we have used this approach for how we deliver client systems to users.  We pick a few key hardware platforms and create OS builds that meet security requirements and contain a base level of software applications.   Users take delivery of new systems and then customize from there with various configuration settings and specific software needed for their jobs.  The “one size fits all” model has worked pretty well over the years.  It has been a highly successful way for IT to mass produce systems and support users in a standard way.


The world is changing.  The number of available choices in hardware platforms is significantly increasing, ranging from desktops to portables to blade clients to smartphones.  Users are becoming increasingly aware of the choices and want to participate in the decision over what devices are best suited to their work style.  In some cases, they want to use different devices simultaneously (for example, a smartphone and a laptop).  In terms of software applications, new computing models are emerging to respond to the complexity.  IT does not want to create new applications for each kind of device introduced in the environment.  A major challenge will be to consolidate backend infrastructure and provide a common user experience across the spectrum of client hardware platforms, not to mention all of the issues related to security and IT governance.  We must embrace these challenges because the days of “one size fits all” client hardware are numbered.
