
Hello, everyone. I’m Ed Jimison, Technology Evangelist with Intel IT.


In this series of blog posts I’ll talk about the concept of device-independent mobility and how client virtualization technologies are key to making this concept a reality.


First of all, what do we mean by device-independent mobility? Remember way back when, in the nineties, when the term mobile client was starting to be used a lot in the industry? The mobile client was a new approach to desktop computing; take the desktop that was fixed to… well, your desk’s top, add a rechargeable battery and an ethernet dongle, and voila, you had a mobile client. People didn’t want to be tethered to the desk anymore. They wanted to take the computer and information with them, and this was the first step toward enabling this.


Now, fast-forward to the present. We’ve made huge strides in mobile technology and it’s only going to get better. Device-independent mobility is similar in concept to the mobile client but it takes it up a notch or two. The central idea is that as a corporate employee I can now access my IT services, applications, and data from any device, anytime, anywhere. Whether I’m sitting in the waiting room with an elderly parent waiting for the late doctor, or sitting on my couch watching the baseball scores, I would be able to get to the important services I need using whatever device I had access to at that moment. It’s making my corporate world mobile and, most importantly, allowing it to be available to me from a variety of devices.


So remember that device-independent mobility isn’t about working more hours. It’s about making it easier to get access to whatever information you need as you go about your busy day. Intel envisions a Compute Continuum that provides a seamless, consistent experience across devices. It makes sense to include IT applications and services as part of this continuum. We’ve already started big with email and calendar access on personally owned devices. The next challenge is to take our services to in-vehicle infotainment systems, smart televisions, context-aware tablets, and whatever else the world comes up with.


In my next blog post I’ll talk about the 3 key elements that form device-independent mobility and how client virtualization plays an important part in making this a reality.

The IT security industry specializes in protecting information processed, transferred, or otherwise controlled on computer systems. Yet several aspects of computer security are misunderstood by everyone from casual computer users to computing professionals. A topic this broad is difficult to summarize in a few short words. We could simply tell everyone to make sure their antivirus software is scanning for malware, or to be careful where they enter sensitive information. These are important considerations, but only by understanding the risks of computer systems and the information they hold, taken together, can we comprehend the challenge.

In a video blog, Intel’s CISO Malcolm Harkins describes a common misperception of risk when it comes to the information people are willing to share with the world through social media. For me, this brought up other aspects of IT security that are also misunderstood, and it compels us as information security professionals to share what we can, whenever possible, to help communicate that security is everyone’s job in an organization and important knowledge for any computer user.

One area that presents many misconceptions is computer and network security. The computing environment plays a major part in the risk equation because we need to verify that it can provide the level of security required for its location. But a common misconception here is that a firewall blocking unwanted external traffic, or antivirus software, will prevent all malicious traffic or software from entering. In addition, protecting all computer systems with the same level of security throughout the infrastructure, regardless of each system's purpose, creates a one-size-fits-all security model that is much more costly to maintain.

Another misconception is in the area of data classification and compliance. The concepts used to evaluate risk should not be based on the type of information alone, but also on where it resides, who needs access to it, and what technology can be used to protect it. Evaluating the data alone may give the misconception that compliance is security. Compliance by itself is not security, and it may lead to a false sense that security can be achieved by following a checklist. The classification of information, along with how it will be accessed, is an important input to the risk equation because it allows an evaluation of the threats specific to that data; but it is equally important to consider the computing environment in which the data will be protected.

Perimeters protected with a firewall are no longer sufficient on their own against sophisticated, targeted attacks. Because the list of factors to consider when securing information stored or processed on computer systems keeps growing, any organization's security posture requires constant re-evaluation. Expanding on the security-in-depth strategy, in the white paper titled “Rethinking Information Security to Improve Business Agility”, leading information security experts at Intel IT describe a strategy for evaluating risk based on the location of the information along with the requesting user’s location, referring to these locations as “security zones”. Some of these zones can be considered trusted based on a score that evaluates the source of the request and the destination of the data. Depending on the score, even a legitimate user might end up with only limited access to data, due to factors such as the trust level of the user’s current location. This new paradigm for information security is designed to meet a broad range of evolving protection requirements, including the assessment of new usage models and threats. It also challenges the expectation that preventative controls such as firewalls are good enough: detective and corrective controls are an essential part of an information security process as well, as is evaluating each control's effectiveness on an ongoing basis.
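The zone-based idea can be illustrated with a small sketch. Note that the zone names, trust scores, and thresholds below are invented for illustration; they are not the actual values from the white paper.

```python
# Illustrative sketch of zone-based access decisions: the source zone of a
# request and the destination zone of the data each carry a trust score, and
# the combined score determines the level of access granted. All zone names,
# scores, and thresholds here are hypothetical.

SOURCE_ZONE_TRUST = {"corporate_lan": 3, "vpn": 2, "public_internet": 1}
DATA_ZONE_TRUST = {"public": 3, "internal": 2, "restricted": 1}

def access_level(source_zone: str, data_zone: str) -> str:
    """Combine the trust of the requester's location with the sensitivity
    of the data's location to pick an access level."""
    score = SOURCE_ZONE_TRUST[source_zone] * DATA_ZONE_TRUST[data_zone]
    if score >= 6:
        return "full"
    if score >= 3:
        return "limited"   # legitimate user, but a lower-trust combination
    return "denied"

# Even a legitimate user on the corporate LAN gets only limited access
# to the most sensitive zone in this toy policy.
print(access_level("corporate_lan", "restricted"))  # limited
```

The point of the sketch is only that access is a function of both locations, not of the user's identity alone.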

For these reasons, I believe one of the most common misconceptions is that computer security is a “set it and forget it” list of security options. It is, rather, an ongoing process that evaluates risk based on both the type of information and the computing environment being used. Just as the bad guys are constantly working to circumvent mitigating controls, we must continually evaluate whether the appropriate countermeasures are in place. Information security is an ever-changing challenge, and the industry must constantly prepare for technology changes in order to be ready for the next wave of vulnerabilities and associated risks. The important thing is that we are now more commonly asking about the security implications of any new computing technology. Cloud computing is one example: there is greater concern than ever before about sensitive information being placed on uncontrolled or untrusted computer systems. We can only hope that trend will continue.

Virtualization of our infrastructure is a vital part of our eight-year data center strategy, which is on track to create $650M of business value for Intel by 2014. Currently, 53% of our Office and Enterprise environment is virtualized. We are accelerating our virtualization efforts as we continue to build out our enterprise private cloud architecture for higher efficiency and agility. We tripled our rate of virtualization during 2010 and have set an internal goal of becoming 75% virtualized.


Along the way we realized that not all cores are created equal, and the differences are perceptible in virtualization. Here's why higher-performing cores matter:

  1. Demanding and business-critical VM workloads, where throughput and responsiveness matter
  2. Dynamic VMs whose compute demands fluctuate throughout the day and could exceed the headroom of a lesser-performing core
  3. Reduced software licensing costs, since some vendors charge on a per-core basis and higher-performing cores require fewer software instances
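The licensing point can be made concrete with a back-of-the-envelope calculation. The workload units, per-core throughput figures, and license price below are invented for illustration; they are not vendor numbers.

```python
# Hypothetical comparison: to host the same aggregate VM workload, fewer
# higher-performing cores can mean fewer per-core licenses. All numbers
# below are invented for illustration.
import math

def licenses_needed(total_workload_units: float, units_per_core: float) -> int:
    """Cores (and therefore per-core licenses) required to host a workload."""
    return math.ceil(total_workload_units / units_per_core)

workload = 1200  # arbitrary units of aggregate VM demand

fast_cores = licenses_needed(workload, units_per_core=100)  # higher-performing core
slow_cores = licenses_needed(workload, units_per_core=60)   # lesser-performing core

price_per_core_license = 500  # hypothetical per-core license cost
savings = (slow_cores - fast_cores) * price_per_core_license
print(f"fast cores: {fast_cores}, slow cores: {slow_cores}, savings: ${savings}")
# fast cores: 12, slow cores: 20, savings: $4000
```

The shape of the arithmetic, rather than the specific figures, is what matters: per-core pricing turns core performance directly into licensing cost.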


This video illustrates nicely why server core performance matters in virtualization.



Once again, I find myself in a position where I should apologize for the delay in posting this last vblog conversation I had with Intel's CISO, Malcolm Harkins. I could say that I saved the best for last, but frankly, they are all good (in my never-to-be-humble opinion). The last question I posed to Malcolm was: what are the security challenges a company faces when employees want to start using their own devices within the corporate environment?


Malcolm talks about both the benefits and challenges of IT consumerization. For example, how do you intermingle, or keep separate, corporate and personal data on the same device? Check out his answer below, and if you haven't had a chance to catch the other four, the links are here.




Having set up the social computing platform for the enterprise, how does IT determine the success of the platform? Should IT be satisfied that one more tool set is available for employees to engage with? How does IT calculate the return on the investment made in enabling social computing within the enterprise? Should IT continue to invest in this platform? What are the indicators showing that the platform is used for the benefit of the company?


There are numerous questions that come to mind when we think of an enterprise 2.0 platform. It is a challenge to calculate the productivity gain resulting from an enterprise social computing platform. I don’t think there is a direct formula for “Return on Influence”: an indicator that says “A knowing B through the platform helped A solve a complex technical challenge” and reduced the cycle time for issue resolution. However, we do know that the platform acts as an enabler for collaboration and knowledge management. We may not be able to identify tangible gains, but the intangible gains of a connected workforce, reuse of information, and prevention of reinvented wheels are significant.


At Intel, we have our own internal social computing platform. Some of the metrics we track are related to adoption, such as active users (creators, synthesizers, consumers) and unique visitors. However, these indicators may not accurately represent the success of the platform. The quality of discussions, the impact of those discussions on users, problem resolution, agility in solving issues, and the ability to find subject matter experts quickly could be better measures of how successful the platform really is.
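The adoption split mentioned above can be sketched in a few lines. The event names and thresholds here are hypothetical, not the actual definitions Intel IT uses.

```python
# Sketch of classifying platform users by engagement, as in the
# creators / synthesizers / consumers split mentioned above.
# The event model and thresholds are hypothetical.
from collections import Counter

def classify(events: dict) -> str:
    """events: counts of a user's actions over a reporting period."""
    if events.get("posts_created", 0) > 0:
        return "creator"
    if events.get("comments", 0) + events.get("ratings", 0) > 0:
        return "synthesizer"   # responds to and curates others' content
    if events.get("views", 0) > 0:
        return "consumer"
    return "inactive"

users = {
    "alice": {"posts_created": 4, "views": 50},
    "bob": {"comments": 7, "views": 30},
    "carol": {"views": 12},
    "dave": {},
}
print(Counter(classify(e) for e in users.values()))
```

A count like this captures adoption, but as the paragraph above notes, it says nothing about the quality or impact of the activity being counted.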


As we start looking more into social analytics, I would like to hear from you on how you measure your enterprise 2.0 platform. Do you feel employees are more productive and collaborative through use of the platform? Or is there a cultural barrier to using the platform to its full potential?

In early 2000, I wrote this paper as both a "remember when" and coursework for my master's degree. While destroying the evidence (shredding old paperwork), I ran across it and decided some may get a kick out of reading it. And even though many enthusiasts have put Intel inside their newly revamped Commodore systems, that's not the point here. It is more a historical look at my beginnings.




In the early eighties, while completing high school, I became fascinated with computers. Each weekend began with me hauling a Commodore PET home so I could continue working on projects I had not finished at school. This alone should be sufficient to paint a picture of my conviction, since this system was an all-in-one unit, much like the iMac, with about twice the physical dimensions, weighing in at over 35 pounds. Since there was no publicly accessible Internet at that time, there was little enthusiasm among the general public for personal computers and their use for anything other than work-related tasks. During the summer of my sophomore year I worked two full-time jobs with the end goal of buying a new computer on the market: the Commodore 64.


When the PET was introduced early in 1977, Commodore's goal was to take over the market that Apple had created six months earlier. It was built with 4 KB of RAM, a monochrome monitor, and an audiocassette system for data storage, at the same cost as the Apple product, except built in a factory rather than in a garage on plywood frames. Lucky for me, the science department at a northeastern Oregon high school decided that computers had a future in the world and bought six of them in 1978. By 1982, when I began playing with the Commodore 64 (C-64), it had just been released, and the school was considering its use, which is why I had my own personal PET on weekends. Since the PET systems were built with such limited storage and memory capacity, the programming that could be done on them was limited in scope (size and complexity).


Having expanded my abilities to the point that the PET no longer provided a challenge, I began petitioning teachers to expand the computing capability we had. By the end of my quest, we had replaced the six PETs with twelve C-64 units using color monitors. The new capabilities of the C-64 started a whole new industry of computer game enthusiasts, and teenage interest grew from there. By my senior year I was teaching a C-64 BASIC class alongside a teacher in a classroom containing twenty C-64 systems, with students doubled up. Based on my experiences, 1983 was the year when a computer became user-friendly enough, and contained sufficient capabilities, to be dubbed a Personal Computer (PC) for the masses.


When I was first introduced to the PET, and later the C-64, I immediately wondered how this thing worked, meaning the operating system and the programs written to work with it. This led me to learn the programming language Commodore BASIC (Beginner's All-purpose Symbolic Instruction Code) in order to build my own worlds. The BASIC language was first created in 1964 by John G. Kemeny and Thomas E. Kurtz at Dartmouth College. The original intent was to allow students to learn to program with relative ease on a time-sharing General Electric computer. BASIC lets the programmer write a relatively easy-to-understand program that is then interpreted at run time, converting the BASIC commands into ones understood by the computer processor: machine language.


In the late 1960s and early 1970s, BASIC was taught in most college computer science courses because of the relative ease with which students understood it. As time progressed, Pascal eventually replaced BASIC as the core language within colleges, and it has since been replaced by other languages such as C or Visual Basic (and beyond). This trend of changing teaching languages follows the increasing complexity of the systems being used in the world. Initially, the systems were of limited complexity, and thus a language of limited abilities was used: BASIC. Today's systems are more complex, and languages such as C are used, bringing us closer to the logical core of the hardware.


When you look at the market for operating systems today, one company dominates more than any other: Microsoft. Ever wonder why? I have, and the conclusion I came to is that they developed operating systems and then the programming languages or interfaces that allowed those systems to be used. The C-64 is no exception to the rule, since it runs Microsoft BASIC version 2.0. This version contained roughly 70 separate commands that, once interpreted, could perform a plethora of tasks within the system.


The key to the success of BASIC as a system programming language lies in its simplicity. As long as a run-time interpreter had been written for a given hardware platform/operating system package, any program written in BASIC was portable between systems. This meant that, just as Java touts today, you could write it once and use it anywhere, on any system. Does this mean the C-64 is as prevalent as it was in the late eighties? Although there are still computer clubs that share programming experiences and software findings related to the Commodore, manufacturing of the C-64 stopped when the company was sold in 1992. This brought to an end the single most successful, record-selling sub-$1000 PC in history. In a ten-year period, this single model sold over 25 million units, and with continued support from companies such as Creative Micro Designs, these machines are still in use.


Of course, since the OS and BASIC commands were burned onto the Read-Only Memory (ROM) chip that runs the system, it is impossible to upgrade to newer versions. There are aftermarket add-ons that let you upgrade to 16 MB of memory, and even increase the speed of the CPU, but not upgrade the OS. This brought to an end the growth potential of the C-64 as a product for the future. Microsoft went on to improve its operating systems and increase the functionality of BASIC through subsequent releases. For systems run from hard drives, rather than ROM devices, upgrading is simple.


Is the Commodore 64 still a viable solution for home computing today? Not if you are looking for a system with the breadth of functionality of today's systems, especially if you want the ability to program in more than two languages. At the time I was programming on the C-64, you programmed in BASIC or in assembly language; that was it. Based on research, and the fact that the company is out of business, the C-64 owner and programmer of today is a hobbyist. Of course, there is a rumor that the Amiga is about to be released by another company in a revamped, redesigned layout for the modern gamer and graphics artist; that ought to be interesting.


(Written March 7, 2000, by me)

Many companies today simply do not have a mobile strategy. This is the great void that needs to have thought moved into it. Consider a cohesive (and complete) alignment of when and what to deploy, as well as which devices you should include.


Why is planning so important in this area? Consider that, left on its own, any system will grow to fill the boundaries of its container. So if you let customers and willing developers fill in the blanks, you'll end up with an unplanned, unshared, disconnected environment that fails to deliver on the enterprise promise.


So plan for growth, plan for stability, plan for disconnected use, and plan for using the mobile landscape to extend and solve problems. Doing it because someone feels it's the next "cool" thing is the wrong reason. Too often we are asked to change direction based on a hunch, a single article from the fringe, or an analyst talk. These may be the spark for the discussion, but DO have the discussion. That discussion is the start of the planning cycle for your strategy.


Strategy should never be dismissed as an unneeded step. It should start with a problem (or opportunity), continue through to a plan (solution), and include a timeline (roadway) and some form of vision (or big picture). The strategy drives you down the right roadway and includes signs, refueling stops, and the occasional highway patrol to make sure you follow the right rules. Along the roadway you may decide to change direction or pick up passengers, all according to your great enterprise strategy.


Our strategy considers many of the things discussed above, with a sprinkling of extra prescient wanderings. Why include "extra" stuff? If you are also a provider of technologies and devices, you should consider consuming them in your strategy. You should also think about how that strategy can affect the market, and vice versa. That's part of the planning puzzle. There is no single answer in this space.


To help you jump-start your mobilization planning, let me give you some points to chew on. Let me know how they taste.


It is ok not to support everything

When you finally get a chance to look at the market of devices, you'll soon realize that the landscape changes every day. You'll never keep up. It is better to find a series of devices, or a specific operating system, that you are willing to support and that helps to enhance your company.


Mobility to replace desktop

Use cases will help you really understand whether there are roles that could survive completely off a mobile device. There also will be jobs that will be completely changed with the right mobile solution. Just know that the device is not the solution; know what apps need to be there.


Cloudsource your content

Our security guys are lining up to tell me how I'm wrong here, but we need to be less controlling in some cases. The cloud is maturing rapidly, and there are great examples of devices that are "secure enough". Yes, the paranoid survive, but it's time to secure in order to enable.


Protect the castle with remote strings

Put in place mobile device management (MDM) so that you can remotely secure and wipe devices. You should also be able to remotely provision and patch them. The devices will be out in the wild and need to have some strings attached in order to maintain a minimum level of security control.


Communicate and train

Nothing happens in a vacuum; just know that you'll end up with more problems if you don't involve the crowds in the discussion. Put in place a strategy for starting and maintaining that dialog. Understand the gaps in end-user consumption, and develop and deliver training. The devices should be simple enough to use, but you need a little consistency in your deployment and use.


Privacy is a concern

An item our users picked up on was that, through MDM, we get access to content on the device. This was part of the end-user license agreement (EULA) they accepted in order to start using their own devices with our content. You need to be transparent about what content on devices you'll be viewing and what you will never touch. Alleviate concerns with communication.


Abstraction of architecture

Do not hard-code applications (or devices) to data sources without abstraction. By abstraction I mean connecting through a service that provides both security and connectivity independent of the consumer. You can then continue to alter those two items without having to touch each device when a change occurs.
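A minimal sketch of the idea, with illustrative names (nothing here reflects a specific Intel IT service):

```python
# Sketch of the abstraction idea: mobile clients call a service interface
# rather than the data source directly, so the backing store and security
# checks can change without touching each device. Names are illustrative.

class DataService:
    """Service layer that owns both the connection and the security check."""
    def __init__(self, backend, authorize):
        self._backend = backend      # swappable data source
        self._authorize = authorize  # swappable security policy

    def fetch(self, user: str, key: str):
        if not self._authorize(user, key):
            raise PermissionError(f"{user} may not read {key}")
        return self._backend.get(key)

# A device only ever talks to DataService; swapping the dict for a real
# store, or tightening the policy, requires no client changes.
backend = {"sales_q3": [1, 2, 3]}
service = DataService(backend, authorize=lambda user, key: user == "alice")
print(service.fetch("alice", "sales_q3"))  # [1, 2, 3]
```

The design choice is that both the data source and the policy live behind the service boundary, which is exactly what lets them change without a redeployment to every device.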


Data as a service

Just know that the abstraction spoken about above is key to sharing and consuming data consistently. Reuse is an enterprise tenet that needs to be fully adopted in the mobile space.


Know what to test and how

Test early, test often, test everything. Put in place a testing strategy that involves the right mix of virtual, physical, and simulation testing. The landscape of mobile testing tools is immature today, but that is always changing. Know what you want to do, and put in place a plan to get there.


Stores need to be singular and consolidated

Your employees use multiple devices today to do their work. Between servers, desktops, laptops, tablets, and phones, we balance the mix today. Maintaining that mix gets frustrating. Make it simpler with a single management approach for how applications are delivered and managed. As an example, today I have a smartphone plus a tablet plus a laptop. To keep data and experience synchronized, I use a mix of the same applications (mobile, web, desktop). Having to go to three places to do updates is frustrating.



Roadmaps

These are your map to a successful future. Build them, maintain them, and publish them openly for user consumption. Roadmaps are your plan for the future.


What I've written above is not the end-all-be-all approach that many hope for, but that's all right. The biggest lesson you'll need to learn is that plans change, and how you adapt to those changes will determine your success. How is your strategy shaping up?

Security is a tough sell. Plain and simple.

Nobody really wants security. It is necessary when we feel threatened or under attack, but it can be inconvenient, costly, and detrimental to productivity. It is the lesser of two evils. It does have an important purpose: to protect our valuables and our access to a networked world. Without it, the situation quickly deteriorates into something completely unacceptable, but the overall value proposition can be frustrating, as it is difficult to comprehend the risks of seemingly invisible threats. To complicate matters, it is not simply a binary decision to buy or not buy security. There are infinite gray areas. How much security is enough? When will it no longer be effective? What type, and how many solutions, do I need today or tomorrow? The worst part is that even the best security may not be enough. Sadly, you could still become the next victim, as there are no guarantees.

The desire for security is difficult to quantify. It is not just technological but also psychological. Security is a personal determination, not an actual state of configuration. If you are victimized, you will feel insecure, regardless of the protections in place. On the other hand, if your environment shows no sign of risk or compromise, you will feel secure, even given the exact same controls.

So how does a reputable company market and sell security features, capabilities, and services in this environment?

This has been the challenge for nearly 20 years in the information security industry. Here is the key: security only becomes relevant when it fails. Security marketing will fall on deaf ears for someone who has never had a virus, lost data to malware, had bank or credit accounts stolen, or been the victim of identity theft. However, those who have been down that road, or are more cognizant of the security risks bearing down on them, will have an interest in being more secure. In general, consumers are not security savvy. In fact, most consumers are driven by emotion; in the security world, primarily fear. Fear of loss. Driven by fear, but not technically savvy, they look for simple solutions from suppliers they trust.

A decade ago, security vendors would peddle FUD (Fear, Uncertainty, and Doubt) to generate sales. Wild claims of super-products protecting against impending doom were typical. Of course, they were baseless. The security industry today looks back at this time as one in which consumers were widely victimized by snake-oil peddlers. Customers soon realized they had been lured by false promises, and over time those companies earned poor reputations and subsequently fell out of the security market.

The market has evolved.  Since those dark days, a middle layer of pseudo-experts has filled the gap to help consumers identify quality offerings from pure-marketing fluff.  These researchers, white-hat hackers, technology experts, labs, and testing organizations now provide analysis of new security products, services, and vendors.  They play an important role in vetting the industry.  Customers look to them as their proxy for understanding the complexities and nuances to determine if something is worthwhile.  Today, more than ever, security vendors must be conservative in their claims and be prepared to prove themselves again and again.  The threat and catastrophe predictions are still rattling about, but in a more realistic form.  They instill fear as intended, but are tempered by actual previous incidents and face an audience who has become somewhat desensitized over time.  

In information security, when bad events occur, they are sometimes quietly and sarcastically called ‘fund raisers’ as it drives organizations to spend on security.  Such events, although damaging, are wake-up calls which continuously prove the threats are real and still out there.  They cannot be predicted with any accuracy, but result in a significant increase in motivation of consumers, driving the sales of security related products and services.  

For security services and enterprises who wish to market the value of new security features within their products, timing is critical.  Success hinges on being in the minds of people when they independently come to the realization they need to be more secure.  These customers will move quickly to fill the need.  It does not happen simultaneously across the community, so the successful company must have already established their reputation within the target market.

Reputation is built by providing a meaningful security capability in a competitive manner. Such capabilities must intersect pressing threats recognized by consumers, mitigate risks to an acceptable level, and be able to be integrated in a timely and affordable way. A tall and complex order, which is why reputation must be established before the need arises. Recognition as 'secure' can be the emotional pillar needed when customers' fear of loss increases. Those with a solid standing are strongly preferred over unknowns trying to rapidly prove themselves at the point when the consumer just wants to make a purchasing decision.

Unlike traditional technology marketing, reputation is not built by throwing dollars around to get consumer recognition or build product excitement. Security is not sexy. Unlike the latest phones, tablets, or toys, people don't stand in line for hours to buy the latest security product. A different approach is required. Marketing will fail without the proper foundational measures. Billboards, magazine ads, leave-behind pamphlets, and catchy feature names are largely a waste until a solid product is vetted and revered by the security expert community. It is the industry proving grounds that feed consumers. Failure will lead to dismissal as a viable, effective, or competitive product. Success will garner public praise, further testing, and even recommendations for enhancements, with a direct line to those customers interested in buying more security.

Without expert community support, marketing dollars are largely a waste. Even worse is when a good security product is handed over to an inexperienced marketing team, who spin their normal yarn of tantalizing promotion geared toward getting the attention of end users. Such claims can push the bounds of what can be delivered and inadvertently inflate the 'potential' impact over the likely impact. Experts have no tolerance for such exaggeration. It is a fatal misstep, sure to cause a backlash from the security community that can doom not only the product but the vendor's reputation as well. Mature security companies have seen the crashes of others and approach the marketing of their products carefully, even cautiously. Claims are always backed with data and expected to be ruthlessly scrutinized. They know who the experts are and extend them the courtesy needed to enable proper evaluation. In return, they often get early insight into weaknesses and possible optimizations, which can further enhance the final product. Ultimately, it is the end user who benefits the most, as great products delivering on reasonable expectations bubble to the top.

Although security is a tough sell, for organizations that know the players, deliver a value-add capability, and can navigate the process, it can be very rewarding. Computer security is not going away, nor will its importance diminish anytime soon. Building a security reputation is an investment in the future, one that pays dividends to all involved.

In this IT@Intel Executive Insights on Intel IT's cloud computing strategy, we share how cloud computing is affecting our overall data center strategy. Like many of our enterprise IT peers, we see cloud computing as a key area of innovation for Intel; it is one of Intel IT's top three objectives this year. Cloud computing is changing the way we inside Intel IT look at our architecture: from the client technology our employees will use to access business services and data, to the data center infrastructure necessary to support those services. We are adopting cloud computing as part of our data center strategy to provide cost-effective, highly agile back-end services that boost both employee and business productivity.


The IT@Intel Executive Insights provide a short two-page summary of the insights and steps taken by the Intel IT team on core topics.

  • Page 1 provides a short overview of the topic and a summary of the key considerations that the Intel IT organization used while developing our strategy.
  • Page 2 provides summaries and links to supporting content for our IT peers to learn more from Intel IT generated white papers, videos, blogs and more.



I really enjoyed reading this article about Zynga IT's approach to cloud because of its stark contrast to Intel IT's approach. While the approaches of the two IT organizations were different, what stood out to me were the similarities in what drove them: both were developed with business needs in mind. In both examples, the business requirements drove different technical cloud implementations and strategies, each utilizing hybrid clouds (a mix of public and private cloud-based solutions).


At Zynga, they concluded an "outside-in" approach was best: launch new services in the public cloud and migrate them internally as business demand stabilizes.


While at Intel IT, we determined an "inside-out" strategy was best: focusing first on building a private cloud to gain cost efficiencies and business agility while improving availability and maintaining information security, and then looking to the public cloud to support non-differentiated services today and cloud-bursting use cases for highly variable workloads tomorrow.


I'm interested in hearing about other IT approaches; just post the links below in the comments.


Thanks, Chris
