Catherine Spence

One Size Fits All

Posted by Catherine Spence Mar 28, 2008

According to Dictionary.com “one size fits all” is an adjective that means “acceptable or used for a wide variety of purposes or circumstances; appealing or suitable to a variety of tastes.”  In IT, we have used this approach for how we deliver client systems to users.  We pick a few key hardware platforms and create OS builds that meet security requirements and contain a base level of software applications.   Users take delivery of new systems and then customize from there with various configuration settings and specific software needed for their jobs.  The “one size fits all” model has worked pretty well over the years.  It has been a highly successful way for IT to mass produce systems and support users in a standard way.

 

The world is changing.  The number of available choices in hardware platforms is significantly increasing, ranging from desktops to portables to blade clients to smart phones.  Users are becoming increasingly aware of the choices and want to participate in the decision over which devices are best suited to their work style.  In some cases, they want to use different devices simultaneously (for example, a smartphone and a laptop).  In terms of software applications, new computing models are emerging to respond to the complexity.  IT does not want to create new applications for each kind of device introduced in the environment.  A major challenge will be to consolidate backend infrastructure and provide a common user experience across the spectrum of client hardware platforms, not to mention all of the issues related to security and IT governance.  We must embrace these challenges because the days of "one size fits all" client hardware are numbered.

Greetings! Allow me to take a brief moment to introduce myself. I am new to the OpenPort community and will manage the overall OpenPort site going forward. I am thrilled to be a part of this growing community and look forward to engaging in a plethora of ongoing discussions with you all.

 

Let me start with a truth: I am not a technologist. I don't even play one on TV. So I promise never to wax poetic on deeply technical things that you know more about anyway. However, I am an enthusiastic tech user in both my professional and personal life. So hopefully my insights won't be completely from left field. Oh, truth number two: I have worked in software for the last four years so sometimes my focus is a bit myopic.

 

With that little revelation it will probably not surprise you that I wanted to start by mentioning some recent headlines regarding Intel's announcement last week. Perhaps you heard: Intel and Microsoft announced they had awarded UC Berkeley $20 million to fund research on new ways to program software so it takes advantage of the benefits brought forth in multi-core processors. The research is focused on addressing the challenges of parallel computing and encompasses programming for applications and operating systems to ensure they take better advantage of multi-core processors.
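As a non-technologist's illustration of what that research is chasing: today's software generally has to be restructured into independent pieces before extra cores help at all. Here is a minimal Python sketch of that restructuring (purely illustrative and my own invention; the Berkeley work is not tied to any particular language):

```python
# Illustrative only: the kind of restructuring parallel-computing research
# targets. A serial computation must first be split into independent chunks;
# only then can multiple cores share the work.

def partial_sum(chunk):
    """An independent unit of work with no shared state."""
    return sum(x * x for x in chunk)

def chunked(data, n):
    """Split data into roughly equal, independent chunks."""
    size = max(1, len(data) // n)
    return [data[i:i + size] for i in range(0, len(data), size)]

data = list(range(100_000))
serial = sum(x * x for x in data)

# Plain map() runs the chunks one after another; swapping it for
# multiprocessing.Pool.map would distribute the same chunks across cores.
parallel = sum(map(partial_sum, chunked(data, 4)))

assert serial == parallel  # same answer either way
```

The hard part the researchers are tackling is that most real applications don't decompose this cleanly.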

 

This is an interesting development and once again illustrates how Intel works with the broader ecosystem to help propel technology of all kinds forward. I am often surprised to learn of the many behind-the-scenes efforts Intel helps drive to bring about technology innovation; things like pushing WiMAX standards for ubiquitous wireless access worldwide and the formation of moblin.org to host open source projects for the development of software targeted at mobile internet devices (MIDs).

 

 

I'm not saying Intel's efforts aren't in the company's own best interests. But these endeavors are meant to effect sweeping industry changes that help advance technology that makes all our lives better. It kind of gives me a warm fuzzy feeling.

 

 

Up to this point I have covered "Application inventory as a cost savings initiative," followed by a discussion of "Application inventory starts with a definition," and finally "Application inventory: what do you capture?"

 

Following the natural progression of:

  • Why inventory

  • Boundaries of what to capture

  • What to capture

  • How to capture

 

The "How to capture" is not a simple task completed in a week or two.  For a company our size this task is still ongoing after fourteen months.  And our progress shows us that we will need at least until the end of the year to approach some semblance of sustainability. By sustainability I mean that the information, processes and people will be in place to keep the data fresh so that true data-based decisions can be made in near real-time.

 

Every day the clarity of our inventory gets sharper and sharper as we identify and pull in the data owners. The quality of the information becomes more focused as more of the profile is filled out. There are internal systems that are starting to rely on the data we have captured.  That data is being transformed into true business information which has value and can be used to make the right decisions at the right time.  At times, it still feels like an uphill battle.  Each day we stand side-by-side with those who see the value and push on the back of our partners as we slowly progress up the hill.

 

Now, knowing the definition and what data we want to capture, we could have progressed in a multitude of ways:

  • Distributed work-load, individual owners

  • Focused work-load, our team owning (interviewing)

  • Centralized gathering (combination of above, driving people to a single location)

 

We chose to create a simple-to-use, centrally located intranet application that stored the data we needed.  As mentioned in the past, we did our analysis by looking at applications in our enterprise that already contained the types of data we were interested in.  What we discovered was that none of them had the flexibility to store the additional information nor the development resources to alter their systems.  This pushed us to obtain permission to build a new application.
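To make the idea concrete, here is a minimal sketch of what such a central inventory store might look like. The field names and logic are my own invention for illustration, not Intel's actual schema:

```python
from dataclasses import dataclass

@dataclass
class AppRecord:
    """One inventory entry; the fields are hypothetical examples."""
    name: str
    owner: str           # the accountable data owner
    version: str
    host: str            # where the application runs
    last_reviewed: str   # keeps the data "fresh" for decision-making

inventory = {}

def register(rec: AppRecord):
    # One canonical entry per application name avoids duplicate records,
    # one of the trust problems a shared spreadsheet would have had.
    inventory[rec.name.lower()] = rec

register(AppRecord("ExpenseTool", "jdoe", "2.1", "srv-042", "2008-03-01"))
register(AppRecord("expensetool", "jdoe", "2.2", "srv-042", "2008-03-15"))
assert len(inventory) == 1  # the later record replaced the earlier one
```

A single store like this is also what makes it easy to share the data with other internal systems, which a spreadsheet makes painful.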

 

At this point you may be saying, "You built an application to reduce the number of applications?"  Sometimes it is necessary to do the wrong thing in order to do the right thing.  It would have been too easy to drop a spreadsheet out there and start gathering information.  Short-term this would have cost the least and potentially would have allowed us to get part of the way there.  The issue is the long-term sustainability and trust that comes from a solution like that.  We would have security concerns and update collisions, as well as a reduced ability to share the data easily with other applications.

 

Yes, we built a new application, using two people, in four weeks. Since implementation started we have supported weekly releases while expanding the data being captured, the usability for customers (and consumers) as well as enabling the removal of the majority of those other systems with parallel capabilities. We have great internal hosting solutions and have been operating non-stop since December 2006.

 

Our goal is still to do the right thing and properly manage our inventory through reduction.  We were instrumental in providing the information and process needed to remove over 500 applications (and associated hardware) from our environment since we started our process.

 

In my next entry I will talk about some future enhancements to get us through the next year and the further reduction in application inventory we are charged with. Perhaps it's time to look at how our original analysis of "low-hanging fruit" was successful; now we find ourselves making hard decisions in order to continue refining our inventory.

 

Have you had similar issues at your company? Do you currently have this challenge before you? I'm curious to hear some of those challenges and potential solutions.

Some general thoughts and ramblings on application streaming - where it is better than web applications and where it might not be.

 

Application streaming is an interesting technology: you can create a client-rich application with sophisticated graphics and processing and yet have a high degree of security and the benefits of server-side manageability. In my mind this is the best of two worlds. On the one hand you can leverage the full strength of the latest processors and graphics capabilities, and on the other you can manage security and upgrades quickly and efficiently.

 

The application doesn't go through an install process on the client, so you eliminate some of the problems associated with different people installing the same application differently. The installation can be "isolated" to protect against conflicts (in some cases this even provides backwards compatibility), although this isolation also raises some challenges for the integration of multiple applications on the same device.

 

 

Upgrades are simple and guaranteed: since you only upgrade the server, anyone using that application gets the update at next use. This is true for security patches as well. Those using the applications offline (which you can do; try that with a web app) will get the update the next time they connect to the network.

 

 

Streaming (some products, anyway) provides a means for license management, so perhaps you don't need to own as many licenses as you thought: by tracking concurrent usage you can prevent over-subscription. This can be important for some expensive purchased applications.
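A toy sketch of the concurrent-usage idea (my own illustration, not modeled on any particular streaming product):

```python
class LicensePool:
    """Track concurrent checkouts and refuse launches past the seat limit."""
    def __init__(self, seats):
        self.seats = seats
        self.in_use = set()

    def checkout(self, user):
        if user in self.in_use:
            return True                      # user already holds a seat
        if len(self.in_use) >= self.seats:
            return False                     # over-subscribed: deny launch
        self.in_use.add(user)
        return True

    def release(self, user):
        self.in_use.discard(user)

pool = LicensePool(seats=2)
assert pool.checkout("alice")
assert pool.checkout("bob")
assert not pool.checkout("carol")   # third concurrent user is refused
pool.release("alice")
assert pool.checkout("carol")       # a freed seat can be reused
```

With usage tracked this way, the peak concurrent count over time tells you how many licenses you actually need to buy.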

 

 

Streaming applications are also not subject to the multitude of exploits that are written to attack web browsers and web applications. I believe that for corporate applications they are safer and easier to protect. That alone may be reason enough to justify moving in this direction.

 

 

One area where web-based applications COULD be better is if they are written to work on multiple platforms with multiple browsers (such as Windows and OS X). However, in practice this seems to be seldom done; most apps are still written for one environment or the other, and it's more a matter of chance whether the application works in the other environments. This could be a big plus if developers would truly develop for the heterogeneous world we live in.

 

 

Another drawback is that with client-rich applications there is often more database traffic being routed over the network between the client and the server infrastructure, whereas in a web application the database traffic can be kept between the application server and the database server. This puts the onus on the application developer to take this into account when architecting the application. It can be done efficiently, but it does raise that "old" argument and problem.

 

 

So perhaps it is time to look at how we develop applications, and rather than swinging the pendulum back to all client-rich applications, maybe we should be looking at a better balance of applications, leveraging the best technology for the requirements.

 

 

Just a thought.

 

 

So we are on the home stretch of deploying the new pilot cube environment; in fact, I'm on site helping support day-one move-in at our third US site installation, which has certainly been interesting. The flight over went quickly, though at some points it was rather a roller coaster (to the point that coffee was spilt on laps).

 

But I digress…

 

 

 

I wanted to discuss an item I have brought up before: benchmarking. The project has moved on, and it is worth asking some questions around it. Intel IT has used classic benchmarking applications to compare platforms when going to RFP (using standard off-the-shelf applications), but we discovered this testing wasn't helping us improve the performance of our software on the client; it was simply giving us faster clients (not a bad thing). We were missing some critical decision-making criteria for evaluating newer versions of applications, client builds, or software tweaks (identifying performance improvement or impact). As we drive towards more out-of-the-box applications, we will also be using the tool to evaluate impact on the environment.

 

 

 

So we kicked off a project to begin recording certain productivity metrics to evaluate user-perceived performance; not necessarily aimed at just understanding how fast each client is, but more at what impact it has on users.

 

 

 

Some of these timing metrics include:

 

 

 

  • Time into operating system

  • Time into email application (first email)

  • Time into first instant message conversation

  • Time to first spreadsheet/document application
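The milestones above could be captured with something as simple as this sketch (my own illustration, not the actual Intel IT tooling):

```python
import time

class MilestoneTimer:
    """Record elapsed seconds from 'power on' to key user milestones."""
    def __init__(self):
        self.start = time.monotonic()
        self.metrics = {}

    def mark(self, milestone):
        self.metrics[milestone] = time.monotonic() - self.start

def delta(base, new):
    """Per-milestone change between two builds; negative means faster."""
    return {m: new[m] - base[m] for m in base}

timer = MilestoneTimer()
timer.mark("os_ready")      # time into operating system
timer.mark("first_email")   # time into email application

# Comparing two hypothetical builds against a "10% faster" target:
base = {"os_ready": 60.0}   # seconds, made-up numbers
new = {"os_ready": 54.0}
assert delta(base, new)["os_ready"] == -6.0  # the 10% improvement met
```

Recording each build's milestone times this way is what makes targets like "10% faster build in 3 months" measurable rather than anecdotal.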

 

Once changes are made to the client build or application stack, the impact is recorded through the metrics. This means we can start to set goals and performance targets (a 10% faster build in 3 months, etc.).

 

We hope to publish this data with some fellow travellers to get some indicators quantifying the overhead of an 'IT' build compared to an off-the-shelf build (we classify it as a vanilla OS).

 

 

 

Are you recording productivity metrics to compare applications and build generations? Any thoughts on whether this data would be useful to you?

 

 

I won't go into a long dissertation, but I would like to hear what the masses are thinking about Green or Efficient efforts for the Data Center landscape.

 

 

As you all know, Green is taking off; our world is becoming concerned with the legacy we'll leave for our children and their children. I admire that because it shows how we're a caring nation in the U.S. as well as a compassionate world.

 

But I believe we're mixing the messages down at the lower levels. In my opinion, "Green" means giving something back to mother earth. It means offsetting your carbon impact by planting trees (as I learned from one of the earlier companies I worked for), or it means buying energy from alternative sources such as wind power, the direction Intel and other companies are moving towards. Those are green efforts from my point of view. "Efficiency," however, is defined by Encarta's North American dictionary as "the ability to do something well or achieve a desired result without wasted energy or effort."

 

 

Those are two different directions as I see them, and companies running programs to enable more efficient Data Centers must understand how to correctly identify their approach.

 

 

So, what does everyone else think about Green vs. Efficient?

Share your opinion: Greening Data Centers or Make 'em Efficient?

I've got profiles everywhere these days, and not just on the internet, but on the intranet as well. I'm sure we've all got a variety of external faces, whether on Yahoo, MSN Spaces, Facebook, myspace.com, LinkedIn*, or the myriad of other social networking sites out there.

 

But what about on the corporate intranet? It can get just as complicated there, especially if you are trying to find someone who knows something about something that no one in your organization knows anything about!

 

We're starting to see social networking tools for the enterprise show up in evaluations, and I really do hope we implement something within the company - there's incredible value in knowing that I could search for organization development and find a person who is in another division that did an OD project last year that's exactly what I'm trying to do now. But we're not quite there.

 

Right now I've got a pseudo-profile on my internal blog, another on our internal wiki, another on our document collaboration environment, another that's part of my email signature line, and I'm sure there's yet another floating around somewhere. If someone wanted to know what I've been up to for the last 12 years at Intel, they would have to look around in three or four different places to get the full story, or just ask me for a copy of my resume.

 

Part of that is my fault - I just need to pick one place to keep updated and point everything else to it, but the problem there is that now I'm sending people to sites that might not be their PREFERRED location for social networking. As an external example, let's say you've got a personal blog on wordpress.org, but you've also got a myspace account and another on MSN Spaces. All three have blog functionality, which do you pick? Do you post to all three at the same time, or do you point people to one or the other? What if one of your friends prefers MSN Spaces, but you keep sending them to wordpress.org to read your blog?

 

It's profile overload! Not only do you have profile/personal info in 10 different places, but you're trying to communicate redundantly based on other people's preferences. Stop the madness!

 

I'm now to the point where I'm shutting down my profiles on sites that are just secondary or tertiary, and if people want to know who I am and what I'm wearing, they will go to the one site that has it all, because realistically, whichever site you choose will have another competitor in 6 months that everyone will flock to and add 500 friends they've never actually met before. In my mind, I'm seeing a group huddled together moving in unison from one corner of the room to the next as the latest social media site pops up.

 

Will it settle any time soon? I doubt it. There are many competitors that are getting into niche areas and offering more for your money (which in most cases is free). It's a challenge outside and a challenge inside. At least within the company you can create a "mandate" that says here is the site to create your profile and it's what the company is going to use.

 

Maybe some day everyone on the planet will have an ID number and their own website. I want to be 0100100001000101010000010101010001001000.com.
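(For the curious: that address is just 8-bit ASCII. A quick sketch confirms what it spells:)

```python
# Decode the 40-bit string from this post as five 8-bit ASCII characters.
bits = "0100100001000101010000010101010001001000"
decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
assert decoded == "HEATH"
```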

 

  • Websites and locations mentioned in this blog are trademarks and properties of their respective companies.

 

Most data centers have a very long history with the enterprises they provide services for. Data centers grew up around the users they provide services to and are generally located within a close proximity to the user base. As LAN capabilities improved, performance of local applications became less of an issue for enterprises but new capabilities were generally landed in existing facilities. As the enterprise grew through mergers and acquisitions, additional data center capacity was generally co-located with the new users.

 

When availability was the key metric for measuring performance of applications, it became customary for each new application that the enterprise was rolling out (whether developed or purchased) to have its own dedicated hardware, to protect the integrity of the application and improve stability. Over time this has led to data centers around the world that are full of hardware operating at a fraction of what it's capable of! Application owners were loath to consider the idea of ‘stacking' applications on the same server, looking to avoid the potential conflicts that arise in a shared-resource environment.

 

We now have multiple vectors for driving efficiency into our data centers: energy efficiency, sustainability and cost are some that are moving organizations toward initiatives that will transform the way we provide services from the data center and how we support those services going forward. Driving efficiency is a painful but necessary step in the overall transformation.

 

 

Transformation can mean many things to the enterprise but part of a data center transformation generally involves consolidation of data center facilities and compute resources that provide services to the enterprise. This can cause some amount of nervousness on the part of the end users, application developers and administrators who have grown accustomed to unfettered access to resources over time.

 

 

Why is this abstraction of resources from users, developers and administrators necessary? A primary driver is to standardize facilities, network, compute and storage resources so the operations staff is sustaining a small number of standard offerings, which provides them with a very predictable environment that they can become expert in sustaining. By centralizing to fewer facilities and driving toward higher utilization of existing resources (i.e. network, servers and storage), the enterprise can obtain more work out of its data centers for less energy and therefore less cost.
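The energy argument is easy to see with some made-up numbers (purely illustrative, not Intel's figures):

```python
# Hypothetical consolidation math: same total work, higher utilization.
servers_before, utilization_before, watts_per_server = 100, 0.10, 400
work_units = servers_before * utilization_before     # total useful work

target_utilization = 0.50
servers_after = work_units / target_utilization      # servers needed now

energy_before = servers_before * watts_per_server    # 40,000 W
energy_after = servers_after * watts_per_server      #  8,000 W
assert servers_after == 20                           # 100 servers become 20
assert energy_after < energy_before                  # same work, less power
```

Even with idealized numbers like these, driving utilization from 10% to 50% cuts the server count (and the power draw) by a factor of five for the same workload.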

 

 

Removing the users, developers and administrators from the data center is a process and mindset change that will take resolve and ultimately executive support. So many IT pros feel they need to ‘touch' their environment but this leads to custom configurations, unknown/undocumented changes and instability in an environment that we're trying to standardize. Building the competency within the operations group will facilitate a change in the way excursions and outages are dealt with and will form the basis of a much more predictable environment that the operations team will feel proud to own. This is one piece of the transformation that cannot be overlooked when considering the evolution of the enterprise data center.

 

 

Intel just announced the brand name for its newest mobile processor: the Intel® Atom™ processor. And as a new brand name, I have to admit I really like this one.

 

Brian Fravel, Intel Director of Marketing, Brand Strategy [recently posted|http://blogs.intel.com/mobility/2008/03/introducing_the_intel_atom_pro.php] a good article introducing the brand.

"Soon, you will see the Intel Centrino Atom brand on handheld devices that can bring an amazing internet experience in a device that fits in your pocket. You’ll see the Intel Atom processor powering a growing category of devices aimed at delivering affordable, Internet-centric uses."

 

Not only is this Intel's smallest processor, it also contains the world's smallest transistors. [Listen to|http://video.google.com/videoplay?docid=-6311311368352250249] Anand Chandrasekher, Senior VP of Intel's Ultra Mobility Group, explain what's so cool about the Intel Atom processor.

 


 

Pretty neat stuff... but wait, there's more! Let's bring this home by showing off some end products. [Watch|http://video.google.com/videoplay?docid=-6311311368352250249] Mark Parker show off early prototypes based on this new architecture.


 

Bottom line? A cool brand name that will be at the heart of very cool technology coming our way. Want more? Visit these sites. * Intel Atom processor technology page: [www.intel.com/technology/atom/|http://www.intel.com/technology/atom/] * Mobility@Blog: [blogs.intel.com/mobility|http://blogs.intel.com/mobility] * Software Developer Mobile Community: [http://softwarecommunity.intel.com/isn/home/Mobility.aspx|http://softwarecommunity.intel.com/isn/home/Mobility.aspx]

 
