A recent article in InformationWeek discusses how Credit Suisse has been very successful implementing virtualization in the data center and how it views the desktop as the next area of opportunity.  By virtualizing desktops and bringing them into the data center, Credit Suisse hopes to gain the ability to quickly re-provision desktops in response to changing user needs.  But at what cost?

 

Depending on how many users Credit Suisse has, this means moving the processing from thousands of independent computing elements into the data center.  Out in the enterprise, power and cooling are abundant and rack space is a non-issue; inside the data center, all three are typically at a premium.  Perhaps they have ample capacity in their data centers.

 

What about user experience?  There is something to be said for keeping processing as local to the user as possible.  Some applications lend themselves to being hosted centrally and accessed via a browser or portal interface.  Others, including multimedia, unified communications and complex user interfaces, are better served at the endpoint device for the best responsiveness and/or mobility.

 

Virtualization is a great technology, and it definitely creates new possibilities in terms of OS and application portability.  This might be the right solution for Credit Suisse, given their user needs, sets of applications and data center configurations.  Who can blame them for wanting to build on their past success?  However, there is a bigger picture to consider: the correct balance of computing models for particular usage models.

A Proof-of-Concept (POC) conducted by Intel IT evaluated OS and application streaming in call center and manufacturing environments.  The four-part study included performance, usability, IT support and cost.  The POC successfully identified streaming as a novel, feasible technology in the tested scenarios.  The biggest benefits were related to locking down the client, improving security and eliminating service calls.  Challenges were encountered related to the learning curve and software maturity of application packaging and troubleshooting. 

 

A technical brief is available for download:

 

Software On Demand: OS/Application Streaming Client Study

Up to this point I have covered "Application inventory as a cost savings initiative," followed by "Application inventory starts with a definition."

 

In our specific implementation, we started with a base set of attributes. Some of those were very obvious, while others were necessary for managing some of our base enterprise capabilities.  Items captured in a strictly 1:1 (one-to-one) relationship to any single application were:

  • Name

  • Description

  • Importance (a tiered level detailing the impact to our company)

  • Status (or state of the implementation)

  • Type (of application)

  • Manufacturer (if purchased)

  • Version

  • Owning Group

  • User Count

 

We also had some 1:M (one-to-many) attributes, which we cataloged in order to further build out the metadata for each instance (a rough sketch of the combined model follows this list).

  • Contact

  • Cost (develop, host, support, license)

  • Link (to external data)

  • Support

  • Technology
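
To make the 1:1 versus 1:M split concrete, here is a minimal sketch of how such a model might be expressed. It is an illustration only, with invented names, not our actual schema or inventory tool.

```python
# Illustrative sketch only -- invented names, not our production schema.
# One-to-one attributes live directly on the application record;
# one-to-many attributes live in child collections.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cost:
    category: str        # "develop", "host", "support", or "license"
    amount: float

@dataclass
class Application:
    # 1:1 attributes -- exactly one value per application
    name: str
    description: str
    importance: str      # tiered level detailing impact to the company
    status: str          # state of the implementation
    app_type: str        # type of application
    manufacturer: str    # populated if purchased
    version: str
    owning_group: str
    user_count: int
    # 1:M attributes -- zero or more values per application
    contacts: List[str] = field(default_factory=list)
    costs: List[Cost] = field(default_factory=list)
    links: List[str] = field(default_factory=list)   # to external data
    support: List[str] = field(default_factory=list)
    technologies: List[str] = field(default_factory=list)
```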

 

This was sufficient information for us to move along and begin consolidating data.  As we engaged more and more teams and discovered localized stores of this data, our metamodel expanded to include a few more elements.  Some of these additions also drove an associated increase in our own inventory tool's capability.  As that capability was implemented, we were able to start turning off applications through consolidation (one of our key goals).

 

Additional Items (one-to-one)

  • Product Line (for ease of grouping and management)

  • Hosting Platform

  • User Description

  • Cross-Site Consumption

  • Customer Located External (to Intel)

  • Data Classifications (for information security and control)

  • Disaster Recovery Details

  • End of Life Tracking (legal and recovery data)

 

Additional Items (one-to-many)

  • Alias (alternate naming; the key to our success)

  • Capability

  • Component/Module

  • Customer Country/Region

  • Interface (consumption and providing)

  • Network Ports/Protocol

  • Product Testing (results, for future enterprise releases)

 

Many of these are specific to how we do business inside our company; however, you might find value in some of our learnings.

 

As I mentioned, we discovered pockets of data and some small (and big) applications utilizing some of this data.  It has become increasingly easy to implement an additional module that relates to and consumes data from the larger metamodel.  From an architecture standpoint, we need to be careful not to develop this into a "jack-of-all-trades" application that does everything for everyone.

 

Up to this point we still only capture data (and functionality) that is related to the application through a direct relationship.  As an example, we associate the application with the network ports/protocols it uses, but not necessarily the networks it can pass across.  We will capture the hosting platform name but not the specifics of that host.  Instead, we rely on interrelated systems to draw the larger picture of the whole enterprise.
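
Continuing the illustrative model from above, the boundary can be made explicit in the types themselves: the inventory holds the direct relationship and deliberately nothing more.

```python
# Illustrative continuation of the earlier sketch: store the direct
# relationship only, and defer everything else to the systems that
# own those domains.
from dataclasses import dataclass

@dataclass
class NetworkUse:
    port: int            # e.g., 443
    protocol: str        # e.g., "TCP"
    # deliberately no topology data -- the network systems own that

@dataclass
class Hosting:
    platform_name: str   # e.g., a host or farm name
    # deliberately no hardware specifics -- the asset systems own those
```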

 

Are we done?

Not even close.  As noted in our Intel Information Technology 2007 Performance Report (page 12), this application and the associated capabilities we are developing are having a big impact.  During 2007 we were instrumental in the end-of-life of over 450 applications.  The metadata we capture and maintain has helped us identify instances of duplication as well as opportunities where support and consumption have dropped to the point that we can turn off the application.

 

In my next entry I will talk about how we were able to use two people and build an application in four weeks to solve this problem, and how that solution has been running non-stop for fifteen months with no downtime or impact to customers, all while increasing capability and usability and doing releases on average every two weeks.  Future posts will cover some planned enhancements to get us through the next year and the further reduction in application inventory we are charged with.

 

Have you had similar issues at your company? Do you currently have this challenge before you? I'm curious to hear some of those challenges and potential solutions.

It seems counter-intuitive to think that applications streamed over the network could run faster than the same applications installed locally.  If the circumstances are right, it could happen!

 

Here is a Systems Manufacturing example.  We ran a series of key tasks across a variety of configurations to collect performance metrics.  Our script opened a work order in our ERP system, created packaging labels using a bar code generation program, looked up label part numbers for our product bill of materials and ran work order activity reports in three custom web-based applications.

 

Our baseline was a Pentium 4 desktop system (3.0 GHz).  Our trial system was a Celeron 215 desktop system (1.33 GHz).  Both had 1 GB of RAM.  The baseline system had applications installed locally on its hard drive; applications and the OS were delivered to the trial system via streaming.  Throughput time of our script on the baseline system was 6 minutes 15 seconds.  The same script completed on the trial system in 2 minutes 45 seconds, roughly 2.3x faster.
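
For anyone wanting to reproduce this kind of comparison, the measurement itself is simple wall-clock timing.  Here is a minimal sketch, with placeholder tasks standing in for our actual ERP, labeling and reporting steps.

```python
# A minimal sketch of the kind of timing harness behind this
# comparison. The task list here is a placeholder; our real script
# drove an ERP work order, bar-code label generation, BOM lookups,
# and three web-based reporting applications.
import time

def run_task_suite(tasks):
    """Run each task once; return total elapsed wall-clock seconds."""
    start = time.perf_counter()
    for task in tasks:
        task()
    return time.perf_counter() - start

# The speedup arithmetic from the measured results above:
baseline_seconds = 6 * 60 + 15   # 6:15 on the locally installed system
trial_seconds = 2 * 60 + 45      # 2:45 on the streamed system
print(f"speedup: {baseline_seconds / trial_seconds:.2f}x")  # ~2.27x
```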

 

Two things come to mind to explain the difference.  First, our script contained a good mix of local and remote processing, which played to the strengths of our trial configuration.  Second, the nature of the computing model provides an explanation: applications are broken up into execution blocks, so we only need to load and execute the portion of the application that we need.  Further, since virtualization was used in conjunction with the application streaming, the virtual software layer makes things like registry settings easier and faster to access.
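
Conceptually, the on-demand model behaves something like the sketch below.  This is an illustration of the idea only, not any vendor's actual streaming protocol.

```python
# Conceptual illustration only -- not any vendor's actual protocol.
# The application image is divided into execution blocks; the client
# fetches a block over the network the first time it is needed and
# serves it from a local cache on every later access.
class StreamedApplication:
    def __init__(self, server, app_id):
        self.server = server   # assumed to expose fetch_block(app_id, n)
        self.app_id = app_id
        self.cache = {}        # block number -> bytes already streamed

    def read_block(self, n):
        if n not in self.cache:                        # first touch: remote
            self.cache[n] = self.server.fetch_block(self.app_id, n)
        return self.cache[n]                           # afterwards: local
```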

Every software project I have worked on started with some form of conflict and complicated interactions.  This usually resolved itself through an agreed definition of roles and responsibilities.  That definition kept people on the same page and helped everyone understand who was doing what.

 

Now depending on when you happened to look at my job title over the last 13 years, you may have seen one of the following:

  • Software Engineer

  • Application Developer

  • Enterprise Application Developer

  • Software Developer

 

This means only that I moved from one department to another; the actual tasks I performed were the same.  My output may have had a different installer or wrapper, but fundamentally it was the same: I designed, developed, tested and deployed an application into our environment.

 

When it was time to define the characteristics (metadata) of an application, we needed to start with definitions: not only what an "Application" is, but what "Software" is, and how (or if) the two differ from each other and from an "Operating System".

 

This is vitally important because no matter who you talk to, they will have a difference of opinion in this area.  Let me give you an example that we are currently dealing with.  We are implementing a CMDB (Configuration Management Database) for our Service and Support organization.  As our application data was pumped into that solution, we had to decide whether each item was an application or software.  The CMDB definitions basically stated that software comprises the core items used to build a hosting platform, whereas an application is the code hosted on that platform.  A very specific definition for a very specific implementation.

 

Our definition was much simpler.

If it's coded, if you develop it, it is a software application, simply referred to as an "Application".  It can be developed internally or purchased.  An application is not an operating system.

That means that everything running in our environment that is loaded on top of an operating system is an application and needs to be inventoried.  That also means a web-based solution, with software code, hosted within a web-hosting solution, is still an application.

 

We did draw a very distinct line in that we did not want to inventory certain things: items that are "configured" inside of other applications.  Items such as:

  • Web sites without dynamic content, hosted within a dynamic web solution such as Microsoft SharePoint, or created with Microsoft FrontPage or another WYSIWYG client.

  • Templates configured for an application.

  • Fileshare

  • Hosting Platforms (configuration of hardware and application software)

 

So that people can evaluate their "Application" before attempting to add it to the inventory, we came up with some simple rules (see the sketch after this list).  A candidate has to meet all of them with a yes response.

  • Installed on Intel (or contracted) hardware?

  • Initially used by more than one person (or application) at Intel?

  • Does this have (or has it ever had) a development/support team?

  • Does this have (or has it ever had) a development/release process?
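
Expressed as code, the screen is just a yes-to-all predicate.  Here is a minimal sketch, with field names invented for illustration; our actual tool does not use these identifiers.

```python
# A minimal sketch of the yes-to-all screening rules; the field names
# are invented for illustration, not taken from our actual tool.
def qualifies_for_inventory(app) -> bool:
    checks = [
        app.on_intel_or_contracted_hardware,       # rule 1
        app.used_by_more_than_one_person_or_app,   # rule 2
        app.has_or_had_dev_support_team,           # rule 3
        app.has_or_had_dev_release_process,        # rule 4
    ]
    return all(checks)   # every rule must be answered "yes"
```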

 

This minimizes the possibility that we inventory applications that are sitting in a box, not installed in the environment.  It also means that items we paid for, installed, licensed and such are included.  Whether on a server or on a client, we need to know about them so that we can work towards the simplification of our inventory.

 

Next I will cover how we have gone about gathering this data.  Some approaches work well while others don't.  Additionally, before you start gathering data you must have solid review, maintenance and data quality processes in place, or the data will be of no use for future analysis.

 

Have you undergone a similar process?  Are you struggling with doing this inside your company?  Have questions?  Let us know.

 

For decades Intel employees have started and ended the day looking at the same gray/blue/brown (depending on your site) sound-soak dividers. They provide solace and security, a place to get on with your work. They give you something to pin things to and space to hang your all-important whiteboard.

 

 

It's been the norm to start a 'Meerkat' discussion with your neighbour, or throw foam balls to a teammate whilst a long call drags out. Cubes have been part of Intel's culture, as much as transistors and the bing bong.

 

 

This is about to change.

 

 

I'm working on the IT side of a large project currently underway in the halls of several US sites. The project has one focus: challenge the way we currently work. Several organisational reports and visits to other companies have shown the current approach is just not as effective for the employees we now have.

 

 

Ten years ago people came in Monday to Friday; they worked in teams within the same geo or even exclusively on the same site. Teams or even whole groups sat near each other (from designers to manufacturers); they went to lunch together and all left for home around the same time.

 

 

This just isn't the case today. Intel's workforce, like many, is globally diverse. Your cube neighbour now manages a team out of the US that is working on a large project in Asia for delivery to a customer in Africa. These changes in the workforce have had several impacts:

 

 

People are not physically around as much: technology at home means you can be as connected out of the office as in it. Wireless technology coupled with video and voice means employees can meet each other when crossover times allow.

 

 

When people are around they want to network; they want to use some flexible space to crunch a problem, or perhaps hold private phone conversations.

 

 

Private calls are not private any more; people want a smaller space where they can take calls with remote managers.

 

 

Because of these changes, the pilots we are working on aim to better facilitate those requirements. We are aiming to achieve this through several things:

 

 

Smaller conference rooms designed for just one or two people: enough space to sit and take calls, but not enough to be booked by teams. These meetings happen today but can tie up larger rooms, making it harder for larger teams to meet.

 

 

Deploying a more flexible IT environment that allows quick deployment and meets high demand. Mobile technology is something Intel IT has always focused on; here we are taking it a step further by going 100% wireless, even for desk areas (you can find out more about the primary wireless campus on our IT@Intel site). IT is also integrating phone services into the notebook to remove the need for a desk phone. Those who specifically want a phone can log into any handset.

 

 

Flexible, open zones to encourage quick whiteboard problem solving: not so much formally booking 60 minutes of meeting time as just pulling around some chairs and working with the team around you.

 

 

Free things like coffee and snacks are being introduced, again to encourage employees to come into the office.

 

 

At this stage employees can still choose to have a permanent desk, others have elected to be part of open zones, with no permanent home to call their own.

 

 

None of the things we are doing in these pilots is untried; others have tried and implemented them all before. But this is the first time we are trying them with our employees, and as any good IT shop will tell you, each customer group has its own requirements.

 

 

I will be posting updates as we see how the pilots develop.

 

 

 

 

[Photo: 'Flexible areas' with lots of seating and snack areas]

[Photo: Unassigned desks]

 

 

Over on the IT @ Intel blogs, I talked about whether Corporate Blogs Really Matter some time back. Several of you provided comments and questions, and I wanted to take a moment to answer a couple of them.

 

Michael commented: "I like reading about what you are thinking about, and how you are making a difference in the lives of Intel staff."

On this topic, I did a two-part post on what we were doing to build a technical community within IT. You can check out these posts at the following links: Building a Community Within IT, and Lets Jam. Those posts are pretty extensive, and talk specifically about how we're making a difference in the lives of IT employees, so I won't repeat that here.

Yvan commented: "I would like to hear some of the management problems you encounter when doing your job."

     Here's a specific one that has been a challenge - Many of the employees who post here on the IT @ Intel blog are not directly part of the IT @ Intel program, and therefore don't have social media/networking as part of their job description. That means we have our normal jobs but also participate in this stuff on the side. Making the time for posting and commenting is one thing, but being recognized for it is the bigger challenge. How do you make sure that your manager sees your blog as strategic for Intel and not a waste of time that takes you away from your job?

     I've personally been very lucky that part of my job is focused on community development (you can read about that on the links above). On my annual performance review I have an entire section of accomplishments that are directly related to work I've done in support of social media. My manager didn't ask me to put it on my review, I did it because I felt that it was important - but I still had to educate him about it and the value it provides to the company.

     Sometimes middle and senior management just don't "get it". Unless they themselves are participating in the community they don't necessarily see the value it brings. To them it's just a diversion from what employees are actually paid to do. But what if the company saw it as a strategic advantage vs. a perk or side effort? What if the entire company, every employee all the way up to the CEO, was actively involved in being a spokesperson for the company?

Paul O., our CEO, is a blogger on our internal systems. It's not a weekly or monthly thing, but he does it, and it's something that employees appreciate and look forward to. Our CIO recently kicked off his first blog as an attempt to change the way he communicates to IT. It's been a huge success already. As soon as we start to see blogging as another form of communication like using the telephone, sending an instant message, or walking down the hall and speaking to a group of people, then it doesn't become a diversion/distraction, it becomes part of your life/job.

     Personally, I hate talking on the phone - I'd much rather have someone communicate to me via an email, a blog post, or a face to face conversation.

The way that we communicate as people is changing, and blogging is one of those new ways. Making the switch from tapes to CDs was a big change; rotary to touch tone changed the way we dialed; learning how to send a text message instead of calling someone was huge; so what's the big deal with blogs and forums?

 

It takes time to educate management on the value of social media, and it takes time for them to formally recognize it and make the time for it. But if you can get there, and you can start to use social media as a strategic advantage for your company, then you've got it made. It just takes the time to sit down with your boss and say - "Here's how my participation in this activity is adding to Intel's bottom line. And here's how it helps me do my job better." Speak their language, and the change will happen.

 

 

Keep the questions coming - let us know what you want to hear about as it relates to IT @ Intel.

As social media adoption begins to gain ground, the requests are starting to trickle in: "I want to start a blog/wiki/forum internally or externally...but I only want a certain group to have access to it."  The enterprise and marketing social mediaites have done our due diligence and attempted to find solutions to meet the business needs, but it typically means advising them that social media may not be the right fit.  Then I read today that a company called Mixx (a Digg equivalent) is adding private email and group message boards to its offerings.  Whoa! Stop the presses.  I am challenged by what appears to me to be counterintuitive. Stepping back and looking into the enterprise, I ask, "Can social productivity really be social productivity with velvet ropes?"

 

I have always been of the mindset that in order for community to be built, innovation to be fostered, and collaboration to be achieved, everything needs to be public.  If you start to form "silos" of private groups, private messages, private forums and private blogs, then your ability to leverage the power of the community is lost.  As Steve Bell (in Social Networking - Bookmarks - Social Productivity) and Sam Lawrence have referenced in previous posts, "Social productivity...is about getting work done outside the team of like-minded people you work with everyday....an idea is introduced and all sorts of people get to chime in...your idea has developed openly by all sorts of people who bring their own valuable perspective."  Sam cites Wikipedia as a prime example of nontraditional collaboration at its finest.

Intel started internal blogs & forums in 2004; built Intelpedia, our first internal wiki, back in 2005; and subsequently launched the internal IT Innovation Zone, a collaboration & sharing site, in 2006.  These are open to the entire company, and we have had strong success with these tools.  So is IT now getting requests to go smaller, to go private, because these tools aren't meeting business needs, or because we as a company haven't fully embraced the culture shift to social productivity?  With the Mixx announcement I am giving deeper thought to what social media looks like within the enterprise, the desired results of social productivity, and whether private subcommunities are necessary for optimal collaboration and communication.  I still say "no".  I believe that velvet ropes and social productivity are like oil and water. They don't mix.  Am I wrong?

Fourteen months ago my (new) manager approached me with a unique problem. It was discovered that we did a very poor job of keeping track of the applications (software) that we build and/or install within our enterprise environment. This lack of tracking also extended to all the associated hosting data (hardware, configuration and such). That's not to say that some organizations inside our company weren't doing a great job; however, these pockets of excellence were the exception, not the norm.

 

My (now) manager explained that a new group would be forming to help find opportunities to save money in the application and hardware space. His problem for me to solve was that there was no single solution, no one source for data, nor any single location for recording information. As a technical lead with several years of experience in metadata modeling and tracking, he asked me to go off and propose a solution.

 

As I mentioned, we had several pockets inside the company tracking the data necessary to perform their job functions. After performing an internal environment scan and analysis, and in several cases engaging with the teams and their tools, I was left holding an empty bag.

 

What did I discover? There were 12 different applications (a mix of vendor-provided and internally custom built) and around 14 additional data sources (spreadsheets and databases). Through engagement, it was determined that many of these solutions were on aged technology and some had lost their developers to other projects. Additionally, most of the existing inventories had only one small piece of the puzzle; they were only concerned with the one bit of information that solved their business need. This meant that enhancing an existing solution was not going to work.

 

So off to the drawing board I went to try to come up with a quick solution that would enable us to rapidly gather our application inventory, the right way, the first time. My proposal was taken to senior management, and four weeks later three of us had completed the first version of an application that is still in use and serves as the Record of Origin (ROO) for application metadata within our company.

 

How does this all save us money?

Let me give you an example that anyone can relate to.  Think of it in terms of an inventory of the food in your cupboards. If you go to the store without any knowledge of what food (canned, packaged, frozen or fresh) you have, how do you know what to buy? One of the kids may know you have some corn; another may know you have a pot roast; but is that inventory accurate? Until you get into all your cabinets and do an inventory, you'll never know what you have. Additionally, without putting a system in place to maintain that inventory, the next time you make dinner your list is completely useless.

 

Continuing my food example, let's say you want to make a lasagna. How do you know if you have the ingredients? How do you know you don't already have one, frozen and ready to heat?

 

This is the problem we found ourselves in (applications, not food). We had no idea if we had overlapping functionality. Our capabilities were meeting our (internal) customer demands, but were we meeting that need at a higher cost than necessary? Were we asking people to use a tool that was outdated (technically speaking) when a newer, faster, cheaper solution existed in the room next door?

 

These are some of the problems that we looked to solve. And with this, we have some pretty interesting plans on how to save money. I will cover that in my next entry.

 

Have you had similar issues at your company? Do you currently have this challenge before you? I'm curious to hear some of those challenges and potential solutions.
