Everyone wants information security to be easy. Wouldn't it be nice if it were simple enough to fit snugly inside a fortune cookie? While I don't promote such foolish nonsense, I do on occasion pass along readily digestible nuggets to reinforce security principles and get people thinking about how security applies to their environment.

Common Sense

I think the key to fortune-cookie advice is 'common sense' in the context of security. It must be simple and succinct, and make sense to everyone, while still conveying important security concepts.

Here is my Fortune Cookie advice for June:

     A perfect security program does not make your environment invincible! That would be astronomically expensive. The 'perfect' security program achieves the optimal balance of spending, loss prevented, and acceptable residual losses.

Now if I can just figure out how to stuff these little cookies...

Am I contributing to the problem of oversimplifying security? Or am I reaching those who would never invest the inordinate amount of time necessary to understand the complexities and nuances of our industry? You decide, and feel free to share your own knowledge-nuggets.

Fortune Cookie Security Advice - May 2008

It is vitally important to give data consumers an indicator of the quality of your information. This helps build trust in the completeness and review state of what they are consuming. What we have implemented is real-time, driven by embedded business rules, and topped off with a pretty little display.

So what did we do?

  • Created a five-tiered rating system: the Data Quality (DQ) State

  • Moving up each tier requires passing data-completeness and audited quality checks

  • As the software application moves through its life cycle, additional data elements become mandatory, which affects the dynamically calculated rating

  • The DQ State value is exposed for consumption by interfacing systems

  • Shown on-screen with a graphical representation

What is involved in each DQ State tier level?

  • DQ State 0: Does not meet the minimum required data

  • DQ State 1: Name, Business Description, Status, Manufacturer, Owner (Group/Contact)

  • DQ State 2: State 1 plus - Host, Software Type, User (count/location), Data Classification, Technology categories

  • DQ State 3: State 2 plus - Cost Assessment

  • DQ State 4: State 3 plus - Capability categories, Network communication details, Business Continuity details

This tiered approach defines progressively higher data quality as a record moves up the levels. The state calculation is driven not just by having the blanks filled in, but by embedded business-rules analysis that validates the content. The rating is recalculated whenever any of the evaluated content changes.
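
To make the mechanics concrete, here is a minimal sketch of how such a tiered, dynamically recalculated rating could be computed. The field names mirror the tiers above, but the code is a hypothetical illustration, not our production implementation:

```python
# Hypothetical sketch of a tiered Data Quality (DQ) State calculation.
# Field names are illustrative stand-ins for the tiers described above.

TIER_REQUIREMENTS = {
    1: {"name", "business_description", "status", "manufacturer", "owner"},
    2: {"host", "software_type", "user_count", "user_location",
        "data_classification", "technology_categories"},
    3: {"cost_assessment"},
    4: {"capability_categories", "network_communication", "business_continuity"},
}

def dq_state(record: dict) -> int:
    """Return the highest tier whose required fields, and those of all
    lower tiers, are present and non-empty; 0 means below the minimum."""
    state = 0
    for tier in sorted(TIER_REQUIREMENTS):
        if all(record.get(field) for field in TIER_REQUIREMENTS[tier]):
            state = tier      # this tier's data is complete
        else:
            break             # a gap at this tier caps the rating
    return state

# A record that satisfies tiers 1 and 2 but lacks a cost assessment:
app = {"name": "Payroll", "business_description": "Pays people",
       "status": "Production", "manufacturer": "Acme", "owner": "HR-IT",
       "host": "srv01", "software_type": "COTS", "user_count": 250,
       "user_location": "US", "data_classification": "Confidential",
       "technology_categories": ["ERP"]}
print(dq_state(app))  # -> 2
```

In a real system the function would run on every change event, and each presence check would be backed by content-validation business rules rather than a simple non-empty test.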

What do you do in your organization? How do you ensure that data "freshness" is preserved?

Previous topics include "Application inventory, what do you capture?", "Application inventory starts with a definition", "Application inventory as a cost savings initiative", and "Application Inventory, the start of data sustainability?".

Since the previous post in October there has been much interest in our two pilots aiming to reduce information overload, and I've responded to every inquiry with the quintessential engineering attitude of "we'll have to wait until the data is in". Well, the data is finally in, and now I can reward your patience and share the main points.

You will recall we were running two pilots:

1. "Quiet Time" on Tuesday morning.

In this experiment, 300 engineers and managers at two US sites (Austin, TX and Chandler, AZ) agreed to minimize interruptions and distractions every Tuesday morning. During these periods they set their email and IM clients to "offline", forwarded their phones to voice mail, avoided scheduling meetings, and isolated themselves from visitors by putting a "Do not disturb" sign at their doorway. The purpose was to see the effect of four hours of contiguous "thinking time".

On the whole, the 7-month pilot returned markedly positive results. It was successful in improving effectiveness, efficiency and quality of life for numerous employees in diverse job roles. 45% of post-pilot survey respondents found it effective as is, and 71% recommended we consider extending it to other groups, possibly after applying some modifications.

As expected, this is not a matter where "one size fits all": not everyone found it a desirable practice, depending in part on their specific job roles. But an interesting finding is that Quiet Time is useful to different people for different reasons. Some need it to concentrate on creative tasks, as we had predicted, but even people whose work involves ongoing interaction with others found the periodic "breathing space" beneficial in restoring balance and regaining control of an otherwise hectic work routine. One should, we learned, let each person decide how to use the quiet hours to best effect. A key success factor, however, is that people must realize the "quiet" requirement is not absolute; when an urgent situation requires it, interruptions are permitted. Communicating this clearly became necessary halfway through the pilot.

2. "No Email Day" on Friday.

It has been noted (and often ignored) that "No Email Day", or "Zero Email Friday", is a misnomer, but the name had caught on widely before we got to it, so we kept it. In reality, email is not forbidden on Fridays. The idea is to counter the tendency to send email to a coworker in the next cubicle rather than walk across the aisle and talk, by encouraging face-to-face and telephone conversation in preference to email within an organic group, which in our case comprised 150 engineers and managers.

This pilot was less successful than "Quiet Time", though 29% of respondents did find it effective, and 60% recommended we consider extending it to other groups. The issue, we found, was a clear incompatibility between NED and the nature of work in the chosen pilot group, where many people are routinely away from their desks or in meetings much of the time. That makes asynchronous email the method of choice for reaching people in the group. It is easy to conjecture that for NED to work better, it should be applied in teams that are not only collocated but also tend to sit in their offices most of the day, so a coworker is predictably available to be spoken to synchronously when the need arises.

Our next steps will be to present these data to management and consider proliferation to other groups at Intel who might find either or both practices useful in the context of their work style.

I have just returned from the Intel sponsored Eco-Technology Great Debates where I was slotted into the topic of Thin vs. Thick Client Energy Efficiency.  I had the opportunity to weigh in on the side of "Thick" clients as the most energy efficient.  The bad news is that our team lost; the good news is that we didn't lose by much (29 to 24)!  The best news is that all of the teams had some very strong arguments (and even several very entertaining exchanges).

Being a simple data center guy, I learned a lot, especially as it relates to thin client architecture and energy impacts.  No contest, thin clients consume less energy at the device level than do thick clients (PCs and laptops). But is that really the energy-efficient answer?

For thin clients, compute and storage are necessarily displaced to the data center.  Data centers, with their concentrated IT equipment, are typically inefficient to power and cool relative to laptops and PCs, which are distributed by nature and cooled by ambient air.  Generally, a data center requires 1 watt of power for cooling and electrical distribution ("house load") for every 1 watt of IT load (newer data centers are more efficient but still incur additional power costs simply to power and cool).  Therefore, every kW of load shifted from distributed thick clients to a data center causes roughly 2 kW of impact in the data center!  Wow!
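
To put rough numbers on that claim, here is a back-of-the-envelope sketch using the 1:1 house-load ratio above. Every wattage is an assumption chosen for illustration, not a measurement:

```python
# Back-of-the-envelope client energy comparison; all inputs are assumptions.
pue = 2.0               # facility power / IT power: 1 W house load per 1 W IT
thick_client_w = 60.0   # desktop PC at the desk, cooled by ambient air
thin_client_w = 15.0    # thin client device at the desk
server_share_w = 30.0   # per-user slice of the displaced server/storage load

# Load shifted into the data center pays the house-load overhead as well.
thin_total_w = thin_client_w + server_share_w * pue

print(f"thick client: {thick_client_w:.0f} W per user")
print(f"thin client:  {thin_total_w:.0f} W per user "
      f"({thin_client_w:.0f} W at the desk + "
      f"{server_share_w:.0f} W x {pue:.1f} in the DC)")
# thick client: 60 W per user
# thin client:  75 W per user (15 W at the desk + 30 W x 2.0 in the DC)
```

Under these assumed figures, the thin client's device-level advantage disappears once the displaced data center load is charged its house-load overhead.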

With the majority of the world's data centers facing power or cooling capacity constraints, and some with no additional grid power available at all, total energy costs extend beyond the simple house load + IT load equation.  Expansion and upgrade of facilities increase energy consumption as well. There are too many areas to detail here, but needless to say, the total power consumed in extracting and manufacturing data center components, transporting them to a site, and constructing new facilities is non-trivial, and likely larger per unit of compute than for the typical laptop. This collateral consumption is not accounted for in any calculation of alternative client model power efficiencies of which I am aware.

I also have no specific data on the power efficiency of PCs or laptops to allow a rigorous comparison against data center power utilization efficiency. The above arguments, however, do appear to be logical.  More work needs to be done to collect the data and analyze these concepts in detail.

If you want to see the instant replay of all of the debates (including the client debate, liquid vs. air cooling, and AC vs. DC power in the data center), click on the web link above and look for the embedded webcast URL at the bottom of the resulting page.  There are also a couple of links to other articles on the subject that are well worth reading.

TTFN!

It's inevitable…  a few times a week, my system slows to a crawl doing seemingly mundane tasks.  Moving from one application to the next, or even navigating our intranet, becomes a trial of patience.  Originally I thought it was the application set I was using on a daily basis: enterprise resource planning, internet browsers, development studios, mail and instant messenger clients.  Each of these is a known resource hog vying for what little available scraps of memory my system would cough up.

After some hallway grumbling with my co-workers, I turned my attention not to what I was running, but to what was being run for me: automatic backup utilities, automatic patching software, and the anti-virus suite with its omnipresent host intrusion protection.  These applications lurk in the background, helping to keep us safe from the pitfalls of the electronic age.  They are absolutely necessary to protect our company and its stockholders, but that value can come at a high cost.

Any one of these apps, coupled with your normal application load, can bring an older system to its knees on its own.  But how about your backup utility kicking off while your antivirus software is mid-scan, just as you happen to be running collaboration software sharing out a debug session from your development studio?   Not pretty.

The productivity loss is cumulative: two minutes here, five minutes there, ten minutes for a reboot after a hard crash.  Soon you've lost an hour or two over the course of the week, or a day or two over the course of a month.  These losses can be minimized by having systems capable of handling the combined application load that users need and that the ever-shifting security environment requires.  The threats won't ever go away. More than likely they will get worse, and the applications needed to stop them will get bigger and more resource-intensive.
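
As a rough illustration of how those small delays compound (the per-incident times match the examples above, but the weekly counts are assumptions, not measured data):

```python
# Rough arithmetic on cumulative productivity loss; counts are assumptions.
incidents_per_week = {
    "slow application switch": (2, 15),   # (minutes lost, occurrences/week)
    "intranet crawl":          (5, 8),
    "hard-crash reboot":       (10, 2),
}

weekly_min = sum(minutes * count
                 for minutes, count in incidents_per_week.values())
print(f"~{weekly_min} min/week -> ~{weekly_min / 60:.1f} h/week, "
      f"~{weekly_min * 4 / 60:.0f} h/month")
# ~90 min/week -> ~1.5 h/week, ~6 h/month
```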

As the person responsible for driving social media within our enterprise, I have come to realize that the best darn enterprise social tools don't magically turn your company into a social enterprise.  There is a core foundation that must be present, or you cannot reach social-enterprise utopia. There are realizations that must occur, or you will not succeed.  There are (sometimes) painful things you must do.

•     Silos must come down like the Berlin Wall: 

I bang into silos on a daily basis.  Corporations love silos.  I remember clearly one of my university professors stating that a threat to innovation is that people hoard knowledge.  Knowledge is power.  To become a social enterprise, sometimes a significant cultural shift has to occur.  Power must shift from teams, groups, organizations and individuals to the masses.  Knowledge needs to make the enterprise powerful, not the silos within it.  For example, I recently happened into three proposed silos in our marketing organization.  One team wants to build a knowledge-sharing and collaboration system to vet innovative ideas.  Another team is budgeting to put in social networking software for all sales and marketing personnel, mainly for the field to "find experts".  And lastly, the marketing organization as a whole will have an exclusive best-known-method (BKM) sharing and networking solution that just marketing will use. If you assume that all the innovative ideas and expertise you will need are housed within one organization, then you are sorely mistaken.  Applying a social tool over a silo doesn't suddenly make you more innovative.  Smashing down silos, and not allowing any new ones, serves innovation up to the whole company.  Social media routes around those silos and traditional boundaries.  It connects people based on interest, not position in the hierarchy. True social enterprises apply social tools that allow the wisdom of crowds and six degrees of separation to prevail.

•     Consumerism affects what you do inside your four walls: 

How people use technology to interact, collaborate and communicate outside of work DOES affect what they want to do inside work.  A very clear bar has been set by expectations formed through the consumerization of social tools.  For example, if your social networking tool isn't as intuitive as the external sites, employees won't use it.  This doesn't mean employees want a "wall" to write on or widgets that let them throw pies at each other. They just want the same ease of use and utilitarian enjoyment they get externally, made appropriate for business. Read What Gen Y Teaches Us About Enterprise Social Networking for ah-ha's out of a focus group with recent college graduates.

•     Understand that people will go down with the email ship:

We are not delusional enough to think that any of these social tools will replace email for people.  We all know that email was never meant to be a collaborative tool, but somehow that is the reality.  Social tools need to be ingrained into current business processes.  For example, email alerts should fire when I am asked to join a community or someone comments on my blog post.  The profile I maintain in my social networking tool should be the unified profile everyone sees in the company directory, email, instant messaging, blogs and wikis (to name a few).  The wiki should be incorporated into team workspaces and easily accessible.  Implementing social tools in a disparate way, or thinking that you can simply replace current knowledge management tools, will be a barrier to adoption.

•     If it takes a manual to use it – throw it out the door: 

When was the last time you read a manual?  Seriously.  Does any software or computer even ship with one anymore?  Can you even find an online manual for Digg, LinkedIn, Twitter or the like?  If you answered no to these questions, then you will need to say "no" to any manual being required when pulling social capabilities inside the enterprise.  It all comes down to usability.  Ease of use has to be your #1 criterion.  We are recommitting to user-driven design.  We have painfully realized that the complexity of our enterprise architecture has the capability to turn our social software into mush.  Our users are guiding us to rise above the complexity and focus on simplicity without sacrificing feature richness.

•     If IT doesn’t act now, then someone else will: 

Social media tools can quickly "go wild". Listening to your business customers, and staying keenly aware of what people are doing in external applications or hosting on a server under someone's desk, is critical to taming the wild beast within social tools.  Just as instant messaging (IM) found its way into your enterprise, so will social tools.  We have some taming to do, particularly with wikis.  We are at the critical inflection point of deciding to pull in enterprise-grade social networking.  If we in IT don't act swiftly, I guarantee you someone else will.  It is a reality IT cannot run from.

So far, my key learnings come down to the above.  I fight these challenges daily.  It all boils down to the fact that, at the end of the day, social media isn't about the tools… it's about people.

I remember back when I worked in the field of organic agriculture and environmental marketing. No one had a clue what I meant when I referred to the importance of "going green." Yet today the green debate has rapidly spread from the rows of organic farms to the halls of corporations all over the world. Even technology companies are joining the movement and debating the issues at hand.

On June 11, 2008, experts on various sides of the eco-technology issues will converge in Santa Clara to debate these "hot" topics:

  • Data center efficiency: AC vs DC power

  • Data center efficiency: liquid vs air cooling

  • Client: thin vs. thick client

In addition to the debates, the event features keynotes from Lorie Wigle, general manager of Intel's Eco-Technology Program Office and president of the Climate Savers Computing Initiative, and Andrew Fanara, head of the ENERGY STAR product development team at the U.S. Environmental Protection Agency. Register to attend in person, or tune into Open Port's Blog Talk Radio the day following the seminar to hear interviews with the speakers.
 
This debate should be quite compelling, with industry experts from esteemed organizations like IDC, Lawrence Berkeley National Laboratory, Emerson Network Power, Intel, Microsoft, InfoWorld, and Verari Systems, to name a few. View the complete schedule and register today for this one-of-a-kind opportunity.

Measuring the value of information security programs is difficult, and a problem for the entire industry. Come join us for a three-part series discussing the challenges, how Intel is taking a practical approach, and where the future may take information security metrics.

Last week, Matthew Rosenquist and I discussed an actual Intel case study with Enrique Herrera. In this last installment of the three-part series, we will discuss practical approaches to determining the value of information security initiatives, including some future-looking ideas and how security metrics might be implemented on a national scale.
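
Intel's specific methodology is described in the whitepaper mentioned below; purely as an illustration of the kind of math involved, the textbook return-on-security-investment (ROSI) calculation compares the loss a control prevents against what the control costs (all figures here are invented):

```python
# Textbook ROSI illustration; not Intel's methodology, all figures invented.
ale_before = 500_000   # annualized loss expectancy without the control ($)
ale_after  = 150_000   # expected annual loss with the control in place ($)
cost       = 200_000   # annual cost of the security initiative ($)

loss_prevented = ale_before - ale_after
rosi = (loss_prevented - cost) / cost
print(f"loss prevented: ${loss_prevented:,}, ROSI = {rosi:.0%}")
# loss prevented: $350,000, ROSI = 75%
```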

The show is 30 minutes, starting tomorrow (June 4) at 10:30 AM PDT. To listen in, go to the OpenPort home page; a little ways down on the left side you'll find the BlogTalk Radio link. Follow that link and the instructions there. You don't need an account to listen or participate in the discussion. If you can't make it live, you can find the recorded sessions in the same place after the show.

See you there!

Return On Security Investment - BlogTalk Radio

Wednesday, June 4, 2008

10:30 AM PDT / 1:30 PM EDT

http://communities.intel.com/index.jspa

Measuring the value of information security programs is difficult and a problem for the entire industry. In this second installment of the three-part series, Intel security professionals Tim Casey, Enrique Herrera, and Matthew Rosenquist discussed a practical approach to determining the value of information security initiatives: Intel's security value methodology, outlined in the whitepaper Measuring the Return on IT Security Investments.

Listen to how Intel utilizes this strategy as one means to measure the value of security programs. The whitepaper is available for download.

The 30 minute discussion can be replayed here:

The last installment of the three-part series, Future State of Security Measurement, will occur on Wednesday, June 4th. Everyone is welcome to participate or just listen in. Details can be found here:

http://communities.intel.com/openport/blogs/it/2008/05/12/how-do-you-measure-something-that-doesnt-happen

Greetings!

The great behemoth that is Intel is in fact made up of many, many tiny cogs...and I am one such cog.

I work as a program/project manager within Intel's Information Technology group, and my efforts are focused on addressing the IT aspects of Intel's acquisition and divestiture activities (aka mergers and acquisitions...aka M&A...although it seems lately that we've been doing a fair amount of the 'divestiture' projects...but I don't think you'll see the term M&A&D being used anytime soon! It's just not very sexy.)

I'll probably take a trip down memory lane in a later post, as this is in fact my second foray into the world of IT M&A.

In short, my role involves working with the various business units within Intel when they decide to acquire a company or divest a piece of their business, and ensuring that all IT aspects of the transaction are addressed successfully. The PM role is responsible for everything from network connectivity for desktop and laptop systems, to servers and storage, to telephony and Blackberries. We work closely with our "partners" within Intel (we used to call them "customers," but I prefer the term "partners"...later post topic?) to ensure that people, assets and intellectual property are 1) brought smoothly into the Intel fold in the case of an acquisition, and 2) handed off smoothly in the case of a divestiture.

Wow, so much more I could add on this topic alone, but I'm a brand new blogger, so I must pace myself!

Thanks!

Chad Clemons
