
Take a company with more than 80,000 users who are administrators on their machines, mix in thousands of known applications, countless unknown ones, then just for kicks add in a move to 64-bit computing.  On top of that, give your users a new web browser and a new way of handling administrative permissions, and you get a sense of the magnitude of deploying Windows 7 to Intel's enterprise.


Last year, Intel made a decision to skip deployment of Windows Vista in favor of Windows 7.  As a part of this, we partnered with Microsoft in their Technology Adopter Program (TAP) to assist in defining the OS and making it as bug-free as possible.  What Intel gained from this is not only a stable and functional operating system, but an early look at what it would take to work within the constraints of a new security model.  As a result of these efforts, Windows 7 is now considered the "plan of record" operating system for Intel.  Now the heavy lifting begins.


One of the heaviest lifts is application compatibility.  There are numerous vectors to "app compat", most of which require a significant investment.  In this blog, I will talk about User Account Control, 64-bit compatibility, IE8, and application compatibility with older operating systems.


The first, and most significant, is UAC (User Account Control).  UAC was introduced in Windows Vista, but for various reasons, its implementation at most companies was delayed.  Microsoft has done a much better job with UAC in Windows 7, and it is Intel's intent to leave UAC on at the highest level, except for a few settings which simply cannot be deployed yet.  The best way to describe UAC is to talk about administrative access.  In Windows 7, a user can be an administrator but not be granted administrative access by default.  This is described as a split token: the security token has the capability to provide administrative access, but does not do so unless it first "informs" the user that they are about to do something that requires administrative access, or the user manually bypasses such a prompt.  If an application requires access to protected areas of the file system or the registry, the user is informed via a desktop prompt that something requiring admin control has been requested.  The user confirms that this is what they want to do, and the application proceeds.  The idea is that a user will acknowledge an action they are aware of, but if malware attempts to install something bad, the user will be informed and can deny the request.
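For application developers, the standard way to declare elevation needs up front is an embedded application manifest. A minimal fragment is sketched below (the element names come from the Windows SDK manifest schema; the level value shown is just one of the three options — asInvoker, highestAvailable, or requireAdministrator):

```xml
<!-- Fragment of an application manifest declaring UAC behavior.
     asInvoker runs with the standard-user half of the split token;
     requireAdministrator triggers the UAC elevation prompt at launch. -->
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
  <security>
    <requestedPrivileges>
      <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
    </requestedPrivileges>
  </security>
</trustInfo>
```

Applications that declare their intent this way get a predictable prompt instead of a silent failure.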


Where this gets to be a problem is when an application is not written to inform the user.  In that case, the application just fails, with no message to the user as to why.  Microsoft has provided an easy solution to this problem: by right-clicking the application icon, the user can choose to "Run as Administrator", and the application proceeds with full administrative access.  If this resolves the problem, the user can choose to always run the application as administrator by changing the shortcut used to start the application.


A decision made by Intel during the TAP was that this was the time to make the move to 64-bit computing; this allows us to be prepared for future needs, as well as take advantage of the higher memory capability of systems available on the market today.  However, 64-bit computing brings with it some significant challenges for application compatibility.  The primary challenge is that 16-bit applications are no longer supported.  Initially, you would think this would not be a big concern; 32-bit computing has been around for many years, and most applications have been ported to 32-bit.  However, many legacy applications still exist in an environment such as ours that is required to support older operating systems; in addition, many applications have been packaged using 16-bit installers.  These installers and applications will need to be changed.  How we are planning on handling that is discussed later.


Another issue with 64-bit Windows is that it uses different paths for 32-bit and 64-bit program files.  Applications that are hard coded to look for "Program Files" at runtime will fail when the application is installed in "Program Files (x86)".
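The defensive pattern is to resolve the directory from the environment rather than hard-coding it. A minimal sketch in Python (the helper name is our own; on 64-bit Windows a 32-bit process sees its files under the ProgramFiles(x86) variable):

```python
import os

def program_files_dir(env=os.environ):
    """Resolve the program files directory from the environment
    instead of hard-coding "C:\\Program Files".

    Prefer ProgramFiles(x86) (where 32-bit apps live on 64-bit
    Windows), fall back to ProgramFiles, then to the classic default.
    """
    return (env.get("ProgramFiles(x86)")
            or env.get("ProgramFiles")
            or r"C:\Program Files")

# On a 64-bit host, a 32-bit application's files live under the (x86) path:
example_env = {"ProgramFiles": r"C:\Program Files",
               "ProgramFiles(x86)": r"C:\Program Files (x86)"}
print(program_files_dir(example_env))  # C:\Program Files (x86)
```

An application that builds its paths this way installs and runs identically on 32-bit and 64-bit Windows.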


The requirement to use Internet Explorer 8 introduces even more challenges.  Intel has delayed deployment of IE7 and IE8 on our intranet due to known issues with some very important applications.  With the move to Windows 7, IE8 compatibility becomes a "must have".  IE8 does offer an IE7 compatibility mode, which can mitigate some issues, but other applications are written to require IE6, and mitigation of those issues must be addressed.  There are also known issues with such things as Office Web Components, IE plug-ins, Java versions, etc., that can really make this a challenge.


Finally, there are a whole lot of other issues that can crop up.  Such things as OS version checking, where an application is written to check for a specific version of the OS, rather than a minimum version, can cause an application to fail during either installation or runtime. 
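The version-checking trap can be illustrated in a few lines of Python (the internal Windows NT version numbers shown are real — Vista is 6.0 and Windows 7 is 6.1 — but the function names are ours):

```python
def meets_minimum(actual, minimum):
    """Correct check: require at least `minimum`, not an exact match.

    Version tuples like (major, minor) compare element-wise in Python,
    so a newer OS always satisfies an older minimum.
    """
    return tuple(actual) >= tuple(minimum)

def broken_check(actual, expected):
    """The anti-pattern the paragraph describes: an exact-version gate."""
    return tuple(actual) == tuple(expected)

win7, vista = (6, 1), (6, 0)
print(meets_minimum(win7, vista))   # True: Windows 7 satisfies a Vista minimum
print(broken_check(win7, vista))    # False: the exact check wrongly rejects it
```

An installer using the exact check fails on every future OS release; the minimum check keeps working.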


What does all of this mean?  It means that a significant amount of work needs to be invested to prepare for Windows 7 application readiness.  Comprehensive application inventories,  application owner engagement,  user segment analysis, test environments, testing workflow, remediation plans & tools, and "safety net" environments all have to be  managed.  What do I mean by "safety net"?  This is a term we use internally at Intel for solutions that provide XP native functionality, usually in virtual environments.  We have some interim solutions in place using Windows Terminal Services and XP Mode, but we are also taking the time to look at all available options, including both client and server-based enterprise virtualization solutions.


I am currently working on a white paper that will provide a look at how Intel has dealt with these issues, which I will publish here as soon as it is available.  In the meantime, I would love to know how others are preparing for the move to Windows 7.  Do you see the same kinds of challenges that we do, or are there others lurking out there that we have not yet considered?




I was remiss in not sharing the tremendous value we see in moving to Windows 7 in my earlier post. Any time a major company migrates to a new OS, issues will arise. We expect this, and Microsoft is working closely with us on our deployment to ensure we have a fully compatible environment ready for Windows 7. So far, Intel’s deployment has been moving along routinely: it is on schedule, and our expectation for total success has not changed. You may have read this elsewhere, as we’ve shared it before, but it’s worth repeating. Intel expects to reduce operating costs by $11 million over the next three years using Windows 7, and 97% of our employee early adopters said they’d recommend Windows 7 to their colleagues. So far, so good!

In Hannover, Germany, next week, technology leaders from around the world will be in attendance.  Intel's CIO Diane Bryant will be presenting on Tomorrow's Potential, Shaped by Information Technology Today.


IT faces many challenges - accelerating user requirements, globalization, and growing security issues. IT needs to be agile, responsive, and ultimately create value. Diane Bryant will highlight how creating value is not a one-time effort. Value relies on continuous investment, innovation, and foresight. The pace of business demands is astonishing. Equally astonishing is the pace of technology.


You will hear Diane talk about how, despite the worldwide recession and subsequent focus on cost, Intel IT continued its client and server refresh strategy and investments in IT. This led to enhanced productivity for all Intel employees, world-class recognition for Intel’s supply chain due to advanced use of IT, and, through the deployment of a high-performance compute environment, a positive impact on the development time of the latest generation of Intel products.


Attend this session and you will hear more about how Intel's IT organization delivers strategic value by providing professional support, applications, and solutions that enable Intel's growth and transformation.


Convention Center (CC), Room 2, 03.03.2010, 11:00 – 11:30  h

For the last 10 months we have been actively working to make our software inventory management (with a focus on End of Life) less reactive and more proactive. We have had great successes, as I've discussed in past articles, and last year we did indeed approach a 50% reduction from our initial inventory.


We are happy, but we can do much better.


If we keep doing the same things we did in the past (prior to our program kick-off in 2007), then we will end up with the same results:

  • Bloated inventory
  • Poor management controls
  • Underutilized or abandoned systems
  • Slow to react


The Application Portfolio Management (APM) approach looks at our software inventory and calculates the benefits compared to the costs. Knowing what you spend, and the value that a software solution provides, means you can determine whether there is any value in keeping the same application (or if there is overlap). Since we already mapped our applications against a set of capability frameworks, we are using those to drive our process.


Let me stop and give a quick explanation of our capability frameworks, since they are central to our process (and success). From an architectural perspective, we have three different frameworks (enterprise, infrastructure, and cross-enterprise) that applications are mapped to. Each framework has multiple hierarchical tiers with increasing levels of detail. Any application can be mapped to a single framework and to many different nodes. We have specific owners of each hierarchy who are responsible for managing the nodes and who work with the individual application owners to obtain and record the applicable information (in future posts I'll get more specific).


Here is an example of one node (two level-4 items) of our Enterprise Capability Framework:

  • Enterprise Capability Framework
    • Information Systems
      • Managing the IT Capability
        • Technical Infrastructure Management
        • Enterprise Architecture Management
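A rough sketch of how such a mapping might be represented (the application names and helper are invented for illustration, not our actual tooling): each application points at one or more nodes in the hierarchy, and queries roll up naturally by path prefix.

```python
# Hypothetical sketch: applications mapped to hierarchical framework
# nodes, expressed here as slash-separated tier paths.
app_mappings = {
    "AssetTracker": ["Information Systems/Managing the IT Capability/"
                     "Technical Infrastructure Management"],
    "ArchModeler":  ["Information Systems/Managing the IT Capability/"
                     "Enterprise Architecture Management"],
}

def apps_under(prefix):
    """All applications mapped anywhere beneath a framework node."""
    return sorted(app for app, nodes in app_mappings.items()
                  if any(n.startswith(prefix) for n in nodes))

# Applications mapped to level-4 nodes roll up to the level-3 parent:
print(apps_under("Information Systems/Managing the IT Capability"))
# ['ArchModeler', 'AssetTracker']
```

The roll-up is what makes gap analysis possible: a node with no applications beneath it is a capability with no coverage.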


What do we do with this? When thinking of IT as a business, there are certain capabilities that we need to deliver for our internal customers. These span the complete set of activities performed in our worldwide organization. There are many functions we outsource, such as the manufacturing of our processing equipment, but many we manage internally, such as email, architecture, or telephony services. Knowing the complete landscape means we also know where we have gaps (or growth opportunities). This is the focus of the framework mapping and as-is analysis in this area.


Now understand, this is but one component of the overall process. The actual mapping is the easy part: knowing that an application has certain functionality and delivers capability to businesses. The harder part is taking that data and sitting down to determine the breadth and depth of that capability, along with how well (health) it does that job. With health assessments in place, coupled with cost assessments, we can begin to get a better picture of how our APM process will play out. This also means we need to start considering areas we have not historically played in, such as licensing management and the costs associated with it.
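The scoring idea can be sketched as a toy calculation (every weight, name, and figure here is invented for illustration, not our actual model): combine benefit, cost, and a 0-to-1 health rating into one comparable number per application.

```python
def value_score(annual_benefit, annual_cost, health):
    """Toy APM scoring: benefit per dollar, discounted by health (0..1).

    A cheap, healthy application delivering the same capability can
    outscore an expensive, unhealthy incumbent -- flagging overlap.
    """
    if annual_cost <= 0:
        raise ValueError("cost must be positive")
    return (annual_benefit / annual_cost) * health

# Two hypothetical applications delivering the same capability:
incumbent = value_score(annual_benefit=500_000, annual_cost=200_000, health=0.6)
overlap   = value_score(annual_benefit=450_000, annual_cost=100_000, health=0.9)
print(incumbent < overlap)  # True: the overlapping app scores higher
```

Ranking all applications under one capability node by such a score is one simple way to surface retirement candidates.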


Our journey into this area started with a few key areas under a pilot approach to hammer out process issues and train the key architects in how to manage their own Level-2 capability nodes. We are learning as we go.


Have you taken your software management to the next step? If so, I would love to hear about your lessons learned.

Intel IT runs 95 data centers worldwide with almost 443,000 square feet (41,000 square meters) collectively. These data centers house approximately 100,000 servers, enabling both the operations and the innovations in Intel. How does a tour inside these data centers sound to you? Sounds good? Here is an opportunity!


IT@Intel just released a series of videos that takes you on a tour of a few of Intel IT’s major data centers. Not only will we show you the inside of our data centers, we will also share our approaches in facilities, compute, networking, and storage, as well as our strategy to create USD 650 million of value for Intel.


Now sit back and enjoy the ride!


In late 2008, we were experiencing an average of about 5,500 “blue screen” system crashes in our client environment per week. As a result, users were not satisfied with the stability of their PCs.


In order to improve stability, we implemented a proactive problem management process based on analysis of objective, largely system-generated data from client PCs across our worldwide environment. Using this approach, we have increased client stability by reducing the number of blue screen system crashes by more than 50 percent, and we are beginning to realize benefits in other areas, including unexpected shutdowns and boot time.


The solution has two aspects: one is the business process, based on the Information Technology Infrastructure Library (ITIL). The second is collecting exceptions from the client environment.


As the client environment is very rich and changes constantly, it is very important to identify in advance which exceptions impact the customer experience. Blue screens, for example, were identified as something that customers don't like (no big surprise there, I guess).

Once we started collecting and analyzing the data, we could identify trends and the main root causes, which we could then fix across the entire client environment.


Another approach was to recall the top N customers with faulty systems to the support center, which was a nice surprise to those customers, who realized that IT knew they had stability issues and fixed them without their needing to raise an incident ticket.

From my experience, the main contribution to the reduction in blue screens from 5,500 a week to 2,000 a week was proper problem management and releasing solutions to the whole client environment. Fixing specific systems one by one is not enough when managing so many systems.


Updating drivers across the enterprise usually did the job, and we focused on the ones we found to cause most of the blue screens.
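The prioritization behind that can be sketched simply (the field names and sample records below are invented): group the system-generated crash reports by the faulting driver, and fix the biggest buckets fleet-wide first.

```python
from collections import Counter

# Invented sample of system-generated crash records (bugcheck events).
crashes = [
    {"host": "pc01", "driver": "netdrv.sys"},
    {"host": "pc02", "driver": "netdrv.sys"},
    {"host": "pc03", "driver": "gfxdrv.sys"},
    {"host": "pc04", "driver": "netdrv.sys"},
    {"host": "pc05", "driver": "audiodrv.sys"},
]

def top_offenders(records, n=2):
    """Rank faulting drivers by crash count, most frequent first."""
    return Counter(r["driver"] for r in records).most_common(n)

print(top_offenders(crashes, n=1))  # [('netdrv.sys', 3)]
```

One driver update pushed to every affected system then removes a whole bucket of weekly crashes at once, which is what bends the trend line.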

The success indicator is the trending-down graph showing the decrease in the weekly blue screen count. This enables us to review progress, set goals, and know if we’re on the right track; see the screen shot of the trend below.


Key to these efforts were partnership across IT teams, indicators in place to measure and show progress, and an execution plan that ultimately improved customers’ experience with their client systems.

What is your experience in this field? How do you improve your clients’ stability?


You can download the whitepaper from here.




Last year, I spent many hours sharing the key learnings from Intel IT's server refresh strategy and talking with others about how we had to re-justify our refresh investment internally due to capital spending cut-backs. We created a tool based on our IT finance learnings that was used by over 40,000 people last year.  Many of our users had excellent suggestions on how we could improve the tool to meet a variety of user situations and needs.


We have just completed a several-month project to update the estimator with many new features that were suggested by tool users, including:


  • ability to model multiple server environments simultaneously
  • ability to model consolidation virtualization based on VM (virtual machine) loading
  • ability to view inputs and outputs simultaneously
  • addition of many new cost and savings variables that we had not used at Intel but were relevant to other IT professionals


Watch this short video about the history of the tool, or visit our user forum, where you can comment, ask questions, and share insights.


Access the tool here.



If you like the Intel Server ROI Calculator, here is good news for you! A new and improved version is coming soon!


Intel IT continued our 4-year server refresh cycle in 2009 despite the challenging economic environment. With that, we avoided US$12M in costs in 2009 (read more in our new 2009 Intel IT Performance Report). The learnings from Intel IT have been captured in the Intel Server ROI Calculator so we can share them with our peers in other IT organizations.


Here are some great new features in the coming version:

  • Ability to run the tool in both online and offline mode
  • Support for heterogeneous server environment; up to 5 server configurations can be modeled/evaluated at the same time
  • More responsive side-by-side modeling so you can see results immediately while making changes
  • More intuitive virtualization modeling that allows for adjustment of virtual machine density


Here is the teaser for the new version! Stay tuned!

So you go to your next security conference, and what will you most likely find? Most likely you will see many sellers of "security in a box." Not always, but much of the time, FUD is the technique used for selling their solution. FUD is commonly known in the information security community as fear, uncertainty, and doubt.


Who are the buyers of such security solutions? To most salespeople, the buyers are whoever has the money and the need for their product. Most buyers of such security solutions are influenced by a FUD strategy, and maybe that helps the buyer feel good about their purchase. But it will provide only a secure state of mind unless there are defined security processes in place that are being made more efficient by the product.


Operations security (OPSEC) is part of a layered defense program that keeps the business running securely. All of the following controls should be considered part of the operations security process, because they must be audited regularly to evaluate their efficiency and changed when a risk assessment determines it necessary.


Technical controls: Standards for system hardening, passwords, encryption standards, anti-virus and anti-spyware, firewalls and IDS/IPS, and general use of hardware and software to mitigate risk.

Physical Security: Limit physical access to systems to a select few and only those who need it. Equipment should be in a controlled environment with regulated temperature, power, and ventilation.


Managerial or Administrative Controls: This includes policies that require the aforementioned controls. This includes policies requiring background checks and segregation of duties. The security policies must be communicated through security awareness training for all stakeholders: owners, custodians, and users.


Vulnerability and patch management is one area that falls between administrative and technical controls because there needs to be a patch management process that defines the acceptable timelines for patch deployment before enforcement through technical controls should be implemented. Additionally, audits should be focused on determining the efficiency of either technical, physical or administrative controls.


Security should be considered a process in any organization, and therefore any product that is purchased should be purchased to improve that security process. Many organizations have created a false sense of security by installing a firewall without hiring a knowledgeable firewall administrator and defining a process to control its updates and configuration changes. However, a robust cyber security program is more than a collection of techniques and technologies put together in defense of a network. A sustainable security program must bind these into a cohesive framework driven by risk and compliance, and supported by assessment and training, with the common goals of protecting the confidentiality, integrity, and availability of information.

Time really flies; we are already in February 2010. I still remember the question posted by Saqib on a post by Steve Bell (My Time, My Key Learning's for Social Computing), so I decided to post an update on Social Media adoption in the Service Desk.



Back in early 2009, we were given the chance to conduct a POC (proof of concept) on a winning idea: opening up an alternate contact channel for end users with "How Do I?" type inquiries, and also promoting a learning organization within Intel. In selecting a platform for the new contact channel, the team decided to use the new community network (including blogs, forums, and groups (a.k.a. Communities)). We started small, piloting support for only 3 software applications (setting up 3 groups (communities)) with a few key Service Desk analysts involved in the pilot project. Once we had everything in place, we started to promote the new contact channel to the rest of Intel’s employees via email.



We faced some challenges at the beginning, as end users were used to calling, emailing, or chatting with the Service Desk to get help, and adding another contact channel did not seem to impress them. So we decided to change our marketing strategy. In mid-2009 the team participated in an innovation competition, where we set up a booth to introduce the new groups (communities) to end users. This time we were smarter: instead of telling end users that it was an alternate contact channel for getting help from the Service Desk, we told them it was a new social network community for the specific software, where people come together to share knowledge and BKMs (Best-Known Methods), and also to get support. I can recall vividly, throughout the exhibition day, telling end users that if they like Facebook, they will definitely love the groups (communities) we set up. Since then, we have started to see growth in the size of the pilot groups (communities).



In November 2009, even better news came along: the pilot team got an engagement to set up a new support group (community) for handheld devices. The new support group went live at the end of January 2010. In the 3 weeks since we started the new group (community), we have gained 200-plus members and around 150 communication entries. The new support channel through the community network is growing, and the team is confident about establishing more groups (communities) for the software applications that the Service Desk supports.


Metrics Show the Relevance of Information Security  

Everyone wants information security to be easy.  Wouldn’t it be nice if it were simple enough to fit snugly inside a fortune cookie?  Well, although I don’t try to promote such foolish nonsense, I do on occasion pass on readily digestible nuggets to reinforce security principles and get people thinking how security applies to their environment.  The key to fortune cookie advice is ‘common sense’ in the context of security.  It must be simple, succinct, and make sense to everyone, while conveying important security aspects.


Fortune Cookie advice for February, 2010:


Metrics Show the Relevance of Information Security


Although not easy, metrics show the relevance of information security programs, or the lack thereof.  Internal security does not generate revenue; it is a cost center.  The value of such initiatives is derived from the amount of loss they prevent.  Metrics can show this relationship and represent the value.  It sounds simple, but in fact it has been one of the long-standing challenges in the security industry.


Security metrics are immature.  No pervasive standards exist and organizations continuously struggle to independently show value.  Advances are being made, but we are not at a stable point of comfort and confidence.  More research is needed.  A recent Department of Homeland Security report ranks metrics as #2 of top security research areas.


Some metrics do exist, but organizations are currently faced with an awful decision: meaningful or accurate; pick one.  Vague metrics are possible but lack tangible results which can be compared or quantified.  A flashing red light does not speak to dollars saved, how systems can be improved, or the future outlook.  Nor do simple metrics accurately reflect true causality correlations.  More accurate metrics are very difficult or in many cases impossible to deliver.  The industry has not settled on provable and reliable methodologies which scale with any confidence.  What can be produced with high accuracy typically provides little substance and not much assistance when making complex decisions.  Although specific metrics can provide dollar savings for small environments, they are likely to lack accuracy and can easily be challenged.  Such false predictions may be cause for overall loss of confidence in a security organization.  A risk many groups don’t want to take.  Security metrics still have a long road to travel, though their role is undeniable in showing the relevance of security.




Fortune Cookie Security Advice - Confusing Security Measures and Metrics - September 2009

Fortune Cookie Security Advice - No Royal Road to Security - July 2008

Fortune Cookie Security Advice - Strategic Competitive Security - June 2009

Fortune Cookie Security Advice - May 2008

Fortune Cookie Security Advice - June 2008

Fortune Cookie Security Advice - August 2008

Fortune Cookie Security Advice - September 2008

Fortune Cookie Security Advice - November 2008

Fortune Cookie Security Advice - December 2008

Fortune Cookie Security Advice - January 2009

Fortune Cookie Security Advice - February 2009

Fortune Cookie Security Advice - March 2009

Fortune Cookie Security Advice - April 2009

Fortune Cookie Security Advice - May 2009

During our recent annual Intel IT Leaders Conference, we put together a fun video to highlight to the IT management team some of the key contributions that individual Intel IT experts are making inside our organization. I really enjoyed seeing my peers participate in this video and share their stories on camera.


While we highlight a few individual rock stars in this video, what really grabbed me was Tim Verrall’s statement in the video: “I don’t see myself as a Rock Star, just a member of an awesome band”.  This attitude is shared by all of our IT rock stars, as they truly understand what it takes to build a great team.


I invite you to watch this video – hopefully it makes you smile.


If you want the more serious side of Intel IT and the difference our people, operations, and solutions are making for our business, read the 2009 Intel IT Annual Performance Report.



More from Intel IT

Network World just warned IT to prepare for tremendous network traffic during the Super Bowl.  Peak demands happen routinely inside IT organizations, and IT has to be ready.  To be ready, IT must prepare in advance with a strategy to handle and manage peak demand.


Intel IT deals with peak demands inside our business constantly and this sizing paper talks about some of the impacts that govern our server sizing decisions each year.


Follow Intel IT on twitter

Google is the latest major player to establish a financial reward bounty for reporting software bugs in their products.  Opinions differ on paying outsiders for vulnerabilities in such a manner, but for the record, I fully support the idea!


I think these programs support security objectives on a number of fronts.  They bring to bear more resources to find the vulnerabilities, leverage positive aspects of greed to accelerate the process, and target the motivations of potential attackers to undermine their destructive activities.


Bounty programs tap extended resources to identify bugs in a constructive and competitive manner.  Even though Google likely has a very proficient security design team, they still will miss vulnerabilities that external researchers may find.  A financial incentive can direct more volunteers to the effort.


Reward initiatives leverage the ‘greed’ of potentially competing attackers and researchers.  Greed can be good.  In this case it creates competition among researchers and against attackers.  Researchers will strive to be the first to report a bug.  It accelerates the process of finding and closing vulnerabilities before an attacker can take advantage.  In doing so, pressure is put against attackers who are looking to exploit a new bug.


Bounties directly target the motivations and objectives of attackers.  For threat agents who are motivated by financial gain but are not set on doing harm, this provides an opportunity to leverage their hacking skills without crossing moral boundaries or being at risk of criminal prosecution.  These programs will also appeal to those seeking personal fame.  Positive recognition and validation by the software vendor builds reputation and looks very good on a resume.


Lastly, I suspect such enticements may also lead to conflicts within the internal dynamics of attacker groups.  Weak members, who may feel slighted or undercompensated, may choose to go behind their cohorts' backs and directly benefit from newly discovered exploits by reporting them.  There is no honor among thieves.  The potential of driving a wedge between members will give pause to organized groups of attackers and force them to limit who they involve and to manage their own internal security.  In a small way, it turns the tables against the very people who seek to undermine information security.  The irony is sweet.


Overall, I think a well-managed bug bounty program is a very good idea.  Only time will tell if the benefits can be measured and understood.  I fully applaud Google, Mozilla, and the like for taking this approach and hope to see others follow!



Last week I had the opportunity to attend the Intel IT Leaders Summit.  As a relatively new member of the IT team, I found the ability to network and gain insight into the priorities, challenges, and governance of the organization invaluable. It is not possible for me to capture in a single blog all my learnings from this two-day event, so today I will just focus on the role and vision of IT.



The Leaders Summit was an internal gathering of over 700 of the senior managers of the Intel IT organization to focus on the year ahead for the purpose of achieving our vision: Making IT a Competitive Advantage for Intel. 



Diane Bryant (Intel CIO) kicked off the session talking about how our customers (the business leaders of Intel) assessed our performance in 2009 and identified their needs for 2010.  The business identified three key themes worth noting that guided our discussion during the summit.

·         More Dependency on IT

·         Business desires greater strategic alignment with IT

·         “IT should just work”



For the past five years, I have personally talked to many IT managers, quoting industry consultants about similar themes being important to IT leaders and how they shape their organizations.  However, to see firsthand these concepts actively being used to guide our planning was a powerful experience.



Additionally, we had one senior Intel business executive conduct a Q&A with the IT leaders, and when asked about the value of IT, he made three statements:


·         without IT, there would be no Intel

·         if IT is mediocre, then Intel will be mediocre

·         if IT excels, then Intel has a foundation for excellence



I have thought about these statements a lot in the last week and believe they capture the essence of the Intel IT vision of making IT a competitive advantage for Intel and more broadly the role of any IT organization.



What do you think? Do the statements above reflect the relationship between IT and business?



At the event, the 2009 Intel IT Annual Performance Report (APR) was unveiled.  Much of what we discussed at the Leaders Summit is captured inside the APR.  The APR is an in-depth look at Intel’s IT operations, solutions, imperatives and key metrics. 




Intel and Micron Technology reset the NAND flash capacity bar a little higher and the chip-size threshold a little lower on Feb. 1 with the introduction of the world's first 25-nanometer NAND flash device.


The new 25nm, 2bit/cell chip can hold 8GB of digital capacity, more than 10 times the capacity of a standard compact disc [700MB].  The chip measures a mere 167mm² -- small enough to fit through the hole in the middle of a compact disc.


"This is not only the smallest NAND lithography in the world, it is the smallest silicon manufacturing technology in the world," Intel Marketing Director Troy Winslow told eWEEK on a conference call.


"This is now the largest capacity multi-level cell device on the market, at 8GB.  We were the first on 34nm, now we're the first on 25nm."


The smaller size allows multiple 8GB chips to be packaged more economically to increase storage capacity.  The new 25nm 8GB device reduces chip count by 50 percent compared to previous process generations, allowing for smaller, yet higher-density, designs and greater cost efficiencies, Winslow said.


For example, a 256GB solid-state drive can now be enabled with only 32 of these devices, versus 64 previously, Winslow said.  A 32GB smartphone needs only four, and a 16GB flash card requires only two, he said.
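The device counts quoted above follow directly from the 8GB die capacity; a quick sketch makes the arithmetic explicit (assuming decimal units, i.e. 1GB = 1000MB):

```python
# Sanity-check the capacity figures quoted in the article.

CHIP_GB = 8    # capacity of one 25nm, 2bit/cell NAND die
CD_MB = 700    # capacity of a standard compact disc

# "more than 10 times the capacity of a standard compact disc"
ratio = (CHIP_GB * 1000) / CD_MB
print(f"chip vs. CD: {ratio:.1f}x")  # ~11.4x

# Whole chips needed for common product capacities
for product_gb in (256, 32, 16):
    chips = product_gb // CHIP_GB
    print(f"{product_gb}GB product: {chips} chips")  # 32, 4, and 2
```

The 50 percent chip-count reduction versus the 34nm generation falls out of the same math: doubling per-die capacity halves the number of dies needed for a given product capacity.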


NAND flash memory, used in consumer devices such as smartphones, digital cameras, and personal music and media players, stores data and retains the information even when the power is turned off.  NAND flash also is gaining market share for use as components in high-performance solid-state drives for servers and storage arrays.


IM Flash Technologies, Intel and Micron's NAND flash joint venture, continues to cram more capacity onto tinier pieces of silicon about every six to eight months.  IMFT debuted its 34nm, 3bit/cell NAND flash chip last August.


The 25nm/8GB device is sampling now and is expected to enter mass production in Q2 2010, Winslow said.




The vast majority of people, and even most businesses, don't have to worry about being hacked or spied on.  Even the largest corporations won't necessarily be at risk.  However, nobody will ever really know whether they are being targeted unless they have a proactive security policy and the staff to enforce it.


For the rest of us, it makes sense to protect ourselves anyway, just in case, especially if it doesn't cost anything or take any extra effort on our part.  Secure email is one such facility that very few of us even know about, let alone use.  Those of us who use Gmail may use the HTTPS-only option, but even that isn't foolproof.


For businesses, the answer to securing email is pretty easy: you employ someone to do it for you.  That can be handled internally, by a third-party vendor that offers secure email hosting, or through a certificate authority like Thawte or VeriSign.  If your organization is involved in financial or medical transactions, things get a little more complicated: you have to worry about regulations like SOX and HIPAA, which take an extra level of security and infrastructure to comply with.


For the rest of us, there are free digital certificates that can verify the identity of the sender and also encrypt the contents of the email.  This secure email methodology is easy to set up, needs no extra administration, and best of all, is free of charge.


If you do a Google search for secure email, you will find a host of hosted email providers that offer secure, encrypted email services for a fee.  There are also a couple of companies, like Thawte and Comodo, that offer free digital email certificates for individual use.


Using one of these certificates in your personal email client can protect your personal information and communications from prying eyes.  The certificate means that the sender's identity has been vouched for by the certificate provider, who acts as a trusted middleman in the process.  If the public took more care over their private information, cybercrime would start to go out of fashion.


So while it may seem a little over the top, protecting yourself in any situation is key to keeping your identity and credit score intact.  Understanding and managing risk is all about identifying perceived threats, then taking action against them.  The other part is designing counters to those threats that don't impinge too much on your life, or get in the way of what you're doing.


Using a digital certificate to secure email, whether private or business-related, is a good first step in protecting yourself against attack or identity theft.  Even if nobody is watching you, using encryption is a good safeguard for when someone is.

There has been a lot of chatter about Advanced Persistent Threats (APTs).  APT is a sexy acronym to be sure, but let's not get too excited or distracted by this latest catch phrase.  This is nothing new, just a trendy label for an existing threat.


APTs are threat agents, not vulnerabilities or specific attacks.  They are the people who plan and conduct the attacks: people who are focused, talented, and possess significant resources.  APTs conduct directed attacks, with malicious intent, to achieve specific objectives.  That combination represents a powerful and serious threat to any organization.  They sit at the top of the threat-agent pyramid, the elite, few in number but very dangerous to the victims they target.


Before paranoia sets in, keep in mind we deal with all kinds of threat agents: from masterful economic-espionage spies planning long-term orchestrated attacks to steal the crown IP jewels, to the Homer Simpsons who walk out with the same IP on a USB drive in their pocket, only to lose it at the bar while going to the bathroom.  They are all part of our ecosystem and must be accounted for in our continual balancing act of managing security.  APTs are a dangerous threat agent to be sure, but they always have existed and likely always will.


So what should an organization do?  Even if you are a likely target of APTs, don't give up; they have to play on the same field.  They may simply be a more challenging adversary than you are accustomed to.  Every threat agent can be undermined, deterred, or minimized.  Knowing attacker characteristics, such as objectives, motivation, capabilities, and limitations, is important in making good security decisions for each organization.  Think critically and focus on the attacker.  Know the methods they are most likely to employ and determine the right balance of defense-in-depth controls to attain the optimal level of security.


We should never limit our focus to just the elite attackers.  We must all be flexible in understanding the broad threat-agent landscape for our specific environments.  Stratagems effective against one agent may be useless against another.  Do APTs exist?  Yes.  So do Homers, and a whole span in between.  All the relevant threat agents must be addressed.

In this game we play, most of us reading this blog benefit from home-field advantage, playing defense.  We own and manage the field.  However, let us not be consumed nor ultimately comforted by that fact, as "…knowing your adversary is far more important than knowing the condition of the field".  If this is somehow a foreign concept to you, it is time to look up and see who you are playing against, as the opponents may include an 800-pound APT gorilla.  And look closely, as he may be wearing one of your employee badges.


