As I sit here, fresh from a leadership conference for IT employees, I find myself thinking about exactly that: does IT need radical change? After hearing several examples of how people engineered solutions to solve specific problems, and after reviewing projects they had developed over the past year, I can answer with a definite yes. It wasn't this experience alone that pushed me to the realization, but it definitely helped complete the pattern I had noticed in today's IT.

 

I spend most of my time in my normal role investigating and researching emerging and next-generation technologies. With this role have come many headaches from pounding my head against the wall of established processes, procedures, and preconceived notions. But to borrow an idea from Gene Meieran, that is simply the toll I am paying on the road to my success. Still, I look at this and ask a simple question: why?

 

When pushing to adopt a new technology, why do we have to wait until it meets all of our established requirements? Why do we try to make vendors' products adapt to us, rather than considering the possibility of adapting to them? Why does it take us two years to adopt a new operating system or major product? Why do we run projects for 18-24 months to implement a product that exists on the shelf today? Looking at several examples of what people consider successful products today, I try to see what makes them different, attractive, and a must-have. I then ask what it would take to make IT different, attractive, and a must-have for any corporation.

 

Five or six years ago, people came to work and looked to IT to get the latest hardware, OS, and innovations, because we had it here. We spent the dollars and time to solve problems and innovate. But in the last few years, people have adopted technology much faster at home than we do at work. They use an iPhone, a Wii, social networking tools, cloud-based services, and so on. They are enabled at home with more options than we provide as an IT shop. We use instant messaging in IT not because we developed it as a way to eliminate small emails, but because instant messaging was a consumer product that grew so fast that IT had to adopt it. Social networking is doing the same thing. So I wonder: what would it take to get IT back ahead of the curve and become an enabler of new ideas and solutions, rather than an implementer and reinventor of existing technology?

 

We need to get back to the free thinking and innovation that are core to our roots. Companies like Intel were founded on thoughts like the famous quote from Robert Noyce: "Don't be encumbered by the past, go out and do something wonderful." Yet in our day-to-day life I see many encumbered by the past, and I am still waiting for the wonderful. We choose solutions that lean toward one size fits all. Instead of picking the best solutions for the roles that exist, we try to find the one item that can solve all of our problems. Rather than choosing the optimal product for the "one size," we should look at the product that enables the end user to perform optimally.

Imagine if corporations took this approach with their products. Imagine a shoe manufacturer that developed the one-size-fits-all shoe. It would be an open-toed, ¾-shank, athletic-tread, men's size 10, 3-inch-heel sneaker pump. It would meet most of the needs of the shoe-wearing world, but it wouldn't be the right shoe for many, if anyone. So why do we settle for the same model in IT?

We need to be innovative. We need to look at Apple, Google, Nintendo, and others. They didn't just develop products that do what everyone else's products do today; they did them differently, and in many cases better. What does it take to make your part of IT the next iPod, iPhone, or Wii? How can we enable our partners to perform optimally? What does it take to just go out and do something, without worrying about how many existing committees, review boards, processes, and groups have to be engaged to get it going?

The answer is radical change. We need to change how we work. We need to change the level of control we have today. We need to shrink what we try to manage. We need to strive to enable our partners rather than totally control their work life. We need to ask "so what?" every once in a while. When someone says "if we do A, then B might happen," ask the question: so what?
We spend all this time on the day-to-day, moving from spot to spot, never questioning the resources, costs, and effort put into the status quo. When we try to implement something new, it goes under the microscope and is quite often held to a different standard than existing solutions. Requirements become a never-ending monster of growth, instead of the simple point-by-point items they should be; many times the solutions themselves are actually listed as the requirements. So I challenge us all to start a process of radical change. Start asking the question "so what?" Start pushing back on the status quo, quit being encumbered, and start a process of innovation. Help your partners perform optimally and be a key part of their success, rather than just one of their suppliers. It won't be easy, and it won't always be fun, but it will be rewarding.

Intel Information Technology and MIT are studying web-based social media as a tool for understanding collective intelligence and distributed decision-making. The question we have posed is: "What are good ways to balance the potential productivity advantages of open collaborative computing versus the data security needs of the organization?"

 

In order to maximize our analysis of the discussion results and the MIT Deliberatorium tool, we are extending the deadline to October 6, 2008.

 

Please take five minutes to add one new post to the discussion and to rate one other person's contribution. This will enable us to gather a wider range of opinions on the topic and further investigate the value of the approach.

 

 

Add a Post

Click on the "add" button located to the right of any item in the map. A dialog box will pop up. Enter your post and then click "submit" to save your entry.

Rate a Contribution

Click on any item in the map that you want to rate. A description of the item will appear in the right panel. Click on one of the stars to select your rating.

 

Top contributors to the discussion will be recognized when we post our final project results.

 

 

For more information, watch the ten-minute video clip for a concise overview. Contact Catherine.Spence@intel.com with any questions.

 

 

Thank you for your continued participation!

I'm sure you've already seen the press on the new Xeon 7400 series processors. It is a really exciting time, seeing six-core processors come out. As an engineer who supports enterprise applications and technologies, I expect this to provide a lot of extra power to apps that were CPU bound. One such technology is virtualization and Microsoft's Hyper-V. Previously, Hyper-V supported up to 16 logical processors and 128 running virtual machines; Microsoft just released an update that raises those limits to 24 logical processors and 192 running virtual machines! WOW, I can't wait to see that in action. This should definitely help organizations that need to limit the physical footprint of their servers through consolidation.

 

This is a great example of two companies combining their technology in ways that really benefit the customer. Very exciting times...I can't wait for even more cores!!!

It's great server weather here in New Mexico today. The current temp is 74°F, the high will be ~82°F, the low was 57°F, and the humidity will be ~30% all day. These are all "in range" of the air a densely packed server needs to breathe. Depending on how many servers are packed into a rack, they can heat this 74°F air by as much as 50°F. Removing that heat from the server exhaust air consumes energy and expensive equipment. A key trade-off in data center design is that density reduces the unit cost of deploying and operating servers for most inputs, but increases the cost of dealing with high-density heat. If we can find ways to address this heat with less energy and cost, then we can be more economical, and, in today's words, more green.
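For a rough sense of scale, the standard HVAC sensible-heat rule of thumb (BTU/hr ≈ 1.08 × CFM × ΔT in °F) shows what heating air by 50°F means in energy terms. The airflow figure below is a hypothetical example, not a measurement from our room:

```python
# Rough sensible-heat estimate for server exhaust air, using the
# standard HVAC rule of thumb: BTU/hr = 1.08 * CFM * delta_T (degF).
# The 1.08 factor folds in air density and specific heat at sea level.

def heat_load_btu_per_hr(airflow_cfm: float, delta_t_f: float) -> float:
    """Heat carried away by an airstream, in BTU per hour."""
    return 1.08 * airflow_cfm * delta_t_f

def btu_per_hr_to_kw(btu_per_hr: float) -> float:
    """Convert BTU/hr to kilowatts (1 kW is about 3412 BTU/hr)."""
    return btu_per_hr / 3412.0

# Hypothetical rack moving 2,000 CFM of air heated by 50F:
load = heat_load_btu_per_hr(2000.0, 50.0)
print(round(btu_per_hr_to_kw(load), 1))  # about 31.7 kW of heat per rack
```

Every kilowatt of that heat has to be removed again, which is exactly the energy the economizer approach below tries to avoid spending.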

 

It was on a day like today, a while back, that Tom Greenbaum, a fellow Intel engineer, and I were brainstorming how to get outside air directly to the servers. Because of Tom's experience with custom air handlers, we were focusing on economizers of the kind normally used for buildings. Economizers are commonly designed into office buildings, homes, and, in a simple fashion, most cars, but they are rarely thought of for data centers. The key concept is that you have access to two air supplies: the outside air and the exhaust air. To be most economical, you want to continually decide "what is the best air to use next?" This takes measuring, deciding, and switching. In your car, you are both the measurement device and the decider as you press the recirculate button or the flow-through button on the dash. In a high-end building air conditioner, this is an outside weather monitor, an inside weather monitor, a simple controller, and some extra ductwork.

 

In our experiment we used a vane controller in a looped air duct that could blend outside air with the exhaust air to get the best air for the servers to inhale at the lowest cost. The accuracy and controllability of our monitors and controllers were not as capable as we wished, but they did the job. The perfect controller would have measurements of outside air and exhaust air for temperature, enthalpy, and dew point. It would have policies that you could modify for best economy based on the requirements of your equipment. For example, in winter it should be able to blend exhaust heat into the outside air to hold the minimum temperature at, say, 55°F and no more; in summer it should be able to start incremental cooling loads as the temperature of the cooler of the two supplies rises above the maximum allowed by your equipment. It would decide to use some exhaust air when the dew point approached the outside temperature, to control air humidity. You get the point: it would condition the air using all available inputs. Our PoC used DX (direct expansion) cooling units, which are usually considered less economical than water-based cooling. But in this mode of operation they worked well and reduced complexity. In addition, they used no water, which is a plus in many desert locations. You can imagine evaporative systems in similar designs that could replace the DX units, or work with them, for even more "economi-zation."
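That "measure, decide, switch" loop can be sketched in a few lines of Python. The 55°F winter floor and the humidity rule come from the description above, but the function interface, the 90°F equipment maximum, and the blend fractions are simplified placeholders, not our actual controller logic:

```python
def choose_damper_blend(outside_f: float, exhaust_f: float,
                        outside_dew_f: float,
                        min_f: float = 55.0, max_f: float = 90.0) -> dict:
    """Decide "what is the best air to use next?" for one control step.

    Returns the fraction of exhaust air to recirculate (0.0 = all
    outside air) and whether mechanical cooling must start.
    """
    if outside_f < min_f:
        # Winter: blend just enough exhaust heat to hold the floor temp.
        frac = (min_f - outside_f) / max(exhaust_f - outside_f, 1e-9)
        return {"exhaust_fraction": min(frac, 1.0), "mech_cooling": False}
    coolest = min(outside_f, exhaust_f)
    if coolest > max_f:
        # Summer: even the cooler supply is too hot; stage in cooling.
        return {"exhaust_fraction": 0.0, "mech_cooling": True}
    if outside_dew_f > outside_f - 5.0:
        # Near-saturated outside air: mix in warm, dry exhaust air to
        # control humidity (the fraction is an arbitrary placeholder).
        return {"exhaust_fraction": 0.25, "mech_cooling": False}
    return {"exhaust_fraction": 0.0, "mech_cooling": False}

# 74F outside, 124F exhaust, 40F dew point: free cooling, outside air only.
print(choose_damper_blend(74.0, 124.0, 40.0))
```

A real controller would of course work from enthalpy as well as dry-bulb temperature and would ramp dampers continuously rather than jump between fixed fractions.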

 

 

In the video you will see dust on the back of the servers. We had filtration on the air intakes and a control system that indicates when the filter needs replacing, but we had a door system that let in dust near the inputs on really windy days. It was a design flaw in our temporary room. Once the dust got into the system, we decided to let it run to see where it went. It collected in the exhaust areas but created very little risk, because most of the time we exhausted the air; that would not have been true in a closed-loop system.

 

 

Don Atwood was able to negotiate for and create a production-capable configuration with enough production servers, dense enough to run a proof of concept (PoC). These servers run high-volume batch computing and are nearly always above 90% utilization, perfect heaters for the job. It is important to note that this PoC depended on the concept of a "Compute Center": the idea that high-density servers can be isolated in their own air space a very short distance from the storage. The storage is left in the classical, and perhaps now more aptly named, data center. Where this concept can be used, it helps free up traditional closed-loop environmental control for the storage systems. If anyone knows of a great economizer controller, please let us know. An Atom-based design would be a plus.

 

 

The temporary "compute center" we established and operated would not have been successful without the help of several great engineers contributing insight and innovation. Tom Greenbaum, Marvin Bailey, Steven Bornfield, Natasha Bothe, Greg Botts, Demetruis Ferguson, Ryan Henderson, Dan Links, and Don Wright all contributed to learning what is possible during our PoC.

 

 

Well, it's 79°F outside now and still great server weather; there are hot air balloons in the sky this morning as we get ready for the Balloon Fiesta here in early October. Hot air balloons, like racks of servers, love this air because they can inhale cool air and then heat it to create work that moves people. And hot air balloons spend very little energy exhausting the resulting heat out the top. It's a simple model, really; it just takes a smart controller.

 

 

Video: http://video.intel.com/?fr_story=2d6e0fbbef76b72c6119cc7fe7889bba20cb5192&rf=bm

 

 

Paper : http://www.intel.com/it/pdf/Reducing_Data_Center_Cost_with_an_Air_Economizer.pdf

 

Hi, I'm Don Atwood, author of the newly released white paper and video discussing our proof of concept (PoC) that tested cooling our data center with outside air. The topic of humidity control, and whether this approach would work in an ultra-humid climate, keeps coming up. Most OEM specs allow for a wide range of humidity, and it's our belief that this cooling methodology could be used almost everywhere globally. Our only uncertainty is around trying this near the ocean, with high levels of salty, corrosive, wet air. We know it would negatively affect the servers at some point, but the question is how quickly, and whether that is within our refresh timetable. During a trip to Asia last week I discussed trying a small-scale "near the ocean" PoC to test this theory. Does anyone think this would add value to your company?

The third and last part of the video series discussing how you can make use of the vPro system defense capabilities the easy way is out. This video shows an example of how your existing security server can implement network quarantine using system defense on provisioned devices, without having to know a thing about AMT.

The video follows the second video, which showed an example of using system defense through the Microsoft SCOM GUI. It shows a proof-of-concept implementation that only requires the security server to write an event into the local Windows event log, which is easily doable in almost any programming or scripting language. Behind the scenes, the SCOM agent installed on the security server intercepts this event and sends a notification to the SCOM server; as a result, the SCOM server implements the blocking policy on the offending host.
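As a rough illustration of how little the security server needs to know, here is a sketch that logs the quarantine request with the built-in Windows `eventcreate` utility, driven from Python. The event ID, source name, and message format are placeholders; they just have to match whatever your SCOM rule is configured to intercept:

```python
import subprocess

def quarantine_event_cmd(offending_host: str) -> list:
    """Build the eventcreate command line that logs a quarantine request.

    Event ID 999 and the "SecuritySrv" source are hypothetical
    placeholders; your SCOM rule must match whatever you use here."""
    return [
        "eventcreate",
        "/T", "WARNING",          # event type
        "/ID", "999",             # event ID the SCOM rule intercepts
        "/L", "APPLICATION",      # log to write into
        "/SO", "SecuritySrv",     # event source
        "/D", f"QUARANTINE {offending_host}",  # payload SCOM parses
    ]

def request_quarantine(offending_host: str) -> None:
    """Write the event; the local SCOM agent picks it up, and the SCOM
    server then applies the system defense policy to the host."""
    subprocess.run(quarantine_event_cmd(offending_host), check=True)

# Example (Windows only): request_quarantine("10.0.0.42")
```

The same one-line event write works from PowerShell, VBScript, or anything else that can shell out, which is exactly the point: the security server never touches AMT directly.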




The beauty of this is that you can now choose any server to collect and correlate your security events and make quarantine decisions, all without that server having to be an AMT management server. The existing AMT manager (SCOM in this example) does the hard work for you.


As before, I hope you find this useful. I would love to hear comments and answer any questions.

Cheers


Omer.

Everyone wants information security to be easy. Wouldn't it be nice if it were simple enough to fit snugly inside a fortune cookie? Well, although I don't try to promote such foolish nonsense, I do on occasion pass on readily digestible nuggets to reinforce security principles and get people thinking how security applies to their environment.

 

Common Sense

I think the key to fortune cookie advice is 'common sense' in the context of security. It must be simple and succinct, and it must make sense to everyone while conveying important security aspects.

 

Here is my Fortune Cookie advice for September:

 

In information security, like in sports, knowing your adversary is far more important than knowing the condition of the field.

 

Information security is an adversarial pursuit. It all begins with threat agents: the people who will negatively affect your organization. Some are malicious, others are not. The key is that they are living, breathing opponents whose motivations drive actions that cause loss. They learn, adapt, and change as they seek their objectives.

 

Know your threats. This is an important first step. Knowing all your vulnerabilities is fine, but secondary in importance.

 

For those who are malicious, understand what they target and the likely methods they will employ. Only then can the list of vulnerabilities be narrowed to the most probable exposures. This prediction gives the security professional focus on what to protect, how best to monitor, and what preparations are necessary to respond when needed.
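To make "narrow the vulnerabilities by threat" concrete, here is a toy scoring sketch; the agents, methods, and likelihood numbers are entirely invented for illustration, and only the ranking technique matters:

```python
# Toy prioritization: weight each of our vulnerabilities by how likely
# known threat agents are to exploit it. Every agent, method, and
# number here is invented for illustration.

threat_methods = {  # agent -> {attack method: estimated likelihood 0..1}
    "worm_author": {"unpatched_service": 0.8, "weak_password": 0.2},
    "insider":     {"excess_privilege": 0.7, "weak_password": 0.4},
}

our_vulnerabilities = {"unpatched_service", "weak_password"}

def exposure_scores(methods, vulns):
    """Sum agent likelihoods over only the vulnerabilities we actually
    have, returning the most probable exposures first."""
    scores = {}
    for agent_methods in methods.values():
        for method, likelihood in agent_methods.items():
            if method in vulns:
                scores[method] = scores.get(method, 0.0) + likelihood
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranked = exposure_scores(threat_methods, our_vulnerabilities)
print([(m, round(s, 2)) for m, s in ranked])
```

Note that `excess_privilege` drops out entirely: a method no agent can use against you is noise, which is the point of knowing your adversary before cataloging the field.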

 

 

 

So am I contributing to the problem of oversimplifying security? Or am I reaching out to those who might not take the inordinate amount of time necessary to understand the complexities and nuances of our industry? You decide, and feel free to share your own knowledge-nuggets.

 

 

Fortune Cookie Security Advice - August 2008

 

 

Fortune Cookie Security Advice - June 2008

 

 

Fortune Cookie Security Advice - May 2008

 

 

Deconstructing Cyber Security Attacks - Threat Model

 

 

Defense in Depth Information Security Strategy

Hi all,

 

We in the vPro Expert Center are looking for folks who would like to do an internship with Intel over the fall session. We have posted a job at www.intel.com/jobs; search for requisition #559370.

 

Here's the brief description:

The internship is based in Folsom, CA or Hillsboro, OR, and is focused on social media related to the Intel social media sites at http://communities.intel.com/. In this position, you will primarily focus on the following areas: the blog radio show, the Emerging Compute Model Forum, the vPro Expert Center, and the Server Room. A successful internship would involve regular participation in the community blog sites, creating content and collateral for the various communities, and deep engagement with the radio show. We would expect a successful intern to grasp the details of where Intel is going with the technologies covered by the community sites and to communicate that direction to the outside world.

 

If you have questions email me, blog me, twitter me, etc.. 

Background

 

Traditional corporate information is stored in highly secure repositories within enterprise boundaries. New forms of data are being created in emerging mediums such as blogs and wikis. Cloud computing and network-based systems offer new venues for processing and storage.

 

The Big Question

 

What are good ways to balance the potential productivity advantages of open collaborative computing versus the data security needs of the organization? Consider several examples:

 

  • Does having access to social media make you more productive? Is it secure?

  • Is having access to raw corporate data more productive than secure, specialized tools?

  • Does IT need more control or less? What new tools and/or methods are required to support a more open environment?

 

These questions may never see wide consensus, yet the decision is crucial to every CIO. We will use your feedback in our IT analysis.

 

 

Make Your Opinion Heard

 

 

Become one of the first to test MIT's new collective intelligence tool, the Deliberatorium. Join the fun in two simple steps:

 

 

1. Create your account and log in to the MIT Deliberatorium

 

 

2. Add your perspective to the discussion

 

    • View, rate, or comment on other authors' posts using pros & cons

    • Add your own ideas for new solutions and issues

 

This discussion topic will be available until September 29, 2008, and then results will be shared on Intel.com. Feel free to forward this invite to any interested parties.

 

 

 

It's Cool to Argue

 

 

Research shows that a large group of diverse individuals tends to get the right answer because they bring different perspectives into the discussion. Help Intel and MIT learn more by participating in this web-based argument.

 

For More Information

Watch the ten-minute video clip for a concise overview.
