Every year industry pundits predict the future of computing technologies, investment decisions and resource requirements, nudging the industry in directions that suit their investment criteria, venture endeavors or writing competencies. Many of these predictions are well-known, "foregone" conclusions to those of us who live with these products, people and plant decisions every day. So this year I thought I would throw my hat into the ring and offer some of my own predictions for the next decade. Those of you who know me will not be surprised to be asked to "stretch your imagination" for the coming decade. Whether you are reading this blog for the first time or sometime in 2011, your comments are welcome.

 

Risk taking is a core part of Intel's culture....so here goes, from #10 to #1:

 

DISCLAIMER: All of the views expressed below are my personal predictions for the next decade and do not reflect the public position of Intel Corporation. If I offend anyone, I apologize in advance. I did not consult a committee, discuss in an open forum or otherwise "cleanse" my opinion. Predictions are difficult to make, data is not readily available and all NDA's (personal and professional) have been honored.

 

 

10. The "universal" healthcare momentum in the US will fizzle in bureaucracy but the healthcare investment in technology and efficiencies will continue at an accelerated pace. The resulting momentum will see the development of "Virtual Machine Medical" records and "Care-based" clouds to optimize records transport, reduce IT infrastructure expense and lower liabilities for caregivers. The resultant data impact will lower Big Pharma manufacturing costs, increase profits and reduce FDA approval cycles. These costs will not be absorbed by the wealthy but rather through industry consolidation. Innovation NOT prevention will be core to this evolution.

 

How?: Every person in the US and other developed nations will be required to carry their secure, encrypted health information in two locations: with their primary healthcare professional and on their personal wireless device. (Notice: "Digital" and "Cellular" will be obsolete terms by the end of the decade.)

 

9. The automotive industry will cross the chasm into innovation for the first time in 50 years. Cloud Computing infrastructures and "General Auto" Apps Engines will allow consumers to design their own vehicles through a sophisticated consumer-grade CAD tool. The resulting infrastructure will be linked to supplier databases, union labor will bid to build the designs, and customers will be able to choose between gasoline, diesel or hybrid engines.

 

Why?: It is long overdue for the auto industry to innovate at a speed more akin to other industries. The Asian market opportunity alone will drive massive innovation. Incremental improvements of 5% in MPG, horsepower and leather bucket seats are not enough.

 

 

8. Music will be digitally free and universally available...with an internet connection. Free from DRM (it doesn't work anyway) and free to be transported to any device that consumers are willing to pay for. iTunes, Vevo, Jay-Z TV, Rhapsody, Panorama, JamRocket, Napster and others will foster a decade of song. It will be an exciting and transformative decade of integrated audio and video stimuli for a generation of digital super users worldwide. Regardless of language, the best music will transport its way across the digital clouds to be mixed, re-recorded and consumed billions of times every day. The music industry will survive and thrive as it has for centuries. Music is universal to our existence, and the best have always found a way to rise to the top. Thank you, Susan Boyle!

 

Who cares?: My children, your children and anyone under 25 who does not wish to be saddled by intellectual property rights attorneys who cannot defend inertia or the will of the 21st-century human.

 

7. The internet will be accessed by over 70% of the world's population on at least a monthly basis. Some folks may assume this to be a "not so bold prediction," so let's look at the statistics today. It has taken almost 40 years for the Internet to reach 1.7 billion people worldwide (see the chart below). The expansion of Internet access is a global initiative more important than the Kyoto accord, in this author's opinion (see Prediction #2 for further clarification). Current figures put the world's population on course for 7.67 billion people in 2020; 70% of that number is 5.36 billion people with regular access to and usage of the Internet and Cloud Computing technologies, a greater than 200% increase in adoption within a decade. The internet is the growth market for the technology industry. Internet adoption growth will "slow" in the next decade to slightly over 200%, driven primarily by Asia, Eastern Europe and Latin America.

 

WORLD INTERNET USAGE AND POPULATION STATISTICS

| World Regions | Population (2009 Est.) | Internet Users (Dec. 31, 2000) | Internet Users (Latest Data) | Penetration (% Population) | Growth 2000-2009 | Users % of Table |
|---|---|---|---|---|---|---|
| Africa | 991,002,342 | 4,514,400 | 67,371,700 | 6.8 % | 1,392.4 % | 3.9 % |
| Asia | 3,808,070,503 | 114,304,000 | 738,257,230 | 19.4 % | 545.9 % | 42.6 % |
| Europe | 803,850,858 | 105,096,093 | 418,029,796 | 52.0 % | 297.8 % | 24.1 % |
| Middle East | 202,687,005 | 3,284,800 | 57,425,046 | 28.3 % | 1,648.2 % | 3.3 % |
| North America | 340,831,831 | 108,096,800 | 252,908,000 | 74.2 % | 134.0 % | 14.6 % |
| Latin America/Caribbean | 586,662,468 | 18,068,919 | 179,031,479 | 30.5 % | 890.8 % | 10.3 % |
| Oceania / Australia | 34,700,201 | 7,620,480 | 20,970,490 | 60.4 % | 175.2 % | 1.2 % |
| WORLD TOTAL | 6,767,805,208 | 360,985,492 | 1,733,993,741 | 25.6 % | 380.3 % | 100.0 % |

 

Pasted from <http://www.internetworldstats.com/stats.htm>
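
As a quick sanity check on the arithmetic behind this prediction, here is a minimal sketch. The 2020 population estimate and the 70% adoption target are the figures quoted above (not official projections), and the 2009 baseline comes from the world-total row of the table.

```python
# Rough arithmetic behind Prediction #7, using the world-total figures above.
# Assumptions: 7.67 billion people in 2020 and a 70% adoption target, both
# taken from the prediction text; the 2009 baseline is from the table.

users_2009 = 1_733_993_741        # world total Internet users, latest data above
population_2020 = 7.67e9          # projected world population for 2020 (author's figure)
adoption_target = 0.70            # predicted share of population online at least monthly

users_2020 = population_2020 * adoption_target
growth = (users_2020 - users_2009) / users_2009

print(f"Implied users in 2020: {users_2020 / 1e9:.2f} billion")   # ~5.37 billion
print(f"Implied adoption growth 2009-2020: {growth:.0%}")          # ~210%, i.e. "slightly over 200%"
```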

 

As usual, there will be "super users" or "power users" within this group of over 5 billion who will require constant access to internet technologies, cloud computing, wireless networks and communication tools. Availability will be critical to serve these users, as will the margins required to maintain network availability at the scale of the less frequent users.

 

So what?: Have you ever attempted to architect a network to manage 500 million users on a wireless multi-national network? Cloud Computing, virtualization, security and mobile/remote energy infrastructures are requirements. Standards, scalable databases and worldwide data management practices are also required. Network availability, network scale and network responsiveness are key. The world is transitioning to digital infrastructures at a rate faster than all previous predictions...this next decade will be wildly exciting.

 

6. Governments worldwide will become the largest Cloud Computing infrastructures for human services. Economies of scale in managing governments worldwide dictate a rapid transition to paperless infrastructure. In addition, as governments worldwide clamor for more resources to provide more services (Joseph Schumpeter and Adam Smith may believe the market dictates innovation, but they have yet to meet a government regulator who shares this view), their infrastructure will become increasingly dependent upon telecommunication networks to deliver the "Post" and less upon written notices or edicts. This will reduce fossil fuel consumption for letter-carrying quasi-government entities worldwide, while easing the escalating cost structures of governmental agencies. It will not reduce corruption, largesse and invasive government practices on a global basis, but it will be the first phase of governmental efficiency through technology.

 

Duh? Factor: The US government's decision to become a large-scale deployment of Cloud Computing technologies is not an easy effort. While we have a government identification system within the US, we have done a singularly terrible job of using the social security and state identification systems to their fullest benefit for the citizens who pay the taxes for these endeavors. Japan and the UK are ahead of the US here in my opinion, but are far from "maximizing efficiencies." This next decade will be critical for governments to identify their strengths and maximize efficiencies, or in general lose their singular ability to fund these activities within the developed nations that face the most complex issues of the next 10 years. Simple access to the government services you already pay for to protect your life, liberty, property and fidelity is not a privilege; it is a service, and it is their chosen profession. Access to government services is a 24x7x365 endeavor, because there is not a single nation in the world that sleeps at the same time.

 

5. The next decade will see the emergence of a software and Cloud Computing "super power" outside of the United States. I am not predicting a Chinese leader here; I am predicting the emergence of a company that does not exist today, that leads its given market segment, that eclipses Google's 2009 revenue (approx. $23B), that does not sell out to one of the current "majors" (MSFT, GOOG, ORCL, SAP, IBM, BIDU), and that develops a programming language easing the adoption of internet technologies for 1 billion users (not emerging markets, emerging users).

 

How can I invest? Get in line. The best part of the technology industry is that entrepreneurs emerge; the best and brightest find a way to be heard. It is very rare that nationality barriers keep strong companies from building strong teams. The question for these young women and men is whether they will maintain the will to become an important part of the technology industry on a world stage. In any case, it will be fun to watch and work with this team.

 

4. The "Core War" will become paramount for silicon technology providers in the Cloud Computing marketplace. The Cloud Computing Center (C3) will be a battle for architectural supremacy that starts at the silicon innovation level. Pricing may be a factor, but innovation will hold the key. Government interventionists or not, the best products will win and 48-Core single socket systems with 32MB cache on-die with optical I/O infrastructures will not be the only innovations required to be a successful provider of silicon technologies in the next decade. Will Intel compete favorably? Of course, it is what we do and why we innovate. We are inherently competitive, paranoid and technically demanding of each other. However, technologies and instrumentation will become equally critical as virtualization, security, energy efficiency and bit integrity require software to function with maximized efficiency to serve maximum scale. Cloud Computing Devices (CCD) will emerge as a core growth market for developers and fortune 100 companies worldwide. These devices will have      higher levels of integrated security, management and performance features which simplify access, upload and scale for the C3 marketplace. Don't believe me? See how well you phone functions in the midst of a traffic jam or natural disaster with limited access to secure networks because your silicon lacks the capability to manage multiple protocol stacks across multiple spectrum. Availability matters, Core design matters, communication integration matters, manufacturing process technology matters and materials science matters.

 

Are you suggesting "Attack of the Cores"? Similar to Star Wars, when designing technology for Cloud Computing we must design for scale across a universe of technologies. In the next decade the term "out of the box experience" will become meaningless. The software that ships on your device at purchase will become obsolete within a year; your carrier, your company or you will upgrade the OS to perform new instructions which are compatible with, but not designed specifically for, the core or cores on which you are running. The security, virtualization, energy efficiency and visualization capabilities of the platform must also work, or the software will be considered niche and unattractive to many.

 

3. Storage technology hits the wall of innovation at 110 MPH. The current storage marketplace has been growing based upon similar interfaces, voltage consumption and I/O standards for over a decade. The explosion of storage requirements for enterprise AND consumer devices will force the most rapid transition in the history of the almost 60-year-old industry: a transition that will reduce the moving parts within ALL storage devices to nearly zero, cut power consumption by up to 70% and increase data array intelligence through QoS, VM management and I/O optimizations. Overall, I am predicting a gross revenue gain of 10% CAGR for the decade on volume increases of 18-22% CAGR for the decade. I predict margin acceleration north of 15% per annum for early innovators, while laggards will see operating expenses eclipse revenue and volume increases due to margin erosion. (Simplified: winners win big, losers get punished.)
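
To make the compounding behind those CAGR figures concrete, here is a minimal sketch. The 10% revenue and 18-22% volume CAGRs are the decade-long figures predicted above; the starting values are normalized placeholders purely for illustration.

```python
# Compound annual growth: value_n = value_0 * (1 + cagr) ** years.
# The CAGRs are the decade-long figures predicted above; base values are
# normalized to 1.0, since only the relative growth matters here.

def grow(base: float, cagr: float, years: int) -> float:
    """Return the value after compounding `base` at `cagr` for `years` years."""
    return base * (1 + cagr) ** years

years = 10
revenue_multiple = grow(1.0, 0.10, years)                       # ~2.6x revenue over the decade
volume_low, volume_high = grow(1.0, 0.18, years), grow(1.0, 0.22, years)

print(f"Revenue after {years} years: {revenue_multiple:.1f}x")
print(f"Volume after {years} years: {volume_low:.1f}x to {volume_high:.1f}x")
# Revenue roughly 2.6x while unit volume grows 5x-7x: average revenue per unit
# falls sharply, which is the margin-erosion squeeze the prediction describes.
```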

 

If you hit a wall at 110 MPH, does it hurt? Yes it does. Seagate, among others, has a major problem with the power consumption of its current architecture. For data center managers and consumers alike, traditional HDD technologies will have to change. Is Flash (NAND) technology the answer? Maybe, if MTBF (mean time between failures) can increase and if the management tools that ensure availability of consumer AND enterprise devices keep pace. However, even the leaders of the new revolution, Samsung and Intel, are not immune to the coming speed bumps. Storage is core to technology: it captures the history of our past, present and future civilization. The next decade will be a transformative one for these storage technologies as we transition into Cloud Computing. I'm not sure if Gene Roddenberry, George Lucas or James Cameron has it right, but...it is going to be fun showing the world and these social scientists (read: movie directors) how to make it work in the real world.

 

2. The dawning of the "Solar Decade" is upon us. While governments debate the importance of CO2 emissions and carbon taxing threatens the "foundations" of business (according to Reuters), the message to business and consumer alike is clear: reduce your carbon consumption or face financial penalties. Does this mean every human on the planet is going to be given a carbon consumption target? No, that's ludicrous, though not beyond the discussion of foolhardy academics. It does mean that as a human race we must examine where, what, when and how we consume unhealthy carbon-emitting goods and services. We must re-examine our power sources and push ourselves to digest more and more renewable energy. Wind is intermittent though definitely renewable; water is consistent and reliable but not widely available on a global basis for hydroelectric; and geothermal is typically remote or not widely available. So this leaves Solar power as the most obvious choice to grow and be consumed at an accelerated rate in the next decade. This decade will see manufacturing process enhancements that accelerate efficiencies beyond any of the other technologies; silicon has delivered the most consistent year-after-year advancements in terms of innovation. The industry will adapt to lower-cost supply, innovate in the cabling, racking and shielding industries, and monetize the global government subsidies. This will be a decade-long process, but one in which we MUST be successful. This is not just about life on this planet; these innovations will be a requirement for us to even begin to discuss viable space travel as a species. In my opinion, the availability of Solar power will become a driving force in building next-generation C3 facilities and powering remote Internet transmission central offices.

 

If Al Gore invented the internet, does this mean Barack Obama invented Solar power? The obvious answer is no. However, Abraham Lincoln didn't invent civil rights either, but as US President his position, legislative leadership and historical foundation left an indelible impression upon the world. The US, Europe and China have the most to gain and the most to contribute to the world's environment through Solar innovation, and incentive will begin here. A final thought on Solar availability: if the Sun doesn't insolate the human race every day, we have much more to worry about as a species than carbon taxes.

 

1. Cloud Computing, the Internet and Intel are inseparable. For the next decade, Intel's focus on manufacturing process technology, core optimization, energy efficiency, virtualization, storage, I/O technologies and visualization places us at the forefront of Cloud Computing. We can no more separate our company from its future than we can from Moore's Law. Do I wish we could compartmentalize our business and technologies? Yes, but unlike the NFL, NBA, FIFA and Major League Baseball, our "franchise" is open to close scrutiny worldwide. While we all have to put a "winning product" on the "field of competition" every year with changing "players/components", the expectations to "win" are very high, and the rewards are equally high for our ecosystem of over 200,000 OEMs, channel members, partners and suppliers. Cloud Computing and the Internet are the growth engines for Intel in the next decade, in my opinion. iPhones? Yes. Blackberry? Yes. Netbooks? Yes. Railway systems? Yes. Medical record consolidation? Yes. WiMAX? Yes. LTE? Yes. Google? Yes. Each and every one of these similar yet seemingly disparate applications drives growth for Intel. Each and every nation that adopts internet technologies around the world provides an opportunity for growth. For networks, computers, devices and the applications that run within them, availability is a critical factor in determining the viability of product strength. For Intel, this will have to be a core part of our deliverables for the next decade.

 

What is Intel's market cap in 2019? Great question. Ask Paul Otellini, our CEO at the end of 2009. Paul probably won't answer; he is too smart to make crazy predictions like that. Andy Grove once asked a room full of my colleagues, "What is the most important question to ask?" to which someone replied, "The first question!" Shaking his head in amazement at the ambitious (there is another term I was thinking of) young man, Andy replied: "The question that leads you to the best answer." The next decade of innovation is going to transform our global economy and technology as we know it today. Intel will play an important role because we have the passion, the persistence, the resources and the competitive desire to be leaders in this future of available computing. I look forward to working with you all to ask the right questions that launch us towards our future.

 

Have a Happy New Decade! The past is behind us, the future is today, leave a better sunrise than the one you were given.

SC09 wrapped up last month, and I was gratified by the energy and enthusiasm of the HPC community. Equally gratifying was seeing the growth of the Intel® Xeon® 5500 processor in the Linpack-benchmark-based Top500* list: 95 entries in the Top500 in just over 8 months.

Part of my job focusing on HPC in Intel's server competitive and performance marketing team is providing our sales force and customers with the best intelligence to make informed decisions. While it might be easy to make a buying decision on who has the highest Linpack score alone, I think most of the users out there don't buy their systems to run Linpack. Imagine having made a buying decision on Linpack alone during the transition from Harpertown (Xeon 5400) to Nehalem-EP (Xeon 5500). Both processors have roughly the same Linpack scores (~91.7 for the X5570 and ~92.5 for the X5470), yet when you compare their delivered application performance, the Xeon 5500 delivers somewhere between ~2X-3X on a variety of HPC applications. The key difference here is the vast bandwidth improvement in the Nehalem architecture. Other users might look at memory bandwidth alone (as measured by the STREAM benchmark), yet cores can only consume so much bandwidth before you become compute bound again. That's why you often hear one word in HPC, and it is our key HPC focus here at Intel: balance. Not only balance of raw speeds and feeds, but also power and cost/TCO.
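
As a rough illustration of why neither Linpack (compute) nor STREAM (bandwidth) alone tells the whole story, here is a minimal roofline-style sketch. The peak FLOPS and bandwidth numbers are placeholder assumptions, not measured figures for any particular Xeon.

```python
# Roofline-style back-of-the-envelope: a kernel's attainable performance is
# limited by either peak compute or (arithmetic intensity x memory bandwidth),
# whichever is smaller. The peak numbers below are illustrative placeholders.

PEAK_GFLOPS = 90.0      # assumed double-precision peak for a two-socket node
PEAK_BW_GBS = 30.0      # assumed sustained memory bandwidth (STREAM-like)

def attainable_gflops(flops_per_byte: float) -> float:
    """Attainable GFLOP/s for a kernel with the given arithmetic intensity."""
    return min(PEAK_GFLOPS, flops_per_byte * PEAK_BW_GBS)

for name, intensity in [("STREAM triad (~0.08 flops/byte)", 0.08),
                        ("sparse matrix-vector (~0.25)", 0.25),
                        ("dense Linpack-like (~10)", 10.0)]:
    g = attainable_gflops(intensity)
    bound = "bandwidth-bound" if g < PEAK_GFLOPS else "compute-bound"
    print(f"{name:35s} -> {g:6.1f} GFLOP/s ({bound})")
```

Real applications sit all along this spectrum, which is why a balanced platform, judged against your own workload rather than any single headline number, matters most.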

That’s why it was gratifying attending a standing room only Birds-of-a-Feather session during SC09, Benchmark Suite Construction for Multicore and Accelerator Architectures (slides),  facilitated by Kevin Skadron from the University of Virginia.  It was interesting to hear how current benchmarks are either meeting or not meeting user needs or how a particular benchmark might or might not mirror the “real” workload of a user.   I am looking forward to the dialogue and discussions out this.

In the coming year, the architecture choices for HPC users range from x86 to RISC to GPU. Users should look beyond the "whiz-bang" numbers that all of us marketers use to get your attention and focus on the only benchmark that matters: your own application.

 

*Other names and brands may be claimed as the property of others

Intel announced the Intel Cloud Builder Program on Oct 29, 2009, to help early cloud adopters who are faced with questions such as "Where do I start?", "How do I implement cloud infrastructure?" and "What do I need?" I have spoken to several customers facing this problem and have collected a myriad of questions for which they are looking for answers.

 

Some of these questions include:

 

  • What are the key management tools available?
  • What hardware configuration should I use?
  • What are some BKMs that I need to follow?
  • How do I integrate my management interface with cloud management tools?
  • How do I move my applications to the cloud?

 

 

This program has generated an outstanding response from the industry, and we now have several software vendors (Canonical, Citrix, Microsoft, Parallels, Red Hat, Univa, VMware and open source Xen.org) that have joined the program, with many more in the process of joining. Several experts from Intel, including teams that work with end users, software enabling, the data center group and cloud strategists, are contributing to this program (Thanks Team!!!). It has been a lot of hard work, and we have been burning the midnight oil to put the lab infrastructure in place, working with the ISVs to define the use cases, requirements and other technical details. In fact, we already have some of the software installed in our Intel labs at several locations worldwide (DuPont, WA, and Hillsboro, OR, to name a few). We are also in the process of documenting the test results in the form of whitepapers jointly with these ISVs, which we are targeting to publish starting in Q1 2010.

 

 

Cloud computing is still relatively new, and I anticipate that the Intel Cloud Builder program will help solve some of the aforementioned pain points for cloud service providers, enterprises and hosters. So far, it has been a great learning experience for me running this program and working with different stakeholders, both internal and external, to help shape it. I am certainly looking forward to an exciting new year ahead! I will come back early next year with more details on the program.

 

Stay tuned, and enjoy the holidays!

I just finished reading the update to the Intel IT Data Center Strategy paper in the IT@Intel Community. What stood out to me was that despite being able to reduce the data center count from 150 to 97 in the last two years, Intel IT's strategy is not to focus on data center consolidation as a goal, but rather to treat it as a tactic.


This paper talks about why Intel IT shifted from a strategy of trying to reduce the number of data centers down to a very small number world-wide to optimizing the entire network of data centers to support service levels for the business. Effectively the strategy is to Rationalize each data center individually and then Optimize for efficiency with a variety of strategies.  The results are impressive: A 2.5x increase in data center performance with a 65% reduction in capital costs through virtualization, server refresh, a new HPC solution featuring parallel storage and facility closure/upgrades. 


In the new Intel IT strategy, Data Center consolidation is no longer the goal, but rather a tactic for efficiency while ensuring service optimization. Rationalization and Optimization of all aspects of operations are covered from the compute, network, storage to the facility and more.


Another fact that I learned in reading this paper was how Intel IT divides up their data center types (Design, Office, Manufacturing, Enterprise) and manages them differently.  I was not aware that the Office/Enterprise environment uses only three (of the 97 total) data centers. 


So, for your business and IT organization, Is Consolidation …

 

  • a Goal?
  • a Strategy?
  • a Tactic?

 

Chris

While shopping for Christmas gifts, I noticed that the purchase drivers and purchase process for a train set for my 8-year-old son were, strangely enough, very similar to the ones for a small business server. No, living and breathing in the server world at Intel has not made me crazy. Let me explain...

 

Purchase drivers:
• In the SMB world, the number one driver for a server purchase is “natural upgrade” (caused, for example, by warranty expiration.) That can translate into a transition from a desktop-on-the-side to a real server, or a server refresh. The next driver is “support for new applications” such as a Customer Relationship Management database.
• In my son’s world, it was time to upgrade from a wooden train to a real model train. He also wanted support for a new application: an automated railroad switch.

 

Trusted advisor:

• I talked a few days ago with an Intel Channel Partner about one of their customers. One thing stood out from our conversation. He told me that his customer “is not in the IT business. He’s in the education business.” When it’s time to look for a new server, Intel resellers bring to the table years of experience and sound business recommendations for IT solutions that will give you more time to focus on your area of expertise — your business.
• My son is not a train engineer (yet), so he naturally turned to his closest trusted advisor, somebody with a long history of deploying train sets — his dad!

 

Where to buy:

• It’s no surprise that most small businesses rely on a local Intel reseller when it comes to buying and installing a server. Some of you have already gone a step beyond and rely on your IT partner to host your storage database or manage your IT remotely. Cloud anyone?

• My son used holiday catalogs to research train set options. Then we went to the local toy/model store to get more advice and check out the offers. I have a feeling the relationship with this store is meant to last for several years to come….

 

Financing and ROI:

• In this difficult environment, everyone is re-evaluating spending. The government as well as vendors have put in place some programs to support IT investment. Investing wisely now can help save money, increase productivity and prepare you for the turnaround. Refreshing a 3-year-old server with one based on the Intel® Xeon® processor 5500 series can give you up to 5.2x faster performance (1) and up to 3.7x more energy efficiency (2).  The performance improvement with the entry server based on Intel Xeon 3400 series is significant, too.
• For my son’s train set, I took advantage of a stimulus package funded by his grandparents and uncles. I think his big smile on Christmas day will be proof of a substantial return on their investment.

 

Product choice:

• When you’re purchasing a server, you have some choices to make. Do you want one or two sockets? Is DDR3 or FB-DIMM memory best for you needs? These choices are going to be based on your needs for reliability, compatibility and performance.
• In the model train world the’re choices, too. HO, O, N, or Z scale? Which brand? How did he choose? Quite simply, he chose a size compatible with the 30-year-old trains from his father (yep, he saved them. Don’t ask.), and a renowned and reliable brand. .

 

More headroom for the future:

• So, with the help of your IT advisor, you've made your choice. The good thing about Intel-based servers is that they provide you headroom as your business grows and as your employee and customer bases increase. The Intel Modular Server even gives you the option to add up to six compute modules in one chassis.

• For my son, I see railroad extensions and train cars in his future for many birthdays and Christmases to come.

 

 

The only differences: footprint and noise. With the new Intel Xeon processor 5500 series you can consolidate your infrastructure from nine servers to one (3). Quite a space saving! And Intel-based servers keep getting quieter. I don't think the footprint of my son's railroad can be limited to a simple track, and I can already imagine the roar of the engine and the whistle running through our house.

 

When I think about it, buying a server just might be simpler and faster than buying a train set, especially with the help of a trusted advisor like your local Intel reseller. What do you think?

 

 

This is my last blog of the year. I'm looking forward to hearing from you in 2010.


Happy holidays, happy servers everybody!

 

 

(1) Baseline Configuration and Score on Benchmark: Supermicro* X7SBE system with one Intel® Xeon® processor E3120 (Dual-Core, 3.16 GHz, 6MB L2 cache), EIST Enabled, Hardware Prefetch Enabled, Adjacent Sector Pre-fetch enabled, 8GB memory (4x 2GB DDR2-800 ECC), WDC SE WD1200JS 120G SATA, 7200rpm, SuSE* Linux Enterprise Server 10 SP2 for x86_64, kernel: 2.6.16.60-0.21-smp. Source: Intel internal testing as of October 2008. Scores: 45.95 for SPECint_rate_base2006

New Configuration and Score on Benchmark:  ASUS Z8PE-D12X based server platform with two Intel Xeon processors X5570 2.93GHz, 8MB L3 cache, 6.4GT/s QPI, 24 GB memory (6x4GB PC3-10600R, CL9-9-9, ECC), SUSE Linux Enterprise Server 10 SP2 x86_64 Kernel 2.6.16.60-0.34-smp, Intel C++ Compiler for Linux32 and Linux64 version 11.0 build 20090131. Source:  www.spec.org/cpu2006/results/res2009q1/cpu2006-20090316-06703.html. Score: 241


(2) Baseline Configuration and Score on Benchmark: Supermicro* X7SBE system with one Intel® Xeon® processor E3120 (Dual-Core, 3.16 GHz, 6MB L2 cache), EIST Enabled, Hardware Prefetch Disabled, Adjacent Sector Pre-fetch Disabled, C1E Enabled, 8GB memory (4x 2GB DDR2-800 ECC), WDC SE WD1200JS 120G SATA, 7200rpm, Microsoft* Windows* Server 2003 Enterprise x64 Edition SP2 OS. Source: Intel internal testing as of August 2008. Scores: 529
New Configuration and Score on Benchmark: IBM System x3650 M2* server platform with two Intel Xeon processor X5570, 2.93GHz, 8 GB (4 x 2) memory, Microsoft Windows Server 2008 Enterprise* OS. IBM J9 Java* 6 Runtime Environment JVM.    Result submitted to www.spec.org at 1977 ssj_ops/watt. For additional details see: http://www.spec.org/power_ssj2008/results/res2009q2/power_ssj2008-20090519-00165.html. Score: 1977


(3) Intel estimates as of Nov 2008. 8 month payback is an Intel estimate based on comparing the cost savings achieved in 9:1 server consolidation from both power/cooling and OS licensing versus the estimated cost of purchasing a new server featuring Intel Xeon processor 5500 series. Results have been estimated based on internal Intel analysis and are provided for informational purposes only. Performance comparison using SPECjbb2005 bops (business operations per second). Any difference in system hardware or software design or configuration may affect actual performance. For detailed calculations, configurations and assumptions, see www.intel.com/performance

It is undeniable that cloud computing activities have come to the forefront in the IT industry, to the point that Gartner declares "The levels of hype around cloud computing in the IT industry are deafening, with every vendor expounding its cloud strategy and variations, such as private cloud computing and hybrid approaches, compounding the hype." As such, Gartner has added cloud computing to this year's Hype Cycle report and placed the technology right at the Peak of Inflated Expectations.

 

Michael Sheehan in his GoGrid blog analyzed search trends in Google* Trends as indicators of technologies' mindshare in the industry. Interest in cloud computing seems to appear out of nowhere in 2007 and keeps increasing as of the end of 2009.

 

Also worth noting is the trend for virtualization, one of the foundational technologies for cloud computing. Interest in virtualization increased through 2007 and reached a plateau in 2008. Likewise, its trend in terms of news reference volume has remained constant over the past two years.

 

 

Blue line: Cloud computing

Red line: Grid computing

Orange line: Virtualization

 

 


 

 

 

 

Figure 1. Google Trends graph of search volume index and news reference volume for cloud and grid computing and virtualization.

 

 

Given this information, is cloud computing at its peak of hype, about to fall short of expectations and bound to fall into the trough of disillusionment?  According to Gartner, the goal of this exercise is to separate hype from reality and enable CIOs, CEOs and technology strategists to make accurate business decisions regarding the adoption of a particular technology.


Cloud computing does not stand for a "single malt" technology in the sense that mesh networks, speech recognition or wikis are. Rather, cloud computing represents the confluence of multiple technologies, not least grid computing, virtualization and service orientation. Hence the Gartner Hype Cycle may not be an accurate model for predicting how the technology will evolve and be adopted in the industry.

 

If the Gartner hype cycle theory is to apply to cloud computing, it cannot be applied in isolation. In addition to the three enabler technologies mentioned above, we need to add the Internet, which makes possible the notion of federated computing. From this perspective, what we may be witnessing is actually the Hype Cycle's Slope of Enlightenment. The search volume index for the Internet is shown in Figure 2.

 

 

 


 

 

 

 

Figure 2.  Google Trends Search Volume Index for the Internet.

 

The graph by itself does not look very interesting until we note that it is actually a picture of the Trough of Disillusionment; the time frame shown is simply too short to capture the whole cycle. We can claim that the Peak of Inflated Expectations actually occurred in the years 1994 through 2001, that is, the period of the infamous Internet boom.


Beginning at the end of 2007 we see the convergence of grid, virtualization and services into the cloud, and the Internet infrastructure build-out beginning to pay off. Grid computing moves from niche applications, starting with scientific computing, to technical and engineering computing, to computational finance, and into mainstream enterprise computing. Cloud computing would not be possible without the dark fiber laid out in the 90s.


The technology trigger period is actually much longer than what the Gartner graph suggests.  For a number of watershed technologies there is usually a two or three-decade incubation period before the technology explodes into the public consciousness.


This pattern took place in the radio industry, from the 1901 Marconi experiments transmitting Morse code over radio to the first broadcasts in 1923. For the automotive industry the incubation period spans from the invention of the first self-propelled vehicles in the late 19th century to 1914, with Henry Ford's assembly-line manufacturing and the formation of large-scale supply chains. For the Internet, the incubation period took place during the government internet era, beginning with the creation of ARPANET in 1969, and the trigger came with the commercialization of the internet, marked by the official dissolution of ARPANET in 1990.


The trigger point for a technology is reached when a use case is discovered that makes the technology self-sustaining. For the automobile it was the economies of scale that made the product affordable, and Ford's decision to reinvest profits to increase manufacturing efficiencies and lower prices to spur demand. For the radio industry it was the adoption of the broadcast model supported by commercial advertising. Before that there was no industry to speak of; radio was used on a small scale as an expensive medium for point-to-point communication.


Consistent with the breadth of the technologies involved, the commercial development of the internet proceeded along multiple directions during the speculative period of the 1990s. The Peak of Inflated Expectations saw experimentation with business models, with the vast majority proving to be unsustainable. The speculative wave eventually went bust shortly after 2000.


Hence we’d like to claim that the recent interest in cloud computing, taken in the context of prior developments on grid computing, the service paradigm and virtualization and over the infrastructure provided by the Internet, is actually the slow climb into the Slope of Enlightenment.  Experimentation will continue, and some attempts will still fail.  However the general trend will be toward mainstreaming.  In fact, one of the success metrics predicted for the grid was that the technology becoming so common that no one would think about the grid anymore.  This pattern is already taking place with federated computing and federated storage.


Happy New Year

Posted by K_Lloyd Dec 14, 2009

I am excited for 2010, and a bit misty about 2009 ending.  Not to imply 2009 didn’t have issues, but it was a bang-up ( http://www.answers.com/topic/bang-up ) year for Intel’s server products and technologies.   2009 was a transformational year.  Why?  In a word, Nehalem.  Intel Xeon 5500 was the biggest jump in performance and efficiency in a single processor generation I have ever seen.

 

To put some perspective on this, in the four generations before the Xeon 5500, Intel increased two-socket Xeon performance almost 600%.  To do that required an improvement of about 80% per year.  Not a shabby achievement and it was a good reason to move to the next generation of Intel Xeon servers.

 

Looking at these four years before Nehalem, a cynical person could argue that a lot of Xeon’s performance gain was simply an exercise in adding cores – not that core addition is simple.  And truly, a fair chunk of that 80% can be attributed to more cores per processor.  What is interesting, and profound, is Nehalem’s leap in performance.  With the same number of cores as the Xeon 5400, operating at a slower clock speed and with less processor cache, the Xeon 5500 delivered a jump of about 2.5 times over the 5400.  This established an Intel lead in two socket performance and efficiency.

 

Re-reading what I just wrote, it sounds a bit like a puff piece, but I really mean it: Nehalem made 2009 an incredible year. Intel had its challenges this last decade, but delivery of server products and technology in 2009 was not one of them. In 2009, Xeon rocked.

 

Now back to my excitement about 2010. Take all that goodness of the Xeon 5500 in two sockets, inject the silicon equivalent of steroids, and you get a sense for the Nehalem-EX four-plus-socket processor. This monster is positioned to change forever what it means to be a "high end Xeon server". With up to eight cores per socket, and designs of four and eight sockets (that would be 64 Nehalem cores, 128 threads), there are not very many jobs in the enterprise that won't fit on one of these platforms. The addition of mission-critical reliability features hardens this platform to a level never before seen in the x86 market. This is a machine that can do it all: scale to the biggest enterprise jobs, with reliability features for mission-critical applications. 2010 should be very interesting indeed.

 

Happy New Year!

"I want to move from RISC, BUT I think it is too hard, too risky, too expensive." These are some of the common, valid concerns that I hear frequently from customers when discussing moving a solution from a RISC architecture to Intel architecture.

 

I think the IT industry is at a very interesting inflection point in terms of IT buying decisions on RISC architectures. There is a big challenge out there for CIOs to deliver increased value to the business in a reduced budget environment. The IT budget trends of '09 are likely to continue, and a spring back to the IT budget levels that preceded the economic crisis is highly unlikely in my opinion. One of the options we would like to suggest to help CIOs deliver increased value at a fraction of the hardware system cost is to take advantage of the economic value proposition offered by Intel-based systems. I have written previously that the Nehalem architecture is also a big inflection point, in that we are adding capabilities to the Xeon product line that previously you would only expect from a RISC architecture (20+ new advanced RAS capabilities).

 

Moving from one architecture to another does require a careful planning process to ensure that the transition happens smoothly and at no risk to your business. The good news is that there are multiple companies with well-defined and robust processes that focus on making these types of transitions. In addition to having processes, these companies also have experience doing this with a broad range of customers.

 

I wanted to share with you one of the methodologies that is out there. HP has a pretty robust process and approach to these migrations to help you move from RISC architectures to Intel architectures. Briefly described below are the key steps in the process and the key activities at each step.

- Business Analysis - what's the business driver; do the TCO; analyse business processes; business impact of change

- Application Planning - As is analysis; To be analysis; Risk assessment; POC; transition planning

- Pilot & tools customization - current tools identified; re-use/new tools required; conversion needs; interdependencies

- Application transition - conversion; system and integration testing; documentation; acceptance

- Pre-implementation - interfaces; stress testing; performance tuning; installation; roll-out plan; user training

- Implementation - support transitioned application; hardware maintenance support

 

There are two recent webinars which I think will be pretty valuable to you if you are considering migrating some of your applications from a RISC environment to an Intel environment. These two webinars provide a good overview of the two phases to migration.

 

Module 1: Taking the Risk out of RISC migration: Building the Business Case with HP and Intel

URL: http://hpbroadband.com/program.aspx?key=OMKMKLDMDL

 

Module 2: Taking the Risk out of RISC Migration: HP Process, Methodology and Services for Migrating

URL: http://hpbroadband.com/program.aspx?key=NMPPLJDIFE

 

 

 

 

Hopefully Module 2 will be of help to you if you are considering 'how to make the move'. Let me know what you think.

For those of you, or those you know, who are struggling to justify an escape from RISC platforms, here are four example migration scenarios, each achieving a powerful TCO reduction. Please share them with your colleagues and customers to get the migration conversation going.

   

RISC to Intel® Xeon® processor migration: High and fast ROI for SMBs

Oracle on Sun Fire T1000 with Solaris to Oracle on RHEL, Dell T610 with Intel® Xeon® processor X5570, achieving $710,000 savings over three years, 155% ROI, 3 month payback. 

(http://communities.intel.com/docs/DOC-4604)

   

Migrating from Sun RISC to Intel® Xeon® processor: Flatten TCO, achieve unprecedented ROI

Oracle on Sun Fire 15K with Solaris to MySQL on RHEL, Dell T610 with Intel Xeon processor X5570, delivering $3.5M savings over three years, whopping 15,037% ROI, 3 months payback. 

(http://communities.intel.com/docs/DOC-4606)

   

Fast TCO. Quick and painless ROI.

MySQL on Sun T5440 with Solaris to MySQL on RHEL, Dell M710 with Intel Xeon processor X5570, presenting 70% TCO reduction over three years, ROI of 456%.

(http://communities.intel.com/docs/DOC-4603)

   

Oracle On Red Hat Enterprise Linux:

Oracle on Sun T5440 with Solaris to Oracle on RHEL, Dell R900 with Intel Xeon processor 7460, delivering almost $1 million savings over three years, ROI of 992%, 3 month payback. 

(http://communities.intel.com/docs/DOC-4605)
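
For readers who want to sanity-check figures like these against their own environment, here is a minimal sketch of the standard ROI and payback arithmetic. The investment and savings numbers are placeholders, not the figures from the linked studies, and the formulas are the conventional definitions rather than the exact methodology used in those documents.

```python
# Conventional ROI / payback arithmetic for a migration business case.
# All inputs are illustrative placeholders; substitute your own TCO numbers.

def migration_case(investment: float, savings_per_year: float, years: int = 3):
    """Return (net_savings, roi_percent, payback_months) over the period."""
    total_savings = savings_per_year * years
    net = total_savings - investment
    roi = net / investment * 100.0                     # net gain relative to investment
    payback_months = investment / (savings_per_year / 12.0)
    return net, roi, payback_months

# Hypothetical example: $120K migration cost, $300K/year savings, 3-year horizon.
net, roi, payback = migration_case(investment=120_000, savings_per_year=300_000)
print(f"Net savings over 3 years: ${net:,.0f}")        # $780,000
print(f"ROI: {roi:.0f}%")                              # 650%
print(f"Payback: {payback:.1f} months")                # 4.8 months
```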

Register. 

Come talk to Intel, ask questions, and get the latest information on RISC migration and virtualization from your computer, without travelling.

Intel is participating in Red Hat Virtual Experience 2009 (http://www-2.virtualevents365.com/rhexp/about.php) on December 9. This is a virtual tradeshow you can visit through your browser. Intel has a booth in the Exhibit hall and is participating in the Virtual Experience panel discussion, introducing the latest technologies and ideas around virtualization and migration from RISC/UNIX to Intel/RHEL.

 

 

Exhibit: 8:00 AM – 6:00 PM EST

- Come chat with and ask questions of Intel experts on server platforms, virtualization and RISC migration

- Watch videos and download whitepapers and product briefs on Intel server platforms, virtualization and RISC migration

Virtualization Experience panel - Making Impacts in the Datacenter with Server Virtualization: 12:00 PM – 1:00 PM EST

- Intel's Mitch Shults joins other experts from the industry to discuss the impact of virtualization in data centers

 

 

See you at the Intel booth!        


Promises Delivered

Posted by fjjensen Dec 3, 2009

By Frank Jensen, Performance Marketing Engineer

Data Center Group Marketing, Intel

 

 

Intel woke up many years ago and realized that if we didn't keep sharpening our skills and maintaining a laser focus on delivering better experiences for our customers, they would go away. And they started to. So a promise was made (and kept to date) that we would deliver performance that mattered to our customers. This turned into the "tick-tock" model, where we shrink our manufacturing process every other year and introduce a new microarchitecture in the alternate years. You have probably already been reading about the multiprocessor segment (MP, or EX-expandable servers as we call them internally), and you likely have heard rumblings about "Nehalem-EX", but I wanted to let you know what I've seen on the performance side to date.

 

I dug up this old chart from 2007 (originally in a press briefing from 2004) talking about how we forecasted the performance gains expected over the next 5 years or so (and no, we didn't sandbag).

 


 

Figure 1- Source: http://download.intel.com/pressroom/kits/events/idffall_2007/BriefingSmith45nm.pdf

In the multi-processor space, we're seeing the same trend. Our upcoming launch of the next-generation Intel® Xeon® processor (codenamed "Nehalem-EX") is a WOW – even bigger than the 5500 series launch (formerly "Nehalem-EP"). We've already disclosed some details, like delivering greater than nine times the memory bandwidth available to applications over the 7400 series (formerly "Dunnington"), and we also talked publicly about being able to drive three times the number of transactions in database workloads. The recent SuperComputing trade show (SC'09) had more discussion supporting that this is indeed a processor to keep an eye on even for HPC workloads; compute demand continues to be insatiable for researchers.

 


Figure 2 - Source: Intel internal measurements with preliminary Nehalem-EX results.

 

My friends, this 55x result is off the chart from our promises of years ago! Does performance matter? We think so. In so many ways, the quicker a job gets done or the more responsive a server is, the quicker a solution is found or the more satisfied the end customer becomes – improving ROI or allowing more options to be explored in the same amount of time. That's how we hope to fulfill our promise.

 

There are a lot of benchmarks and metrics, some probably useful, some not – but what's important to you? What do you look for when deciding whether to buy a "big iron" computer or a couple of smaller standard ones? Let us know!

Intel, Sun, Federated Media and Techdirt today unveiled IT Innovation, an online community where IT professionals can find tips, discussions, webcasts and other helpful resources. Topics include how to align business and IT strategies, reduce costs, plan for growth and take advantage of innovation. Join the discussion today.
