The Data Stack

8 Posts authored by: jahengeveld

Every once in a while you get to touch a project of incredible scope and vision. The Texas Advanced Computing Center system called Stampede is a really good example. The system was launched in January and was dedicated today in a ceremony in Austin. It contains not only 12,800 Intel® Xeon® E5 processors, but also the first petascale deployment of Intel® Xeon Phi™ coprocessors (6,880 of them, delivering over 7 additional petaflops of peak computational performance). The system was sponsored by the National Science Foundation (NSF) and is the most powerful system in the NSF's Extreme Science and Engineering Discovery Environment (XSEDE). It's one of the top 10 most powerful supercomputers in the world, and one of the most programmable and accessible.

 


 

This system is taking on some of the most interesting challenges in science and engineering in the world. Stampede was intended to take on problems like modeling climate change, predicting earthquakes, studying virus DNA and molecular behavior, modeling hurricanes, and simulating space.

 

A couple of my favorites from early work on Stampede:

 

An assistant professor at MIT (my alma mater) is conducting computational studies on Stampede to explore new ideas for how to manipulate the surfaces of substances to do important tasks that have never been done before, like cleaning the air. Her studies are trying to convert CO2 into usable industrial materials.

 

A team at the University of Texas at Austin is using Stampede to map Antarctica and its ice sheets. Stampede's advanced design helps researchers map the Antarctic terrain by running thousands of simulations of how the earth, water, ice, and wind interact. Scientists can calculate what the underlying terrain must be like to create the surface effects we see.

 

Scientists are using Stampede to develop new methods to quickly pull together massive amounts of data from MRI scans and to combine this data with biophysical models to better represent the full extent of tumor growth in a patient. Their research requires large amounts of complex computation, and Stampede fits the bill exactly.

 

When you put it all together, there are a lot of new applications based on the research activities of thousands of scientists over decades.

 

 

How can it do that? The system has two portions. It has a traditional cluster architecture based on our Xeon E5 processors, and also what the NSF calls its "innovation capability," with Intel Xeon Phi coprocessors at its heart. The power of Stampede is that while the innovation portion of the machine drives performance per watt, performance density, and parallelism to new levels, it does so with a programming model that is completely compatible with the traditional portion of the system. So we get to the point where we can take advantage of the ease of programming of Intel Architecture products combined with the scaling capability of Intel Xeon Phi coprocessors. Research scientists and engineers have developed their algorithms over many years, and code that runs on Intel processors can follow a straightforward process to transition to the higher-core-count, higher-density portion of the system. They get to preserve their intellectual legacy and start to stretch for new insights more quickly.
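To make that compatibility concrete, here is a minimal sketch (my own illustration, not code from Stampede) of how an ordinary threaded loop written for the Xeon E5 nodes can also be pointed at a Xeon Phi coprocessor using the Intel compiler's offload pragma. The function and array names are hypothetical, and I'm assuming the Intel compiler's Language Extensions for Offload; with other compilers, or with no coprocessor present, the pragma is simply ignored and the loop runs on the host as before.

```c
#include <stdio.h>

#define N 1000000

/* Hypothetical kernel: the same OpenMP loop runs threaded on the
 * Xeon E5 host as-is; with the Intel compiler's offload pragma it
 * can also run on a Xeon Phi coprocessor. Without offload support
 * the pragma is ignored and the loop simply runs on the host. */
void scale_and_sum(const float *a, const float *b, float *c, int n)
{
    #pragma offload target(mic:0) in(a, b : length(n)) out(c : length(n))
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        c[i] = 2.0f * a[i] + b[i];
}

int main(void)
{
    static float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 1.0f; }

    scale_and_sum(a, b, c, N);
    printf("c[10] = %f\n", c[10]);   /* expect 21.0 */
    return 0;
}
```

The point of the sketch is the one this post makes: the parallel code itself does not change between the two portions of the system, only where it runs.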

 

I love it when we focus on the real science and applications in our work. HPC matters most when it changes how people will live in the future. It does this when new technology is born, new insights are brought to science, new cures are found for disease, or new energy reserves are discovered.

 

The new Stampede system is going to change the world. That's what we do at Intel. Stampede's future is bright: it is due to be upgraded with next-generation Intel® Xeon Phi™ products when they become available. We are excited to be a part of this vision.

 

Hats off to Jay and the team for a job brilliantly begun.

For five years, Intel® has been playing the role of the in-car navigation system or smartphone. We have been saying to the technical computing industry: "Get in the right-hand lane, get ready to make a right-hand turn -- parallelize your code, thread and vectorize your application, take advantage of the performance we have today, and be ready to take advantage of breakthroughs in the future."

It's time to take the turn.
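For anyone wondering what "thread and vectorize your application" actually looks like, here is a rough, hypothetical sketch of my own (not taken from any customer code): an outer loop split across cores with OpenMP, and an inner unit-stride loop the compiler can vectorize. The function and array names are invented for illustration.

```c
/* Hypothetical example of "thread and vectorize": the outer loop is
 * split across cores by OpenMP, and the inner loop walks memory with
 * unit stride so the compiler can apply SIMD vectorization. */
void smooth_rows(float *restrict out, const float *restrict in,
                 int rows, int cols)
{
    #pragma omp parallel for                 /* threading across cores */
    for (int r = 1; r < rows - 1; r++) {
        const float *up   = in  + (r - 1) * cols;
        const float *mid  = in  + r * cols;
        const float *down = in  + (r + 1) * cols;
        float *dst        = out + r * cols;

        for (int c = 0; c < cols; c++)       /* unit stride: vectorizable */
            dst[c] = 0.25f * up[c] + 0.5f * mid[c] + 0.25f * down[c];
    }
}
```

Code structured this way gets more out of the Xeon processors you already own, and it is the same structure that scales onto many-core parts.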

 

Intel Sr. VP Diane Bryant announced our product line at Supercomputing 12


We are very excited today to announce that the Intel Xeon Phi™ Coprocessor 5110P is now generally available. The Intel Cluster Studio XE 2013 development tools required to use it are ready, the platforms to support it are ready, and the product itself, which has been shipping to a select few for four months or so, is now available to help bring insight to the toughest problems in the industry.

 

It brings over a teraflop of peak performance on a single chip, and it is the architecture behind the #1 system on the Green500. Seven systems using it were deployed and listed in the Top500 in the mere weeks before SC12. Major achievements in performance, performance per watt, and programmability.

 


(NICS Glenn Brook at SC12)

 

Now… I keep cars for a long time. I still have my 1985 Honda Accord. I am a little stubborn about sticking to the path I am on; I like the confidence of knowing how I am going to get where I need to go. I know I would probably be better off with more airbags, a more powerful transmission, heck, even a glove box that works…

 

So I sympathize with those who haven’t made the turn to parallelism…

 

You want to keep the car you know…

 

And that's fine.. as long as you aren't in a race..

 

Today's technical computing is a sleek luxury sports car. Parallelism enables HPC customers not only to open up the throttle, but to use all of its gears…

If you are doing technical computing today, your imperative is to drive to insight faster, gain ground on solutions more accurately, and find the finish line as soon as possible.

 

To compete, you must compute.

 

If you are Audi making cars, or DreamWorks making movies, or Intel making chips, or a researcher making new science… you probably won't win without taking advantage of all your data and using all the compute your systems allow. That requires exploiting parallelism…

 

At the very least, get the best out of the Xeon processor-based workstation or cluster you have… but if you have highly parallel workloads, it's worth considering Intel Xeon Phi products as well. DreamWorks is finding that adding Intel Xeon Phi speeds their movies into production faster. Customers like NICS are saying Intel Xeon Phi is delivering "unparalleled productivity."

 

I get annoyed when my cell phone says, "Please turn around… if possible… you missed a turn."

(Pictured: the Knights Corner chip)

 

Today Intel shared our point of view on the HPC industry with the media. We showed the performance of the Intel Xeon E5 product and talked about how it is impacting the supercomputing industry. We also explained our success in new clusters, like the Purdue Carter cluster.

 

The big news harked back to our accomplishment in 1997, when Intel worked with Sandia Labs to create the first system to sustain one teraflop of double-precision performance. And today, for the first time, Intel showed first silicon from the Knights Corner product. It runs. Better yet, it too delivered 1 teraflop of double-precision performance: in 1997 that took dozens of cabinets; in 2011 it is a single 22nm chip.

 

This is a banner day for Intel and a banner day for the HPC community. Jeff Nichols, a director at Oak Ridge National Laboratory, came on stage to tell customers how their efforts to port code to the Intel MIC architecture have been going. They have ported "millions of lines of code... literally in days" and achieved outstanding productivity on Intel MIC.

 

The MIC architecture products are first and foremost compute nodes. They run an open-source Linux OS, they are networked, and they can run applications. The usage and programming models of HPC are preserved here... not set aside.
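As a rough illustration of what "compute nodes that run Linux and run applications" means in practice (a sketch under my own assumptions, not a vendor recipe): an ordinary OpenMP program can be cross-compiled for the coprocessor with the Intel compiler's -mmic flag, copied over to the card, and launched there like on any other Linux node. Exact compiler flags and copy/launch steps vary by toolchain version.

```c
/* hello_mic.c -- an ordinary OpenMP program, nothing MIC-specific in it.
 *
 * Built for a Xeon host:     icc -openmp hello_mic.c -o hello_host
 * Built natively for MIC:    icc -mmic -openmp hello_mic.c -o hello_mic
 * (OpenMP flag spelling varies by compiler version; the -mmic binary
 *  is then copied to the coprocessor over its network interface and
 *  run there under the card's Linux OS, like on any other node.)
 */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    #pragma omp parallel
    {
        #pragma omp single
        printf("Hello from a node running %d threads\n",
               omp_get_num_threads());
    }
    return 0;
}
```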

 

My favorite analogy for this is the notion that with most accelerators, you have to do the technical equivalent of burning down the Library of Alexandria: you must deny your technical heritage to gain performance. Unlike those accelerators, the Intel MIC solution embraces that technical legacy and makes performance available far more broadly.

 

Intel is changing the technical computing world again... and we were there.

Today, the Top500 list was announced at SC11 in Seattle, and it is fascinating. Published twice a year, this list acts as a scorecard on the industry's direction and health.

 

 

 

Intel is once again the #1 processor architecture on the list, fueling 384 of the 500 entries and providing over 56% of the processing flops on the list. The #1 processor on the list may still be the Xeon Processor 5600 series, but we are very excited to see the first 10 listings of the Xeon Processor E5 family. When those listings are combined with the already-announced petaflop systems (GENCI, LRZ, IFERC), Sandy Bridge is clearly making a huge impact on the supercomputing landscape. While the top 10 systems on the list remain unchanged this cycle, don't expect that to happen again in June 2012.

 

The Top500 list brings to light some good data on initial Xeon Processor E5 performance. The top Rmax (Linpack) score among the 10 systems is 146 gigaflops per socket. This is a record for an IA-based processor, well exceeding the new Interlagos-based systems shown on the list.

 

In other SC11 news, I get stopped everywhere here by people who want to tell me how excited they are about the Intel(r) MIC architecture products. They love the programming model, and they like how rapidly they are able to use their previously developed software on the MIC software development vehicle. They have also mentioned how eager they are to see the future performance of the Knights Corner product.

 

Things are just getting started here. The show floor opens tonight, and the booth looks great. The flight simulator is thrilling, and I can't wait to get this show on the road!

 

Intel HPC, Flight Simulator

I was very excited to host a “megabriefing” for about 100 international press and analysts at IDF 2011 to share some great steps forward we're making in HPC.

 

We were joined by HPC pioneer Prof. David Patterson of Berkeley and Rob Neely from LLNL. Professor Patterson (one of the minds behind the famous "seven dwarfs" of HPC) shared a vision for the use of computation to achieve a breakthrough in understanding the genetics of cancer. Dr. Neely shared his vision of far greater public access to computation at the LLNL open campus.

 

I had the pleasure of providing background on how Intel is making a difference in HPC with our Xeon Processor E5 family and our future MIC coprocessors.

 

We got some great questions from the audience about the programming models and architectures that will help make exascale a reality, and a few questions about how Intel collaborates with our partners. All in all, a very rewarding conversation… catch the video below.


 

I’m really excited about work we are doing with the Texas Advanced Computing Center (TACC) at the University of Texas at Austin and wanted to share some of the details with you. You can also listen to my recent podcast on MIC. (A link to the TACC announcement on this topic is here.)

 

You might remember that Intel first talked about its plans for the Intel Many Integrated Core (Intel MIC) processors last fall at IDF, and our VP Kirk Skaugen gave an update at IDF Beijing earlier this month. The Intel MIC architecture provides optimized hardware performance for data-parallel workloads. We have made a lot of progress on MIC since SC10 (where we demonstrated its impact on making football helmets safer), and we are still accelerating our efforts. But today, I'd like to focus on work we are doing with TACC in the development, porting, and optimization of science and engineering applications for future Intel MIC processors.

 

We recently delivered our "Knights Ferry" software development kit (SDK) to TACC, and they have now started porting some of the more interesting science applications they encounter from all over the HPC spectrum. They have also started creating new science applications optimized for future Intel MIC products. Applications like molecular dynamics and real-time analytics involving massive, irregular data structures like large trees and graphs should all see benefit from this development over time.

 

One of the keys to our approach with MIC is having a consistent programming model and optimization approach between our Intel Xeon processors and our future MIC processors. Due to that shared architecture, developing and adapting applications for MIC is intended to be far more straightforward than alternative architectures. We expect this to ultimately save cycles for developers as compared to porting code to another architecture.

 

Our partner Dr. Jay Boisseau, Director of TACC, has this to say about our collaboration on MIC: "We are excited to be working with Intel to help researchers across the country take full advantage of both future Xeon processors and forthcoming Intel MIC processors to achieve breakthrough scientific results. These powerful technologies will enable our researchers to do larger and more accurate simulations and analyses while using well-established current programming models, enabling them to focus on the science instead of the software."

 

I am often asked how our future MIC architecture products fit into our overall HPC strategy. The Intel MIC architecture, along with our Intel Atom, Intel Core, and Intel Xeon processors, creates a complete portfolio of optimized solutions for a broad set of mainstream HPC workloads. Our future Intel MIC products will enable customers like TACC to draw on decades of x86 code development and optimization techniques to create new science and new discoveries.

 

These are the kinds of applications that will solve some of the world’s biggest research challenges, as well as hopefully lead to great scientific discoveries in the future. THAT is what I love about my job.

So we're in Las Vegas for the Autodesk One Team Conference. I had great conversations with lots of people in this industry. The conversations with Autodesk folks about their vision of "Suites" (with every possible pun on that word used) were really rewarding to hear. Intel workstations make the suite experience... satisfying. We demonstrated some technology previews of our HD Graphics P3000 capability. Our team showed a great demo of AutoCAD and Inventor running on multiple screens using a next-generation Xeon-based entry-level workstation. It looked great, and the team here was rather proud.

 

We also discussed visions for cloud computing. People see a future for cloud computing in technical computing; how to get there remains an ongoing question for the industry.

 

In what seems like a digression... but isn't... I had sushi at a restaurant here where the chef tried to mix Peruvian and Japanese concepts. The original intent of each dish was lost, and what we got was muddled and unpleasant, despite the well-intended mixing of ideas. Combination and editing don't always return a positive result.

 

So I went to see a great play in Portland called "Futura" about a future of cloud computing gone very wrong. In this vision of the future, not only has paper been eliminated (along with pencils and pens and all books... gone are the paper editions of Tolstoy and Twain), but the control of content has been centralized. The "corporation," in the spirit of political correctness, edits the great works of literary history (as recently happened to Huck Finn) to match the whims and will of its leaders. Some great works simply become unavailable.

 

The Intel Cloud 2015 vision of federated, automated, and secure cloud computing is great, but fidelity needs to show up someplace. In a world of blogs and mashups, our nuevo content chefs enable content to be seen and modified, leaving open the possibility that the original Huck Finn will be lost to us... maybe not a big loss, perhaps... but certainly a moment that gives us pause.

 

 

Is that a bad taste in my mouth?

So I admit it... at heart I'm a geek. I love geeky things: high performance computing, workstation applications that do dynamic 3D modeling, HDTV. But I am also a recovering academic... I have studied and taught marketing and strategy for about 10 years. Love that too.

 

Now I spend a great deal of my time travelling around and talking to people about the really cool things they do, and how Intel might help them. So... yep... I'm a wandering geek. Last year: Beijing, Shanghai, Tokyo, all over the US, London, Paris, Hamburg, Barcelona, Munich, etc. Mostly the northern hemisphere... have to work on that.

 

I am thrilled to see the amazing things people are doing with our technology: the guy modeling race cars, the folks making football helmets safer (I met Drew Brees at SC10), the folks who are reinventing medicine, those finding new sources of energy (and under-utilized old ones), weather modeling and forecasting in the Pacific Rim... someday someone will take on snow removal in Boston with HPC... but not today.

 

I was really happy to see Dr. Stephen Wheat get listed as a person to watch in HPC.  I assume HPCwire meant that metaphorically...  While somewhat visually unpretentious, Dr. Wheat is a passionate advocate for his industry, his company and his country.  There are interesting people to FOLLOW in this industry.. I am blessed to work with a cadre of them.

 

Congrats, Dr. Wheat... see you around the world.

Photo credit: Tom Metzer. "A Nice Night in Tokyo." Spargo left, geek right.
