
The Data Stack


By Mike Bursell, Architect and Strategic Planner for the Software Defined Network Division at Intel and the chair of the ETSI Security Workgroup.


Telecom operators are getting very excited about network function virtualization (NFV). The basic premise is that operators get to leverage the virtualization technology created and perfected in the cloud to reduce their own CAPEX and OPEX. So great has been the enthusiasm, in fact, that NFV has crossed over and is now being used to also describe non-telecom deployments of network functions such as firewalls and routers.


A great deal of work and research is going on in areas where a telecom operator’s needs are different to those of other service providers. One such area is security. The ETSI NFV workgroup is a forum where operators, Telecom Equipment Manufacturers (TEMs), and other vendors have been meeting over the past two years to drive a consensus of understanding around the required NFV architecture and infrastructure. Within ETSI NFV, the “SEC” (Security) working group, of which I am the chair, is focusing on various NFV security architectural challenges. So far, the working group has published two documents:



The various issues that they address are worth discussing in more depth, and I plan to write some separate pieces in the future, but they can all be categorized as follows:


  • Host security
  • Infrastructure security
  • Virtualized network function (VNF)/tenant security
  • Trust management
  • Regulatory concerns


Let’s cover those briefly, one by one.


Host security

For NFV, the host is the hypervisor host (an older term is the VMM, or Virtual Machine Monitor), which runs the virtual machines. In the future, hypervisors are unlikely to be the only way of providing virtualization, but the types of security issues they raise are likely to be replicated by other technologies. There are two sub-categories: multi-tenant isolation and host compromise.



Infrastructure security

The host is not the only component of the NFV infrastructure: routers, switches, storage, and other elements also need to be considered. The infrastructure requirements as a whole, including the host and any virtual switches, local storage, and so on, should be treated as part of the overall picture.


VNF/tenant security

This category covers the VNFs and how they are protected from external threats, including from the operator and from each other in a multi-tenant environment.


Trust management

Whenever a new component is incorporated into an NFV deployment, issues of how much – or whether – it should trust the other components arise.  Sometimes trust is simple, and does not need complex processes and technologies, but a number of the use cases which operators are interested in may require significant and complex trust relationships to be created, managed, and destroyed.


Regulatory concerns

In most markets, governments place regulatory constraints on telecom operators before they are allowed to offer services. These constraints may have security implications.


Although the ETSI NFV workgroup didn’t set out to specifically focus on problem areas, these categories have turned out to be useful for generating possible concerns which need to be considered.  In future blogs, I will consider these questions and a number of the possible answers.

By Tim Allen and Prabha Ganapathy


Strata + Hadoop World is coming right up, but for hackers dedicated to humanitarian causes, there’s another big data-related event that’s even more imminent.


AtrocityWatch Hackathon for Humanity


In support of AtrocityWatch, an organization that works to provide an early warning of crimes against humanity through crowd sourcing and big data, Intel, Cloudera, Viral Heat, Amazon AWS, and O’Reilly Media are sponsoring a hackathon to apply data science to the cause of preventing atrocities. The event, AtrocityWatch for Humanity, will focus on using big data and mobile technologies to build a Geo-Fencing app based on a “sentiment” API that monitors social media channels to identify and track potential human rights abuses. The app would help the world spot atrocities before they occur by analyzing social media and raising awareness when atrocities do happen. The AtrocityWatch hackathon will be held 5pm to midnight on Feb. 12, 2015, at Cloudera’s offices, 1001 Page Mill Road, Palo Alto, CA. Registration is free; just bring your laptop. Click here for more information, and be sure to follow and tweet with the #AtrocityWatch hashtag to join the hackathon conversation online.
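As a rough sketch of the geo-fencing idea described above (the coordinates, sentiment threshold, and scores here are hypothetical; a real app would pull scores from the sentiment API and a live social media feed):

```python
import math

# Hypothetical watch region (center lat/lon and radius in km) -- illustrative only.
WATCH_CENTER = (33.51, 36.29)
WATCH_RADIUS_KM = 50.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_posts(posts, sentiment_threshold=-0.5):
    """Return geo-tagged posts inside the watch region whose sentiment
    score (assumed to come from an upstream sentiment API) is strongly
    negative."""
    flagged = []
    for post in posts:
        dist = haversine_km(post["lat"], post["lon"], *WATCH_CENTER)
        if dist <= WATCH_RADIUS_KM and post["sentiment"] <= sentiment_threshold:
            flagged.append(post)
    return flagged

posts = [
    {"text": "reports of violence", "lat": 33.52, "lon": 36.30, "sentiment": -0.9},
    {"text": "nice weather today",  "lat": 33.50, "lon": 36.28, "sentiment": 0.6},
    {"text": "unrest downtown",     "lat": 48.85, "lon": 2.35,  "sentiment": -0.8},
]
print(flag_posts(posts))  # only the first post is both inside the fence and negative
```

The second post is inside the fence but positive, and the third is negative but far outside the region, so neither is flagged.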


Strata + Hadoop World – Make Data Work


Just down the road in San Jose, Strata + Hadoop World, the world’s largest big data and data science conference, will take place Feb 17-20. It’s where leading developers, strategists, analysts and business decision-makers gather to discuss emerging big data techniques and technologies, and where cutting-edge science and new business fundamentals intersect. As an innovator in big data platforms, Intel has queued up a full program of keynote speakers, sponsored sessions, booth demonstrations and other events to showcase its latest advances in big data technologies.


As the focus of big data computing broadens beyond the data center to storage, networking and toward the network’s edge and the cloud, there’s more and more strain placed on a business’s underlying computing infrastructure. To deliver high performance big data solutions requires a flexible, distributed and open computing environment, and Intel is leading innovation into key platform technologies, such as advances in the software-defined infrastructure that can optimize your platform for data intensive workloads.


Here’s your chance to learn more about Intel’s approach to big data platform innovation. Attend the keynote address Intel and the Role of Open Source in Delivering on the Promise of Big Data by Michael Greene (@DadGreene), VP and general manager for Intel Software and Services, System Technologies and Optimizations Group. This presentation, held on Friday, Feb. 20 from 9am-10am in the Keynote Hall, discusses Intel’s vision for a horizontal, reusable and extensible architectural framework for big data.


For more insights into big data architectures, plan to attend From Domain-Specific Solutions to an Open Platform Architecture for Big Data Analytics Based on Hadoop and Spark, a tech session presented by Vin Sharma (@ciphr), Big Data Analytics Strategist at Intel, and Jason Dai, Intel’s Chief Architect of Big Data Technologies, on Thursday, Feb. 19, 11:30am-12:10pm in room 230 B.


Intel experts are also taking part in the following presentations:



Stop by the Intel booth #415 to say hello and to attend one of the ongoing presentations from Intel and its partners at our in-booth theater. We’re looking forward to seeing you, or tweet us at #Intel #StrataHadoop!


Follow Intel’s big data team at @PrabhaGana and keep up with Intel advances in analytics at @TimIntel and #TechTim.  


By Albert Diaz, Intel VP Data Center Group, GM Product Collaboration and Systems Division

When Intel’s Platform Collaboration Solution Division (PCSD) was approached by EMC® and VMware® about collaborating on a best-of-breed hyper-converged infrastructure appliance, we realized that PCSD had the ability to integrate assets across Intel’s product groups and build a compelling solution to meet a growing storage market need. The EMC VSPEX® BLUE hyper-converged infrastructure appliance is more than the convergence of software-defined compute, networking, and storage infrastructure; it is the convergence of great brands. Each company brings its expertise to ensure that the resulting product addresses the many challenges Enterprise IT faces as organizations evolve to meet the real-time workload demands of private/hybrid cloud deployments. VSPEX BLUE gives IT managers what they need, without complicating their lives: the product just works!

Hiding all the complexity from the user is… well, it’s complex, but we knew we were up for the challenge. It is all about ensuring that pluggable, fixed-configuration H/W is synchronized through a common S/W stack. We needed to ensure the product had the memory and I/O bandwidth to meet the demands of the enterprise and mid-market, and what better choice than the Intel® Xeon® Processor E5-2600 Product Family. Putting 8 processors into a dense 2U chassis was made easier by our 20+ years of experience in the server board and system business. The solution includes a modular Intel 10GbE network connection from our Networking Division (ND), giving users the choice of fiber SFP+ or copper RJ45 connectivity and the flexibility to integrate as their cable plant requires. Adding high-performance solid state disks with technology from the Intel Non-Volatile Memory Solutions Group (NSG), and scaling seamlessly from 100 to 400 virtual machines and 250 to 1,000 virtual desktops, with the goal of getting customers up and running in 15 minutes, was super challenging from a H/W integration perspective.

From the initial requirements discussion with EMC and VMware through final production release, we always kept the design goal of SIMPLICITY in mind.   Installation and management needed to be fully orchestrated.   Patching and upgrading needed to be intuitive.  Very importantly, the EMC VSPEX BLUE appliance had to easily grow and contract based on business needs in order to offer mid-market enterprise customers the fastest, lowest-risk path to new application and technology adoption.

I want to be sure to mention that our team enjoyed working on the VSPEX BLUE project.  Storage has reached an important inflection point.  Delivering truly converged solutions that have great brands doing the validation and integration together makes successful deployments of private/hybrid clouds predictable. IT Directors want proven configurations that enable the businesses they support to go from idea to solution without incurring the risk normally associated with new cloud deployments.  Our team is proud to have been part of the creation of a product that is simple to manage and simple to scale, so that IT Directors can invest their valuable resources elsewhere, because I know their lives are complex enough!

In January, Chip Chat continued archiving OpenStack Summit podcasts. We’ve got episodes covering enterprise deployments for OpenStack and key concerns regarding security and trust, as well as software as a service and utilizing OpenStack to streamline compute, network and storage. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:

  • Commercial OpenStack for Enterprises – Intel® Chip Chat episode 362: In this archive of a livecast from the OpenStack Summit, Boris Renski (twitter.com/zer0tweets), the co-founder and CMO of Mirantis, stops by to talk about the OpenStack ecosystem and the company’s Mirantis OpenStack distribution. Enterprises are now in the adoption phase for OpenStack, with one particular use case standing out for Boris – OpenStack as a data center wide Web server. For more information, visit www.mirantis.com.
  • OpenStack Maturity and Development – Intel® Chip Chat episode 363: In this archive of a livecast from the OpenStack Summit, Krish Raghuram, the Enterprise Marketing Manager in the Open Source Technology Center at Intel, stops by to talk about working with developers directly to get technologies quickly proven and tested, as well as Intel’s investment and work as an OpenStack Platinum member, the need for developing cloud-aware/stateless apps, and utilizing OpenStack to cut operational and capital expense costs. For more information, visit https://software.intel.com/en-us/articles/open-source-openstack.
  • OpenStack and SaaS Deployments – Intel® Chip Chat episode 364: In this archive of a livecast from the OpenStack Summit, Carmine Rimi, the Director of Cloud Engineering at Workday, stops by to talk about the evolution of software as a service, as well as scalability and reliability of apps in a cloud environment. Workday deploys various finance and HR apps for enterprises, government and education and is moving its infrastructure onto OpenStack to deploy software-defined compute, networking, and storage. For more information, visit www.workday.com.
  • OpenStack and Service Assurance for Enterprises – Intel® Chip Chat episode 365: In this archive of a livecast, Kamesh Pemmaraju (www.twitter.com/kpemmaraju), a Sr. Product Manager for OpenStack Solutions at Dell, stops by to talk about a few acute needs when deploying OpenStack for enterprises: Security, trust and SLAs and how enterprises can make sure their workloads are running in a trusted environment via the company’s work with Red Hat and Intel® Service Assurance Administrator. He also discusses the OpenStack maturity roadmap including the upgrade path, networking transitions, and ease of deployment. For more information, visit www.dell.com/learn/us/en/04/solutions/openstack.

I recently got the opportunity to discuss the security and network optimization applications of Intel QuickAssist Technology with Allyson Klein for her “Chip Chat” podcast. I enjoy listening to Chip Chat, so it was a great experience to be a part of the podcast.


The interview was part of our launch of the new Intel® Xeon® Processor E5-2600 v3 and Intel Communications Chipset 8900 featuring QuickAssist Technology.


QuickAssist Technology provides hardware-assisted compression and cryptography for Xeon-based platforms that allow system manufacturers to implement real-time compression and encryption algorithms with minimal utilization of the CPU. Thus, they get flexible, high-performance encryption and compression while preserving processing cores for revenue-generating applications.
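The QuickAssist API itself isn’t shown here; as a rough stand-in, plain software zlib compression illustrates the kind of CPU work the hardware offloads from the cores:

```python
import time
import zlib

# Software-only compression of the kind QuickAssist offloads to hardware.
# (zlib is a stand-in here -- the point is the CPU time this burns on a
# core, which an offload engine would reclaim for applications.)
payload = b"log line: user=alice action=login status=ok\n" * 50_000

start = time.perf_counter()
compressed = zlib.compress(payload, level=6)
elapsed = time.perf_counter() - start

ratio = len(payload) / len(compressed)
print(f"original: {len(payload)} bytes, compressed: {len(compressed)} bytes")
print(f"ratio: {ratio:.1f}x in {elapsed * 1000:.1f} ms of CPU time")

# Round-trip check: decompression restores the original bytes.
assert zlib.decompress(compressed) == payload
```

On repetitive data like the log lines above, the ratio is dramatic; the milliseconds of CPU time per buffer are exactly what the offload engine gives back.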


On the podcast, I spoke a lot about the security benefits of this technology, ranging from better performance on the ciphers that protect data transmission in 3G and 4G/LTE networks to the use of secure sockets layer (SSL) encryption on a growing range of websites and web services. QuickAssist Technology provides a performance boost and increased efficiency for a wide range of these applications.


But the podcast lasts only 12 minutes, and what I didn’t get to discuss was the growing need for the compression capabilities of QuickAssist Technology in storage and big data analytics applications. We are living through a time of dramatic growth in data: IDC reports that the data created and copied worldwide will double roughly every two years, reaching 44 zettabytes (equivalent to 44 trillion gigabytes) by 2020.** This hits particularly hard with big data and storage applications, and QuickAssist Technology allows system manufacturers to implement real-time compression and encryption algorithms that can keep up with network and CPU performance.
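A quick back-of-envelope check of that projection, assuming the IDC study’s roughly 4.4 ZB 2013 baseline:

```python
# Back-of-envelope check of the IDC projection, assuming the study's
# ~4.4 zettabyte 2013 baseline and doubling roughly every two years.
baseline_zb = 4.4        # 2013 estimate (assumption taken from the IDC study)
years = 2020 - 2013      # 7 years of growth
size_zb = baseline_zb * 2 ** (years / 2)
print(f"projected 2020 size: {size_zb:.0f} ZB")  # ~50 ZB, same order as 44 ZB
```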


Let’s look at how QuickAssist is being used in storage and big data applications:


Storage: Network performance and processor performance have increased dramatically in recent years, but hard disk drive (HDD) read/write performance has not kept pace. Thus more data centers are tiering their storage systems, putting their most active data on solid-state drives (SSDs) and less active data on HDDs, and using compression to get more space at every tier. Intel QuickAssist provides the computing power to compress data in real time, making compression realistic even for the highest-performance storage tiers.
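As a sketch of the tiering-plus-compression idea (the threshold, object names, and use of zlib are illustrative assumptions, not the QuickAssist API):

```python
import zlib

# Illustrative tiering policy: the hottest data goes to SSD, the rest to
# HDD, and everything is compressed before it lands on a tier.
SSD_HOT_THRESHOLD = 100  # reads/day above which an object stays on SSD

def place(objects):
    """Assign each object a tier by access frequency and report the
    compressed size the tier actually has to store."""
    placement = {}
    for name, (reads_per_day, data) in objects.items():
        tier = "ssd" if reads_per_day >= SSD_HOT_THRESHOLD else "hdd"
        placement[name] = (tier, len(zlib.compress(data)))
    return placement

objects = {
    "session-index": (5000, b"A" * 4096),  # hot and highly compressible
    "cold-archive":  (2,    b"B" * 4096),  # rarely read
}
print(place(objects))
# session-index lands on SSD, cold-archive on HDD; both shrink under zlib
```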


Big Data: 2015 is predicted by many to be a big year for big data, driven by the success of Hadoop, the leading big data framework. Many enterprises are running trials with Hadoop to garner useful analytics from their big data and will convert those trials into production systems in the coming year. Early big data projects focused on batch processing of data at rest, but now capabilities are evolving that enable processing and analyzing of streaming data in real-time. This evolution is unlocking a variety of exciting new use cases and applications in healthcare, telecom and capital markets.


Big data applications need compression for a couple of key reasons: Hadoop is designed to run on large data sets residing in compute and storage clusters. Those clusters rely on compression to preserve network bandwidth and storage capacity, and in turn optimize network utilization and system cost. Real-time data compression delivers on these objectives while also improving the run time of end-to-end Hadoop processing. Intel QuickAssist Technology built into storage devices and Hadoop servers will help the next generation of full-scale deployments succeed.
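A back-of-envelope model (all figures are assumptions for illustration) shows why moving compressed data across a cluster can beat raw transfer even after paying the codec cost:

```python
# Rough model of why compression helps a Hadoop data move: total time is
# time on the wire plus (de)compression time, and on a saturated link the
# smaller payload wins. All numbers are illustrative assumptions.
def transfer_seconds(size_gb, link_gbps, compress_ratio=1.0, codec_gbps=None):
    wire_gb = size_gb / compress_ratio
    t = (wire_gb * 8) / link_gbps            # seconds on the wire
    if codec_gbps:                           # compress + decompress cost
        t += 2 * (size_gb * 8) / codec_gbps
    return t

raw = transfer_seconds(100, link_gbps=10)    # 100 GB, no compression
fast_codec = transfer_seconds(100, 10, compress_ratio=2.0, codec_gbps=100)
print(f"uncompressed: {raw:.0f}s, compressed: {fast_codec:.0f}s")
```

With these assumed numbers the compressed path wins (56s vs 80s); a hardware-assisted codec raises the effective codec throughput, widening the gap.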


Compression and encryption are fundamental processes for addressing some of the key security and data-growth challenges of our digital universe. It’s exciting to see the new use cases for these processes that are enabled when QuickAssist Technology is used to drive new levels of performance.



What do you think? Find me on Twitter @jeni_p and let me know.


In December we finished archiving livecast episodes from the Intel Developer Forum with episodes on security and the cloud, next-gen virtualization, cloud-aware apps, silicon photonics and Intel RealSense Technology. We then moved on to archiving OpenStack Summit podcasts. You can check out episodes on Intel’s OpenStack strategy and intelligent orchestration below. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:

  • Bringing Security to the Cloud – Intel® Chip Chat episode 355: In this archive of a livecast from the Intel Developer Forum, Jim Comfort, the GM of Cloud Services in the Global Technology Services Unit at IBM, and Karna Bojjireddy, Lead Security Architect for IBM Cloud Services, stop by to talk about the emerging cloud services market with the company’s acquisition of SoftLayer in 2013. SoftLayer provides cloud IaaS from data centers around the world, offering enterprises the ability to modernize their infrastructure with cloud services. SoftLayer recently announced that it is bringing Intel® Trusted Execution Technology and Trusted Platform Modules to its service, enabling enterprises to maintain security and control over their applications. For more information, visit www.softlayer.com.
  • Hot Topics in Virtualization and Networking – Intel® Chip Chat episode 356: In this archive of a livecast from the Intel Developer Forum, Scott Lowe (@scott_lowe), an industry blogger and engineering architect at VMware, stops by to talk about recent developments in the virtualization and networking industry. He gives a great overview of software-defined networking (SDN) and network functions virtualization (NFV), why the industry should care, and where we are on the adoption curve. He also discusses a few emerging/popular technologies like network hardware and OS disaggregation, containers and rackscale architecture. For more information, visit www.vmware.com or http://blog.scottlowe.org.
  • The Importance of Building Cloud-Aware Apps – Intel® Chip Chat episode 357: In this archive of a livecast from the Intel Developer Forum, Cathy Spence (@cw_spence), a Principal Engineer with Intel IT, stops by to talk about developing cloud-aware applications with specific characteristics like on demand availability and elastic growth with an overall goal of realizing cost savings. Intel IT runs programs for developers to help practice building cloud-aware apps, and also publishes app design patterns and tracks how many and what types of cloud-aware apps are available. For more information, visit http://bit.ly/ODCACloudAware and http://intel.ly/CloudAware.
  • High-Bandwidth Networks Using Silicon Photonics – Intel® Chip Chat episode 358: In this archive of a livecast from the Intel Developer Forum, Mario Paniccia, an Intel Fellow and GM of Silicon Photonics at Intel and celebrity guest Andy Bechtolsheim, the co-founder of Sun Microsystems and current Founder, Chief Development Officer and Chairman of Arista Networks, stop by to talk about the need for 100 Gbps for large networks with massive aggregate throughputs. The biggest challenge to mainstream deployment is the cost of optics, which has been addressed with Intel Silicon Photonics by marrying optics with the silicon manufacturing process. For more information, visit http://intel.ly/SiliconPhotonics.
  • Intel® RealSense Technology for Natural and Immersive Apps – Intel® Chip Chat episode 359: In this archive of a livecast from the Intel Developer Forum, Ben Kepes, a technology analyst and founder of Diversity Limited, and Eric Mantion, Evangelist for Intel RealSense Technology, chat about the innovative technology. Intel RealSense Snapshot allows for photos with depth, enabling re-focusing after the shot as well as interactive measurements. Intel RealSense 3D Camera sees like humans do, so it can respond to natural movement in three dimensions – you can control devices with a wave or scan your environment in 3D. The use cases for the technology are immense, everything from getting the perfect wedding photo to making sure your newly-purchased sofa will fit in a room to augmented reality schematics for buildings under construction. For more information, visit www.intel.com/realsense.
  • Intel’s Hybrid Cloud Utilizing the OpenStack Platform – Intel® Chip Chat episode 360: In this archive of a livecast from the OpenStack Summit, Ruchi Bhargava (@Ruchi_Bhargava), an engineering manager and hybrid cloud program owner at Intel, stops by to talk about OpenStack and Intel’s ongoing development and deployment of a hybrid cloud using the software. The company started virtualizing services in 2008 and what began as an effort in efficiency has now moved to the automation of enterprise services using cloud computing and a transformed IT work force. For more information, visit http://intel.ly/hybridcloud or www.openstack.org/user-stories/intel/.
  • Evolving Data Center Infrastructure for Intelligent Automation – Intel® Chip Chat episode 361: In this archive of a livecast from the OpenStack Summit, Alex Williams (@alexwilliams), Lead of The New Stack, chats about hot topics at the show: The growth of the OpenStack community and project maturity, container technology, the convergence of big data and private clouds, the evolution of networking and storage for intelligent orchestration, and trust in an open source environment. For more information, visit www.thenewstack.io.

This blog is a summary of a conversation between Uri Elzur, Director of SDN architecture and OpenDaylight Board Member and Chris Buerger, Technologist within Intel’s Software-Defined Networking Division (SDND) marketing team. It outlines the motivation and plans driving Intel’s decision to increase its OpenDaylight Project membership to Platinum.


Chris: Intel has been a member of the OpenDaylight Project since its inception. We are now announcing a significant increase in our membership level to Platinum. Explain the reasoning behind the decision to raise Intel’s investment into ODL.


Uri: At Intel, we have been outlining our vision for Software Defined Infrastructure, or SDI. This vision takes a new approach to developing data center infrastructure, making it more agile and more automated so that it better meets the requirements shaping the data centers of tomorrow. Some of us fondly call the force shaping it ‘cloudification.’


SDI is uniquely meeting customer needs at both the top and the bottom line. Top line refers to greater agility and speed to develop data center scale applications, which in turn allows accelerated revenue generation across a larger number of our customers as well as the introduction of new, cloud-centric business models. At the same time, SDI also uniquely allows for the reduction of total cost of ownership for both service providers and their end-user customers. Service Providers are under intense competitive pressure to reduce cost, be it the cost of a unit of compute or, at a higher level, cost for a unit of application where an application includes compute, network, and storage.


Mapping this back to SDN and OpenDaylight, it is important to Intel to help our customers to quickly and efficiently benefit from this new infrastructure. To do that, we need to support both open and closed source efforts. OpenDaylight represents an open source community that has been very successful in attracting a set of industry contributors and that has also started to attract large end-user customers.


At this point in time, we see our efforts across multiple SDI layers that also include OpenStack and OpenVSwitch in addition to OpenDaylight come together in a coordinated way. This allows us to expose platform capabilities all the way to the top of the SDI stack. For example, by allowing applications to ‘talk back’ to the infrastructure to express their needs and intents, we are leveraging the capabilities of the SDN controller to optimally enable Network Function Virtualization workloads on standard high volume servers. This gives cloud service operators, telecommunication providers and enterprise users superior support for these critical services, including SLA, latency and jitter control, and support for higher bandwidths like 40 and 100 Gigabit Ethernet. Among open source SDN controllers, OpenDaylight has shown healthy growth based on the successful application of open source principles such as meritocracy. We are excited about the opportunities to work with the OpenDaylight community as part of our wider SDI vision.


Chris: As Intel’s representative on the Board of the OpenDaylight Project, what do you envision as the key areas of technical engagement for Intel in 2015?


Uri: Keeping our customer needs and the wider SDI vision in mind, our first priority is to really exercise the pieces that the community has put together in OpenDaylight on standard high volume servers to deliver the benefits of SDN to end-users. We are also going to work with our community partners as well as end-user customers to identify, validate, and enhance workloads that are important to them – i.e. optimize the hardware and software on our platform to better support them. For example, take a look at the work being done in the recently announced OPNFV initiative. We are planning to take use cases from there and help the community optimize the low-level mechanisms that are needed in an SDN controller and further to the


Chris:  The enablement of a vibrant ecosystem of contributors and end-users is critical to the success of open source projects. What role do you see Intel playing in further accelerating the proliferation of ODL?


Uri: We think Intel has a lot to bring to the table in terms of making the ODL community even more successful. Intel has relationships with customers in all of the market segments where an SDN controller will be used. We have also demonstrated our ability to create environments where the industry can test drive cutting edge new technologies before they go to market. For SDI, for example, we created the Intel® Cloud Builders and Intel® Network Builders ecosystem initiatives to not only test the SDN controller, but couple it with a more complete and realistic software stack (SDI stack), a set of particular workloads, and Intel platform enhancements to establish performance, scalability and interoperability best practices for complex data center systems. Bringing this experience to OpenDaylight accelerates the enablement of our SDI vision.


Chris:  Software Defined Networking and Network Function Virtualization capabilities are defined, enabled and commercialized on the basis of a multitude of standards and open source initiatives. How do you see Intel’s ODL engagement fitting within the wider efforts to contribute to SDN- and NFV-driven network transformation?


Uri: Our answer to this question has multiple parts. One change that we have seen over the last few months is a shift in organizations such as ETSI NFV that, while always considering SDN to be reasonably important, never placed much emphasis on the SDN controller. This has changed. The ETSI NFV community has come to terms with the idea that if you want scalability, a rich set of features, automation and service agility, then you need an SDN controller such as OpenDaylight as part of the solution stack. And we believe that ETSI represents a community that wants to use the combination of OpenDaylight, OpenStack and a scalable, high-performing virtual switch on low cost, high volume server platforms.


We have also observed some interesting dynamics between open source and standards developing organizations. What we are witnessing is that open source is becoming the lingua franca, a blueprint of how interested developers demonstrate their ideas to the rest of the industry as well as their customers. Open source promotes interoperability, promotes collaboration between people working together to get to working code and then it is presented to the standard bodies. What excites us about OpenDaylight is that as a project it has also been very successful in working with both OpenStack and OpenVswitch, incorporating standards such as Openflow and OVSDB. Moreover, interesting new work on service chaining and policies is happening in both OpenDaylight as well as OpenStack. And all of these initiatives align with network management modelling schemas coming out of the IETF and TOSCA.


All of these initiatives are creating a working software defined infrastructure that is automated and that helps to achieve the top and bottom line objectives we mentioned. OpenDaylight is a central component of Intel’s SDI vision and we are excited about the possibilities that we can achieve together.

In November we continued to archive livecast episodes from the Intel Developer Forum with episodes on software-defined storage, the RedFish spec, and the Intel server SoC product line. We’ve also got an episode from SC14 on recent Intel Lustre storage announcements. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!


Intel® Chip Chat:

  • Software-Defined Storage for Agile Enterprises with EMC – Intel® Chip Chat episode 351: Three interviewees from EMC (Danny Cobb, Steve Sardella and Jason Davidson), stop by to chat about developments in the storage industry. Software-defined storage will allow users to create storage pools across large clusters of servers with different levels of service, while NVM Express over Fabrics will give data center operators the distance and robustness of existing standard (Ethernet, Fiber Channel) while also delivering the efficiency of the NVM Express standard. For more information, visit www.emc.com.
  • The Redfish Spec: A Simple and Standard Management Interface – Intel® Chip Chat episode 352: In this archive of a livecast from the Intel Developer Forum, Billy Cox, the GM of SDI Software Development at Intel, and Jeff Autor, a Distinguished Technologist in the Servers Business Unit at HP, stop by to talk about the new Redfish specification from Intel, HP, Dell and Emerson Network Power, which has recently been submitted to a new forum in the DMTF. Redfish enables devices to be scalable, discoverable, extensible and easy to manage via a simple, script-based programming method. This allows use cases from data center operators to enterprise management consoles to expand data access and analysis. For more information, visit www.redfishspecification.org.
  • Fast and Scalable Storage with Lustre* Software – Intel® Chip Chat episode 353: Brent Gorda, the GM of the High Performance Data Division at Intel, stops by to talk about the Lustre* file system, which operates at extreme scale and efficiency and is popular in the HPC industry. The Lustre team at Intel recently announced the Intel Enterprise Edition for Lustre version 2.2, which added some significant maintenance and management tools, support for the underlying OpenZFS file system, and enhancements to use MapReduce* under Lustre (allowing data to be kept in one place). For more information, visit: www.intel.com/lustre.
  • The Intel System on a Chip Product Line – Intel® Chip Chat episode 354: Nidhi Chappell, the Server SoC Product Marketing Manager at Intel, stops by to talk system on a chip (SoC). The design philosophy is all about integration – IO, network and NIC on the same package as the microprocessor for use in markets including communications, storage and microservers where scalability and density are critical. Intel is using SoCs in Intel® Atom™ processors and recently released details on the upcoming Intel® Xeon® D processor line, the first true Xeon processor-based SoC. For more information, visit http://intel.ly/Xeon-D.

$26,000 awarded to National Center for Women and Information Technology charity


The Coding Illini, a team from NCSA and the University of Illinois at Urbana–Champaign, was declared the winner of the 2014 Intel® Parallel Universe Computing Challenge (PUCC) after a final competition that had plenty of excitement as both the Coding Illini and the Brilliant Dummies met their match with a tough coding round.


The final challenge was more substantial than prior matches and was the only one this year that used Fortran. The larger code was the undoing of both teams, as each made more changes than they were able to debug in their short ten minutes. The Coding Illini added to the drama when their final submission contained an error that appears to have broken the convergence of a key algorithm in the application; their modified application continued iterating until long after the victor was declared and the crowds had dispersed. James Reinders, co-host of the event, suspected both teams were only a few minutes away from success based on their progress, and that if either team had attempted a little less, they could easily have won by posting a working result. The Coding Illini were declared the winner of the match based on the strength of their performance in the trivia round. Based on the Illini’s choice of charitable organization, Intel will award the National Center for Women and Information Technology a donation of $26,000.


The Coding Illini, who were runners-up in the 2013 competition, celebrate the charitable award Intel will make to the National Center for Women and Information Technology on their behalf. The team includes team captain Mike Showerman, Andriy Kot, Omar Padron, Ana Gianaru, Phil Miller, and Simon Garcia de Gonzalo.



James later revealed that all the coding rounds were based on code featured in the new book High Performance Parallelism Pearls (specifically, code from Chapters 5, 9, 19, 28, 8, 24 and 4, in that order; the original programs, effectively the solutions, are available from http://lotsofcores.com). The competition problems were created by minimally changing the programs: deleting some of the pragmas, directives, and keywords associated with the parallel execution of the applications.


Complete Recap


This year’s PUCC at SC14 in New Orleans started with broad global participation: three U.S. teams, two teams each from Asia and Europe, and a Latin American team. In recognition of SC’s 26th anniversary, the teams were playing for a $26,000 prize to be awarded to the winning team’s selected charity.


On the opening night of the SC14 exhibition hall, last year’s winners, the Gaussian Elimination Squad from Germany, who were playing for World Vision, eliminated their first-round opponent, the Invincible Buckeyes from the Ohio Supercomputer Center and the Ohio State University, who were playing for CARE. The German team had a slight lead after the first round, which included SC conference and HPC trivia. Then their masterful performance in the coding round even amazed James Reinders, Intel’s Software Evangelist and the designer of the parallel coding challenge.


In the second match, The Brilliant Dummies from Korea selected the Asia Injury Prevention Foundation as their charity. They faced off against the Linear Scalers from Argonne National Lab, who chose Asha for Education. After round one, the Brilliant Dummies were in the lead with their quick and accurate answers to the trivia questions. Then in round two, the Seoul National University students managed to get the best Intel® Xeon® and Intel® Xeon Phi™ performance with their changes to parallelize the code in the challenge. This performance cemented their lead and sent them on to the next round.


With the first two matches complete, the participants for the initial semi-final round were now identified. The Gaussian Elimination Squad would face The Brilliant Dummies.


Match number three, another preliminary round match, pitted Super Computación y Cálculo Científico (SC3), representing four Latin American countries, against the Coding Illini. The Coding Illini had reached the finals in the 2013 PUCC and were aiming to improve on that performance this year. This was the first year for SC3, who chose to play for the Forum for African Women Educationalists. In a tightly fought match, the Coding Illini came out on top.


In the final preliminary round match, Team Taiji, representing four of the top universities in China, chose the Children and Youth Science Center, China Association for Science and Technology as their charity. They faced the EXAMEN, representing the EXA2CT project in Europe, who were playing for Room to Read. The team from China employed a rarely used strategy by fielding four different contestants across the trivia and coding rounds of the match and held the lead after the first round. Up until the very last seconds of the match it looked as though Taiji might be victorious. However, the EXAMEN submitted a MAKE at the very last second that improved the code performance significantly. That last-second edit proved to be the deciding factor in the victory for the team from Europe.


So the Coding Illini would face the EXAMEN in the other semifinal round.


When the first semifinal match between the Gaussian Elimination Squad and The Brilliant Dummies started, the Germans were pretty confident. After all, they were the defending champions and had performed extraordinarily well in their first match. They built up a slight lead after the trivia round. When the coding round commenced, both teams struggled with what was a fairly difficult coding challenge that Reinders had selected for this match. As he had often reminded the teams, if they were not constrained by the 10-minute time limit, these parallel coding experts could have optimized the code to perform at the same or even better level than the original code had before Reinders deconstructed it for the purposes of the competition. As time ran out, The Brilliant Dummies managed to eke out slightly better performance and thus defeated the defending champions. The Brilliant Dummies would move on to the final round to face the winner of the EXAMEN/Coding Illini semi-final match.


In the other semifinal match, the Coding Illini took on the EXAMEN. At the end of the trivia round, the Coding Illini were in the lead. But as the parallel coding portion of the challenge kicked in, the EXAMEN looked to be the winner…until the Coding Illini submitted multiple MAKE commands at the last second to pull out a victory by just a small margin. They had used the same strategy on the EXAMEN that the EXAMEN had used in their match against Taiji. Coding Illini had once again made it to the final round and set up the final match with The Brilliant Dummies.

SAP TechEd 2014 in Las Vegas was an exciting and enjoyable show, brimming with opportunities to learn about the latest innovations and advances in the SAP ecosystem. Intel had its own highlights, as I explain in this video overview of Intel’s key activities. These included the walk-on appearance of Shannon Poulin, vice president of Intel’s Data Center Group, during SAP President Steve Lucas’s executive keynote. Shannon did his best to upstage the shiny blue Ford Mustang that Steve gave away during the keynote, but that was a hard act to top. Curt Aubley, Intel Data Center Group’s vice president and CTO, took part in an executive summit with Nico Groh, SAP’s data center intelligence project owner, that addressed ongoing Intel and SAP engineering efforts to optimize SAP HANA* power and performance management on Intel® architecture.


I was at the conference filming man-on-the-street interviews with some of Intel’s visiting executives. I had a great conversation with Pauline Nist, general manager of Intel’s Enterprise Software Strategy, on the subject of Cloud: Public, Private, and Hybrid for the Enterprise, and the future of the in-memory data center. I also spoke to Curt Aubley about How Intel is Influencing the Ecosystem Data Center and how sensors and telemetry can provide real-time diagnostics on the health of your data center.


In the Intel booth, we also had the fun of launching our latest animation, Intel and SAP: The Perfect Team for Your Real-Time Business, a light-hearted look at the rich, long-standing alliance between SAP and Intel. In the video, the joint SAP HANA and Intel® Xeon® processor platform has the power of a space rocket—a bit of an exaggeration, perhaps. But SAP HANA is a mighty powerful in-memory database, designed from the ground up for Intel Xeon processors. Dozens of Intel engineers were involved in the development of SAP HANA, working directly with SAP to optimize SAP HANA for Intel architectures.




It’s not too late to catch some of the action from our booth! We filmed a number of our Intel Tech Talks, so click on these links to watch industry experts discussing the latest news and advances in the overlapping orbits of SAP and Intel.



Follow me at @TimIntel and search #TechTim to get the latest on analytics and data center news and trends.

Let’s talk about Fellow travelers at SC14 – companies that Intel is committed to collaborating with in the HPC community. In addition to the end-user demos in the corporate booth, Intel took the opportunity to highlight a few more companies in the channel booth and on the Fellow Traveler tour.


Intel is hosting three different Fellow Traveler tours on Discovery, Innovation, and Vision. A tour guide leads a small group of SC14 attendees through the show floor to visit eight company booths (with a few call-outs to additional Fellow Travelers along the way). Yes, you wear an audio headset to hear your tour guide. And yes, you follow a flag around the show floor. On our 30-minute journey around the floor, my Discovery tour visited (official stops are bolded):

  • Supermicro: Green/power efficient supercomputer installation at the San Diego Supercomputer Center
  • Cycle Computing: Simple and secure cloud HPC solutions
  • ACE Computers: ACE builds customized HPC solutions, and customers include scientific research/national labs/large enterprises. The company’s systems handle everything from chemistry to auto racing and are powered by the Intel Xeon processor E5 v3. Fun fact, the company’s CEO is working on the next EPEAT standard for servers.
  • Kitware: ParaView (co-developed by Los Alamos National Laboratory) is an open-source, multi-platform, extensible application designed for visualizing large data sets.
  • NAG: A non-profit working on numerical analysis theory, they also take on private customers and have worked with Intel for decades on tuning algorithms for modern architecture. NAG’s code library is an industry standard.
  • Colfax: Offering training for parallel programming (over 1,000 trained so far).
  • Iceotope: Liquid cooling experts whose solutions offer better performance per watt than hybrid liquid/air cooling.
  • Huawei: Offering servers, clusters (they’re Intel Cluster Ready certified) and Xeon Phi coprocessor solutions.
  • Obsidian Strategics: Showcasing a high-density Lustre installation.
  • AEON: Offering fast and tailored Lustre storage solutions in a variety of industries including research, scientific computing and entertainment; they are currently architecting a Lustre storage system for the San Diego Supercomputer Center.
  • NetApp: Their booth highlighted NetApp’s storage and data management solutions. A current real-world deployment includes 55PB of NetApp E-Series storage that provides over 1TB/sec to a Lustre file system.
  • Rave Computer: The company showcased the RT1251 flagship workstation, featuring dual Intel Xeon processor E5-2600 series with up to 36 cores and up to 90MB of combined cache. It can also make use of the Intel Xeon Phi co-processor for 3D modeling, visualization, simulation, CAD, CFD, numerical analytics, computational chemistry, computational finance, and digital content creation.
  • RAID Inc: Demo included a SAN for use in big data, running the Intel Enterprise Edition of Lustre with OpenZFS support. RAID’s systems accelerate time to results while lowering costs.
  • SGI: Showcased the SGI ICE X supercomputer, the sixth generation in the product line and the most powerful distributed memory system on the market today. It is powered by the Intel Xeon processor E5 v3 and includes warm water cooling technology.
  • NCAR: Answering the question: how do you refactor an entire climate code? NCAR, in collaboration with the University of Colorado at Boulder, is an Intel Parallel Computing Center aiming to develop tools and knowledge to help improve the performance of CESM, WRF, and MPAS on Intel Xeon and Intel Xeon Phi processors.

Intel Booth – Fellow Traveler Tours depart from the front right counter


After turning in my headset, I decided to check out the Intel Channel Pavilion next to Intel’s corporate booth. The Channel Pavilion has multiple kiosks (so many participants that the lineup rotated halfway through the show), each showcasing a demo with Intel Xeon and/or Xeon Phi processors and highlighting a number of products and technologies. Here’s a quick rundown:

  • Aberdeen: Custom servers and storage featuring Intel Xeon processors
  • Acme Micro: Solutions utilizing the Intel Xeon processor and Intel SSD PCIe cards
  • Advanced Clustering Technologies: Clustered solutions in 2U of space
  • AIC: Alternative storage hierarchy to achieve high bandwidth and low latency via Intel Xeon processors
  • AMAX: Many core HPC solutions featuring Intel Xeon processor E5-2600 v3 and Intel Xeon Phi coprocessors
  • ASA Computers: Einstein@Home uses an Intel Xeon processor based server to search for weak astrophysical signals from spinning neutron stars
  • Atipa Technologies: Featuring servers, clustering solutions, workstations and parallel storage
  • Ciara: The Orion HF 620-G3 featuring the Intel Xeon processor E5-2600 v3
  • Colfax: Colfax Developer Training on efficient parallel programming for Xeon Phi coprocessors
  • Exxact Corporation: Accelerating simulation code up to 3X with custom Intel Xeon Phi coprocessor solutions
  • Koi Computers: Ultra Enterprise Class servers with the Intel Xeon processor E5-2600 v3 and a wide range of networking options
  • Nor-Tech: Featuring a range of HPC clusters/configurations and integrated with Intel, ANSYS, Dassault, Simula, NICE and Altair
  • One Stop Systems: The OSS 3U high density compute accelerator can utilize up to 16 Intel Xeon Phi coprocessors and connect to 1-4 servers


The Intel Channel Pavilion


Once completing the booth tours, I decided to head back to the Intel Parallel Computing Theater to listen to a few more presentations on how companies and organizations are putting these systems into action.


Joseph Lombardo from the National Supercomputing Center for Energy and the Environment (NSCEE) stopped by the theater to talk about the new data center they’ve recently put into action, as well as their use of a data center from Switch Communications. The NSCEE has a couple of challenges: massive computing needs (storage and compute power), time-sensitive projects (those with governmental and environmental significance), and numerous, complex workloads. In their Alzheimer’s research, the NSCEE compares the genomes of Alzheimer’s patients with normal genomes. They worked with Altair and Intel on a system that reduces their runtime from 8 hours to 3 hours, while improving system manageability and extensibility.


Joseph Lombardo from the NSCEE


Then I listened in to Michael Klemm from Intel talking about offloading Python to the Intel Xeon Phi coprocessor. Python is a quick, high-productivity language (packages include iPython, NumPy/SciPy, and Pandas) that can help compose scientific applications. Michael talked through the design principles for the pyMIC offload infrastructure: simple usage, a slim API, fast code, and keeping control in the programmer’s hands.


Michael Klemm from Intel


Wolfgang Gentzsch from UberCloud covered HPC for the masses via cloud computing. Currently more than 90% of an engineer or scientist’s in-house HPC is completed via workstations and 5% via servers. Less than 1% is completed using HPC clouds, which presents a ripe opportunity if challenges like security/privacy/trust, control of data (where and how is your data running?), software licensing, and the transfer of heavy data can be resolved. There are some hefty benefits – pay per use, easily scaling resources up or down, low risk with a specific cloud provider – that may start to entice more users shortly. UberCloud has 19 providers and 50 products currently in their marketplace.


Wolfgang Gentzsch from UberCloud


The Large Hadron Collider is probably tops on my list of places to see before I die, so I was excited to see Niko Neufeld from LHCb CERN talk about their data acquisition/storage challenge. I know, yet another big data problem. But the LHC generates one petabyte of data EVERY DAY. Niko talked through how they’re able to use some sophisticated filtering (via ASICs and FPGAs) to get that down to storing 30PB a year, but that’s still an enormous challenge. The team at CERN is interested in looking at the Intel Omni-Path Architecture to help them move data faster, and then integrating FPGAs with Intel Xeon and Intel Xeon Phi processors to help them shave off even more of the data stored.


Niko Neufeld from LHCb CERN


And finally, the PUCC held matches 4 and 5 today, the last of the initial matches and the first of the playoffs. In the last regular match, Taiji took on the EXAMEN and, in a stunning last-second “make” run, the EXAMEN took it by a score of 4763 to 2900. In the afternoon match, the Brilliant Dummies took on the Gaussian Elimination Squad (the defending champs). It was a hard-fought battle – for many of the questions, both teams had answered before the multiple-choice possibilities were shown to the audience. In the end, the Brilliant Dummies were able to eliminate the defending champions by a score of 5082 to 2082. Congratulations to the Brilliant Dummies; we’ll see you in the final on Thursday.


We’ll see the Brilliant Dummies in the PUCC finals on Thursday

Thursday, November 20, 2014

Dateline:  New Orleans, LA, USA


This morning at 11:00AM (Central time, New Orleans, LA), the second semi-final match of the 2014 Parallel Universe Computing Challenge will take place at the Intel Parallel Theater (Booth 1315) as the Coding Illini team from NCSA and UIUC faces off against the EXAMEN from Europe. The Coding Illini earned their spot in this semi-final match by beating the team from Latin America (SC3), and the EXAMEN earned their slot by beating team Taiji from China.


The winner of this morning’s semi-final match will go on to play the Brilliant Dummies from Korea in the final competition match this afternoon at 1:30PM, live on stage from Intel’s Parallel Universe Theater.


The teams are playing for the grand prize of $26,000 to be donated to a charitable organization of their choice.


Don’t miss the excitement:

  • Match #5 is scheduled at 11:00AM
  • The Final Match is scheduled at 1:30PM


Packed crowd watching the PUCC

Apparently there’s a whole world that exists beyond the SC14 showcase floor…the technical sessions. Intel staffers have been presenting papers (on Lattice Quantum Chromodynamics and Recycled Error Bits), participating in panels (HPC Productivity or Performance) and delivering workshops (covering OpenMP and OpenCL) over the past few days, with a plethora still to come.


To get a flavor for the sessions, I sat in on the ACM Gordon Bell finalist presentation: Petascale High Order Dynamic Rupture Earthquake Simulations on Heterogeneous Supercomputers. It’s one of five papers in the running for the Gordon Bell award and was presented at the conference by Michael Bader from TUM. The team included scientists from TUM, LMU Munich, the Leibniz Supercomputing Center, TACC, the National University of Defense Technology, and Intel. Their paper details the optimization of the seismic software SeisSol on Intel Xeon Phi coprocessor platforms, achieving impressive model complexity in simulating the propagation of seismic waves. The hope is that optimized software and supercomputing can be used to understand the wave movement of earthquakes, eventually anticipating real-world consequences to help adequately prepare for and minimize aftereffects. The Gordon Bell prize will be announced on Thursday, so good luck to the team!


Michael Bader from TUM


From there I headed back to the Intel booth to see how the demos are helping to solve additional real-world problems. First up was the GEOS-5/University of Tennessee team, which deployed a workstation with two Intel Xeon E5 v3 processors and two Intel Xeon Phi coprocessors to run the VisIt app for visual compute analysis and rendering. GEOS-5 simulates climate variability on a wide range of time scales, from near-term to multi-century, helping scientists comprehend atmospheric transport patterns that affect climate change. A real climate model (on a workstation!) that could be used to predict something like the spread and concentration of radiation around the world.


Predicting Climate Change with GEOS-5


Next up was the Ayasdi demo on precision medicine – a data analytics platform running on the Intel Xeon processor E5 v3 and a cluster with Intel True Scale Fabric that looks for similarities in data rather than using specific queries as searches. The demo shows how the shape of data can be employed to find unknown insights in large and complex data sets, something like “usually three hours after this type of surgery there is a fluctuation in vitals across patients.” The goal is to combine new mathematical approaches (topological data analysis, or TDA) with big data to identify biomarkers, drug targets, and potential adverse effects to support more successful patient treatment.



Ayasdi Precision Medicine Demo


Since I’m usually on a plane every couple of weeks, I was excited to talk to the Onera team on how they’re using the elsA simulation software to streamline aerospace engineering. The simulation capabilities of elsA enable reductions in ground-based and in-flight testing requirements. The Onera team optimized elsA to run in a highly scalable environment of an Intel Xeon and Xeon Phi processor based cluster with Intel True Scale fabric and SSDs, allowing for large scale modeling of elsA.


Aerospace Design Demo from Onera


Up last, I headed over to the team at the Texas Advanced Computing Center to talk about their demo combining ray tracing (OSPRay) and computing power (Intel Xeon processor E5 v3) to run computational fluid dynamics simulations and assemble flow data from every pore in the rock in Florida’s Biscayne Bay. Understanding how the aquifer transports water and contaminants is critical to providing safe resources, but eventually the researchers hope to move the flow simulation to the human brain.


TACC Demo in Action


One of the areas in the Intel booth I’d yet to visit was the Community Hub, an area to socialize and collaborate on ideas that can help drive discoveries faster. Inside the Hub, Intel and various third parties are on hand to discuss technology directions, best known methods, future use cases, and more, across a wide variety of technologies and topics. The hope is that attendees will build or expand their network of peers engaged in similar optimization and algorithm development.


One of the community discussions with the highest interest on Tuesday was led by Debra Goldfarb, the Senior Director of Strategy and Pathfinding Technical Computing at Intel. The Hub was packed for a session on encouraging Women in Science and Technology – the stats are pretty dismal and Intel is committed to changing that. The group brainstormed reasons for the gap and how we can begin to address it. A couple of resources for those interested in the topic: www.intel.com/girlsintech and www.womeninhpc.org.uk. Intel also attended the “Women in HPC: Mentorship and Leadership” BOF and will participate in the “Women in HPC” panel on Friday.



Above and below: Women in Science and Technology Community Hub discussion led by Debra Goldfarb





Women in HPC BOF


Community Hub discussions coming up on Wednesday include Fortran & Vectorization, OpenMP, MKL, Data Intensive HPC, Life Sciences and HPC, and HPC and the Oil and Gas industry.


At the other end of the booth, the Intel Parallel Universe Theater was hopping all day. I checked out a presentation from Eldon Walker of the Lerner Research Institute at the Cleveland Clinic, who discussed their 1.2 petabyte mirrored storage system (data center and server room) and their 270 terabytes of Lustre storage, which enable DNA sequence analysis, finite element analysis, natural language processing, image processing and computational fluid dynamics. Dr. Eng Lim Goh from SGI presented the company’s energy-efficient supercomputers, innovative cooling systems, and SGI MineSet for machine learning. And Tim Cutts from the Wellcome Trust Sanger Institute made it through some audio and visual issues to present his topic on working with genomics and the Lustre file system, including how they solved a couple of tricky issues (a denial-of-service issue via samtools and performance issues with concurrent file access).


Eldon Walker, Lerner Research Institute


Dr. Eng Lim Goh, SGI



Tim Cutts, Wellcome Trust Sanger


And lastly, for those following along with the Intel Parallel Universe Computing Challenge – in match two, The Brilliant Dummies from Korea defeated the Linear Scalers from Argonne by a score of 5790 to 3588. And in match three, SC3 (Latin America) fell to the Coding Illini (NCSA and UIUC) with a score of 2359 to 5359, which means both the Brilliant Dummies and Coding Illini move on in the Challenge. Match 4 and 5 will be up on Wednesday. See you in booth 1315!


I felt a little like the lady from the old Mervyn’s commercials chanting, “OPEN, OPEN, OPEN” today while waiting for the Exhibition Gala at SC14. The exhibitor’s showcase is one of the most exciting aspects for Intel – we have a pretty large presence on the floor so we can fully engage and collaborate with the HPC community. But before we delve too deep into the booth activities, I want to step back and talk a little about the opening plenary session from SGI.


Dr. Eng Lim Goh, senior vice president and CTO at SGI, took the stage to talk about the most fundamental of topics: Why HPC Matters. While most of the world thinks of supercomputing as the geekiest of technology (my bus driver asked if I worked on the healthcare.gov site or did some hacking), we as an industry know that much of what is possible today in the world is enabled by HPC in industries as diverse as financial services, advanced/personalized medicine, and manufacturing.


Dr. Goh broke his presentation into a few parts: Basic needs, reducing hardships, commerce, entertainment and profound questions. He then ran through about 25 projects utilizing supercomputing, everything from sequencing and analyzing the wheat genome (7x the size of the human genome!) to checking postage accuracy for the USPS (half a billion pieces of mail sorted every day) to designing/modeling a new swimsuit for Speedo (the one that shattered all those world records in the Beijing Olympics). Dr. Goh was joined on stage by Dr. Piyush Mehrotra from NASA’s Advanced Supercomputing Division, who was there to discuss some of the groundbreaking research that NASA has done in climate modeling and the search for exoplanets (about 4,000 possible planets found so far by the Kepler Mission).


Increasing wheat yield by analyzing the genome


Earthquake simulations can help give advanced warning


The session closed with a call to the industry to make a difference and to remember that it’s great to wow a small group of people to secure funding for supercomputing, but it is also important to, in the simplest terms, “delight the many” when describing why HPC matters.


So why does HPC matter in the oil and gas industry? After Dr. Goh’s presentation, I finally headed into the showcase and to the Intel booth to talk to the folks from DownUnder GeoSolutions. The key to success in the oil and gas industry is minimizing exploration costs while maximizing oil recovery. DownUnder GeoSolutions has invested in modernizing its software, optimizing it to run heterogeneously on Intel Xeon processors and Intel Xeon Phi coprocessors. As a result, its applications are helping process larger models and explore more options in less time. DUG is the marquee demo this year in the Intel booth, showing their software, DUG Insight, running on the full Intel technical computing portfolio, including workstations, Intel Xeon processors and Intel Xeon Phi coprocessors, Lustre, Intel Solid-State Drives and Intel True Scale Fabric.



Above and below: DownUnder GeoSolutions demo



Of course, checking out the DUG demo isn’t the only activity in the Intel booth. There were also a couple of great kick-off theater talks: Jack Dongarra discussed the MAGMA project, which aims to develop a dense linear algebra library and improve performance for coprocessors, and Pierre Lagier from Fujitsu presented The 4 Dimensions of HPC Computing, including a use case for running the elsA CFD software package on Intel Xeon Phi coprocessors and the performance gains they were able to see with some tuning and optimization.


Jack Dongarra on the MAGMA project


Pierre Lagier on elsA CFD


And speaking of optimization, the big draw of the night in the Intel booth was the opening round of the Parallel Universe Computing Challenge, which saw defending champs the Gaussian Elimination Squad from Germany taking on the Invincible Buckeyes from Ohio. After a round of 15 HPC trivia questions (with more points for faster answers), GES was in the lead. During the coding challenge, each team had 10 minutes to take a piece of code from Intel’s James Reinders and speed up its performance on the Xeon processor, the Xeon Phi coprocessor, or both, with 40 Xeon and 244 Xeon Phi threads available on a dual-socket machine. With a monster speedup of 243.008x on Xeon Phi (James admitted he’d only gotten to 189x), the Gaussian Elimination Squad took home the victory by a final score of 5903 to 3510. A well-played match by both teams!


Crowd watching the PUCC


L to R: Gaussian Elimination Squad, James Reinders and Mike Bernhardt


The PUCC continues on Tuesday, along with the Community Hub discussions, theater talks, fellow traveler tours and technical sessions. Stop by the booth (1315) and tell us why you think HPC matters!



SC14 is officially under way; however, the Intel team got in on the action a bit early, and I’m not just talking about the setup for our massive booth (1315 – stop by to see the collaboration hub, theater talks and end-user demos). Brent Gorda, GM of the High Performance Data Division, gave a presentation at HP-CAST on Friday on the Intel Lustre roadmap. A number of other Intel staffers gave presentations ranging from big data to fabrics to exascale computing, as well as a session on the future of the Intel technical computing portfolio. The Intel team also delivered an all-day workshop on OpenMP on the opening day of SC14.


On Sunday, Intel brought together more than 350 members of the community for an HPC Developer Conference at the Marriott Hotel to discuss key topics including high fidelity visualization, parallel programming techniques/software development tools, hardware and system architecture, and Intel Xeon Phi coprocessor programming.


The HPC Developer Conference kicked off with a keynote from Intel’s Bill Magro discussing the evolution of HPC – helping users gain insight and accelerate innovation – and where he thinks the industry is headed:



Beyond what we think of as traditional research supercomputing (weather modeling, genome research, etc.), there is a world of vertical enterprise segments that can also see massive benefits from HPC. Bill used the manufacturing industry as an example – SMBs with 500 or fewer employees could see huge benefits from digital manufacturing with HPC, but need to get beyond cost (hardware/apps/staff training) and perceived risk (no physical testing is a scary prospect). This might be a perfect use case for HPC in the cloud – pay-as-you-go pricing would lower the barriers to entry, and as the use case is proven and need grows, users can move to a more traditional HPC system.


Another key theme for the DevCon was code modernization. To truly take advantage of coprocessors, apps need to be parallelized. Intel is working with the industry via more than 40 Intel Parallel Computing Centers around the world to increase parallelism and scalability through optimizations that leverage cores, caches, threads, and vector capabilities of microprocessors and coprocessors. The IPCCs are working in a number of areas to optimize code for Xeon Phi including aerospace, climate/weather modeling, life sciences, molecular dynamics and manufacturing. Intel also recently launched a catalog of more than 100 applications and solutions available for the Intel Xeon Phi coprocessor.
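To make the code modernization idea concrete, here is a minimal sketch of the kind of loop-level change involved (my own illustration, not an actual IPCC kernel): a scalar triad loop annotated so an OpenMP 4.0 compiler can both thread it across cores and vectorize it within each core. Built without OpenMP, the pragma is simply ignored and the loop still runs correctly.

```c
#include <stddef.h>

/* Hypothetical modernization sketch: one pragma asks the compiler to
 * split iterations across threads (cores) and vectorize each thread's
 * chunk (SIMD lanes) — exploiting both levels of parallelism at once. */
void triad(float *a, const float *b, const float *c, float s, size_t n) {
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + s * c[i];   /* unit-stride, no loop-carried deps */
}
```

The key property is that the loop body has no loop-carried dependences and touches memory with unit stride, which is what lets the compiler map it onto the cores, threads, and vector units mentioned above.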


Over in the Programming for Xeon Phi Coprocessor track, Professor Hiroshi Nakashima from Kyoto University gave a presentation on programming for Xeon Phi on a Cray XC30 system. The university has 5 supercomputers (Camphor, Magnolia, Camellia, Laurel, and Cinnamon), with this talk covering programming for Camellia. He ranked the programming challenges faced at Kyoto University by difficulty: inter-node = tough, intra-node = tougher, and intra-core = toughest (the last requiring innermost kernels to be rewritten and data structures redesigned). He concluded that simple porting or large-scale multi-threading may not be sufficient for good performance, and SIMD-aware kernel recoding/redesign may be necessary.
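As a rough illustration of the data-structure redesign he described (a generic sketch, not Kyoto’s actual code): converting an array-of-structures kernel to structure-of-arrays turns strided field accesses into unit-stride ones that the compiler can map onto packed SIMD loads.

```c
#include <stddef.h>

typedef struct { float x, y, z; } PointAoS;      /* fields interleaved  */
typedef struct { float *x, *y, *z; } PointsSoA;  /* one array per field */

/* AoS: each field access strides by sizeof(PointAoS); hard to vectorize. */
float sum_sq_aos(const PointAoS *p, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += p[i].x * p[i].x + p[i].y * p[i].y + p[i].z * p[i].z;
    return s;
}

/* SoA: contiguous loads per field; the compiler can emit packed SIMD. */
float sum_sq_soa(const PointsSoA *p, size_t n) {
    float s = 0.0f;
    #pragma omp simd reduction(+:s)
    for (size_t i = 0; i < n; i++)
        s += p->x[i] * p->x[i] + p->y[i] * p->y[i] + p->z[i] * p->z[i];
    return s;
}
```

Both functions compute the same result; only the memory layout changes, which is why this counts as a redesign rather than a simple port.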


Professor Hiroshi Nakashima’s slide on the Camellia supercomputer

Which brings me to possibly the hottest topic of the Developer Conference: the next Intel Xeon Phi processor (codename Knights Landing). Herbert Cornelius and Avinesh Sodani took the stage to give a few more details on the highly-anticipated processor arriving next year:

  • It will be available as a self-boot processor alleviating PCI Express bottlenecks, a self-boot processor + integrated fabric (Intel Omni-Path Architecture), or as an add-in card
  • Binary compatible with Intel Xeon processors (runs all legacy software, no recompiling)
  • The new core is based on the Silvermont microarchitecture with many updates for HPC (offering 3x higher single-thread performance over the current-generation Intel Xeon Phi coprocessors)
  • Offers improved vector density (3+ teraflops (DP) peak per chip)
  • AVX 512 ISA (new 512-bit vector ISA with Masks)
  • Scatter/Gather engine (enabling hardware support for gather/scatter)
  • New memory technology MCDRAM + DDR (large high bandwidth memory – MCDRAM and huge bulk memory – DDR)
  • New on-die interconnect – MESH (high BW connection between cores and memory)
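The vector masks in the new AVX-512 ISA are what let a vectorizing compiler handle conditionals inside loops without branching: lanes that fail the test are simply masked off. A minimal sketch (my own illustration, not Intel’s) of the kind of conditional loop this enables:

```c
#include <stddef.h>

/* A conditional loop a masking-capable compiler can vectorize: on
 * AVX-512 the comparison produces a mask register, and the store is
 * performed only in lanes where the mask is set — no branch needed. */
void clamp_negatives(float *a, size_t n) {
    #pragma omp simd
    for (size_t i = 0; i < n; i++)
        if (a[i] < 0.0f)
            a[i] = 0.0f;
}
```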


Next Intel Xeon Phi Processor (codename Knights Landing)


Another big priority for Intel is high fidelity visualization, as the phenomena being measured and modeled grow increasingly complex. Jim Jeffers led a track on the subject and gave an overview presentation covering a couple of trends (increasing data size – no surprise there – and increasing shading complexity). He then touched on Intel’s high fidelity visualization solutions, including software (Embree, the foundation for ray tracing in use by DreamWorks, Pixar, Autodesk, etc.) and efficient use of compute cluster nodes. Jim wrapped up by discussing an array of technical computing rendering tools developed by Intel and partners, all working to enable higher fidelity, higher capability, and better performance to move visualization work to the next level.


Jim Jeffers’s Visualization Tools Roadmap

These are just a few of the more than 20 sessions and topics (Lustre! Fabrics! Intel Math Kernel Library! Intel Xeon processor E5 v3!) at the HPC Developer Conference. The team is planning to post presentations to the website in the next week, so check back for conference PDFs. If you attended the conference, please fill out your email survey – we want to hear from you on what worked and what didn’t. And if we missed you this year, drop us an email (contact info is at the bottom of the conference homepage) and we’ll make sure you get an invite in the future.
