
The Data Stack


Get ready: SAP SAPPHIRE NOW and the ASUG (Americas’ SAP Users’ Group) Annual Conference come to Orlando on May 5–7.

 

This is SAP’s premier annual event: an estimated 25,000 people will attend and an additional 80,000 will tune in online. Per usual, the agenda is packed with keynotes, presentations, technical sessions, and demos. It’s a chance for SAP customers to meet with SAP experts and industry partners to learn the latest about mobile, on-demand, and in-memory computing.

 

Intel will be there too, adding to the festivities by hosting sessions, tech talks and demos—and sharing some innovations of its own.

 

I encourage you to join these technical sessions presented by Intel experts.


  • Microforum Discussion: Co-Innovation on the SAP HANA Platform with Intel and SAP (Noon–12:45 p.m., Tues., May 5, room: LB227). New transaction functionality will be at the performance forefront with the latest Intel® Xeon® processor E7 family. Learn how SAP HANA* uses new Intel® technologies for increased scalability, security, performance, and lower total cost of ownership.
  • Co-Innovation for the Data-Driven World (11 a.m.–Noon, Tues., May 5, Lenovo Booth #254). Come meet me in the Lenovo booth! I’ll discuss how Lenovo and Intel have worked together to create a high-performance platform optimized for SAP solutions that helps get the most out of data for better business decisions.


Intel experts will also give presentations in partner booths as part of our Intel Partner Passport program. Pick up your passport and make the rounds of our partners’ booths to learn how Intel helped co-engineer the latest platforms based on the Intel Xeon processor E7 family to deliver top performance, scalability, and security for a range of partner solutions. Present your completed passport with scans from each of the partners to win a rechargeable Beam Flashlight Power Bank* and to be entered into a daily drawing for a new Lenovo* tablet (must be present to win).


  • Cisco booth #330: Noon, Tues., May 5 and 12:30 p.m. Thurs., May 7
  • Cloudera booth #438: Stop by anytime to see demos
  • Dell booth #218: 2:30 p.m., Wed., May 6
  • Fujitsu booth #140: 11 a.m., Wed., May 6
  • Hitachi Data Systems booth #228: 2 p.m. Tues., May 5 and 1 p.m. Wed., May 6
  • HP booth #302: 1 p.m., Wed., May 6
  • Lenovo booth #254: 11 a.m. Tues., May 5 and 11 a.m. Wed., May 6
  • SGI booth #168: Stop by anytime to see demos
  • Virtustream booth #413: 3 p.m. Wed., May 6
  • VMware booth #272: 5 p.m. Wed., May 5

 

Be sure to stop by the Intel booth (#260) to check out tech talks by Intel partners such as VMware, Fujitsu, SGI, Dell, NEC, Cisco, Cloudera, AWS, Unisys, HP, and Lenovo, who will reveal how they incorporate the new capabilities of the latest Intel Xeon processors to boost performance and accelerate time to insight.


And speaking of partners, Intel is honored to accept two SAP Pinnacle Awards, which recognize commitment by an SAP partner to a joint strategy that delivers unmatched value to customers. Intel came out on top in two separate categories: Platform Co-Innovation Partner of the Year, and Marketing Momentum Partner of the Year. Intel is proud of the recognition, and we look forward to continuing our long collaboration with SAP. Additionally, we are co-sponsoring Tuesday night’s SAP HANA Innovation Award ceremony.


In the meantime, watch these two fun animations to learn how to “simplify” and “accelerate” your data analytics experience with Intel and SAP.

 

Follow me at @TimIntel and #TechTim for the latest news on Intel and SAP.


*Other names and brands may be claimed as the property of others.

By Caroline Chan, Wireless access segment manager, Network Platform Group, Intel



No matter where you fit in the wireless food chain, expect the transition to 5G to be exhilarating. The demand for new devices and mobile infrastructure will be incredible, making the coming years a very busy time for telecommunications equipment manufacturers (TEMs). First deployments in 2020 are a realistic objective, according to a panel of industry leaders hosted by Frost & Sullivan.1

 

5G Vision

 

Although final requirements haven’t been ironed out yet, major industry players already have high aspirations for 5G. These include major performance improvements, such as an order-of-magnitude reduction in latency (both air and end-to-end) and a more than tenfold increase in peak data rate. There will also be provisions for critical service assurance for connected cars and very low-rate services for the billions of Internet of Things (IoT) devices coming online.
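To make those aspirations concrete, here is a back-of-the-envelope sketch. The LTE baselines used (roughly 10 ms air-interface latency, 1 Gbps peak data rate) are commonly cited figures I'm assuming for illustration; the final 5G numbers were still undefined at the time.

```python
# Rough 5G targets relative to assumed LTE baselines.
# Baseline figures are illustrative, not official requirements.

lte_air_latency_ms = 10.0   # assumed typical LTE air-interface latency
lte_peak_rate_gbps = 1.0    # assumed LTE-Advanced peak data rate

# "Order of magnitude reduction in latency" -> divide by 10
target_latency_ms = lte_air_latency_ms / 10

# "More than a ten times increase in peak data rate" -> multiply by 10+
target_peak_rate_gbps = lte_peak_rate_gbps * 10

print(f"5G target air latency: ~{target_latency_ms:.0f} ms")
print(f"5G target peak rate:   >{target_peak_rate_gbps:.0f} Gbps")
```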

 

Along these lines, the Next Generation Mobile Networks (NGMN) Alliance recently released a 5G white paper proposing requirements around system performance, user experience, devices, business models, management and operation, and enhanced services.2

 

5G Technology at MWC 2015

 

To no one’s surprise, 5G was a key theme at this year’s Mobile World Congress. “Huawei, Ericsson, and Nokia Networks demonstrated technology that forms the basis of their 5G road maps; and some leading operators, such as Deutsche Telekom, also spoke about how developments, including network functions virtualization (NFV) and software defined networks (SDN), are making 5G possible,” wrote Monica Alleven, editor of FierceWirelessTech.3

 

Sky-High Forecasts

 

Over time, 5G infrastructure is expected to serve around ten thousand times more devices than are currently connected to mobile networks, with IoT devices and cars accounting for a large part of the growth. This trend will ultimately generate a tremendous amount of business for TEMs, as reflected in Intel’s 5G vision:

 


 

Radio access network (RAN) capacity expands by 1,000 times to increase mobility and coverage for subscribers, IoT devices, and cars. This includes more radio towers, small cells, and remote radio heads (RRHs) supporting Cloud-RAN (C-RAN) deployments.

 

Mobile core adds 100 times more capacity to meet the growing traffic demand. This is primarily evolved packet core (EPC) equipment, which today is represented by various LTE network elements:

 

  • Serving Gateway (Serving GW)
  • PDN Gateway (PDN GW)
  • Mobility Management Entity (MME)
  • Policy and Charging Rules Function (PCRF) Server
  • Home Subscriber Server (HSS)
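As a quick gloss on the elements listed above, here is a minimal reference sketch. The one-line role descriptions are simplifications summarized from the standard 3GPP EPC architecture, not from this post.

```python
# Simplified roles of the LTE evolved packet core (EPC) elements,
# per the 3GPP EPC architecture. Descriptions are deliberately brief.
EPC_ELEMENTS = {
    "Serving GW": "Routes and forwards user-plane packets; anchors handovers between base stations",
    "PDN GW":     "Connects users to external packet data networks; allocates IP addresses",
    "MME":        "Handles control-plane signaling: attach, authentication, paging, bearer setup",
    "PCRF":       "Decides the policy and charging rules applied to each subscriber's traffic",
    "HSS":        "Central subscriber database holding profiles and authentication data",
}

for element, role in EPC_ELEMENTS.items():
    print(f"{element:12s} {role}")
```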

 

Backhaul capacity is expected to increase ten-fold. Backhaul is the infrastructure (routers, switches, fiber, and microwave links) that connects a cell site to the mobile core.

 

Virtualized Infrastructure

 

The momentum behind virtualized equipment will grow stronger with 5G, as SDN and NFV advancements continue and spread to the RAN, customer-premises equipment (CPE), and other devices. Look for new services based on big data to influence the way networks are being constructed and monetized.

 

5G is looking like a wonderful opportunity for TEMs, perhaps even better than the first four generations of mobile networks. Read more about Intel’s 5G vision at http://iq.intel.com/will-5g-bring-new-dimension-wireless-world.

 

 

 

 

 

1 Source: Jessy Cavazos, Frost & Sullivan, “5 insights about 5G that may surprise you,” March 17, 2015, www.evaluationengineering.com/2015/03/17/5-insights-about-5g-that-may-surprise-you.

2 Source: Next Generation Mobile Networks (NGMN) Alliance, “NGMN 5G White Paper,” February 17, 2015, https://www.ngmn.org/fileadmin/ngmn/content/images/news/ngmnnews/NGMN5GWhitePaperV10.pdf.

3 Source: Monica Alleven, FierceWirelessTech, “MWC 2015: NGMN Alliance, Huawei, Ericsson, Nokia talk 5G and more,” March 9, 2015, www.fiercewireless.com/tech/story/mwc-2015-ngmn-alliance-huawei-ericsson-nokia-talk-5g-and-more/2015-03-09.

 

 

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

With the proliferation of popular software-as-a-service (SaaS) offerings, the scale of compute has changed dramatically. The boundaries of enterprise IT now extend far beyond the walls of the corporate data center.

 

You might even say those boundaries are disappearing altogether. Where we once had strictly on-premises IT, we now have a highly customized and complex IT ecosystem that blurs the lines between the data center and the outside world.

 

When your business units are taking advantage of cloud-based applications, you probably don’t know where your data is, what systems are running the workloads, or what sort of security is in place. You might not even have a view of the delivered application performance, or whether it meets your service-level requirements.

 

This lack of visibility, transparency, and control is at once unsustainable and unacceptable. And this is where IT analytics enters the picture—on a massive scale.

 

To make a successful transition to the cloud, in a manner that keeps up with the evolving threat landscape, enterprise IT organizations need to leverage sophisticated data analytics platforms that can scale to hundreds of billions of events per day. That’s not a typo—we are talking about moving from analyzing tens of millions of IT events each day to analyzing hundreds of billions of events in the new enterprise IT ecosystem.
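For a sense of what that means as a sustained rate, a quick calculation, using 200 billion events per day purely as an illustrative figure within "hundreds of billions":

```python
# Sustained ingest rate implied by "hundreds of billions of events per day".
events_per_day = 200e9           # illustrative: 200 billion events/day
seconds_per_day = 24 * 60 * 60   # 86,400 seconds

events_per_second = events_per_day / seconds_per_day
print(f"Sustained rate: ~{events_per_second / 1e6:.1f} million events/sec")

# A platform sized for tens of millions of events per *day* is roughly
# four orders of magnitude short of this sustained ingest rate.
```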

 

This isn’t just a vision; this is an inevitable change for the IT organization. To maintain control of data, to meet compliance and performance requirements, and to work proactively to defend the enterprise against security threats, we will need to gain actionable insight from an unfathomable amount of data. We’re talking about data stemming from event logs, network devices, servers, security and performance monitoring tools, and countless other sources.

 

Take the case of security. To defend the enterprise, IT organizations will need to collect and sift through voluminous amounts of two types of contextual information:

 

  • “In the moment” information on devices, networks, operating systems, applications, and locations where information is being accessed. The key here is to provide near-real-time actionable information to policy decision and enforcement points (think of credit card companies’ fraud services).
  • “After the fact” information from event logs, raw security-related events, NetFlow and packet data, along with other indicators of compromise that can be correlated with other observable/collectable information.
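A toy sketch of the correlation idea behind the second bullet: join real-time access context against indicators of compromise extracted after the fact. All field names and data here are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical sketch: correlate "in the moment" access events with
# "after the fact" indicators of compromise (IoCs), e.g. flagged IPs.
from collections import defaultdict

# Real-time access context (device, source IP, location at access time)
access_events = [
    {"user": "alice", "device": "laptop-7", "src_ip": "10.0.4.21",   "geo": "US"},
    {"user": "bob",   "device": "phone-3",  "src_ip": "203.0.113.9", "geo": "RO"},
]

# IoCs derived later from log/NetFlow analysis (documentation-range IPs)
flagged_ips = {"203.0.113.9", "198.51.100.77"}

def correlate(events, iocs):
    """Return events whose source IP matches a known indicator, keyed by user."""
    hits = defaultdict(list)
    for ev in events:
        if ev["src_ip"] in iocs:
            hits[ev["user"]].append(ev)
    return dict(hits)

suspicious = correlate(access_events, flagged_ips)
print(suspicious)
```

At real scale the join runs over streaming and historical stores rather than in-memory lists, but the correlation step is the same shape.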

 

As we enter this brave new world for IT, it’s clear that we will need an analytics platform that will allow us to store and process data at an unprecedented scale. We will also need new algorithms and new approaches that will allow us to glean near real-time and historical insights from a constant flood of data.

 

In an upcoming post, I will look at some of the requirements for this new-era analytics platform. For now, let’s just say we’re gonna need a bigger boat.

 

 

 

 

Intel and the Intel logo are trademarks of Intel Corporation in the United States and other countries. * Other names and brands may be claimed as the property of others.

Yesterday we celebrated the 50th anniversary of Moore’s Law, the foundational model of computing innovation. While the past half-century of industry innovation based on the advancement of Moore’s Law is astounding, what’s exciting today is that we’re at the beginning of the next generation of information and communication technology architecture, enabling the move to the digital services economy. Nowhere are the opportunities more acute than in the data center.


This evening, at the Code/Enterprise Series in San Francisco, I had the pleasure of sharing Intel’s perspective on the disruptive force the data center transformation will have on businesses and societies alike. Like no time before, the data center stands at the heart of technology innovation connecting billions of people and devices across the globe and delivering services to completely transform businesses, industries, and people’s lives.


[Figure: Transformed infrastructure]


To accelerate this vision, Intel is delivering a roadmap of products that enable the creation of a software-defined data center – a data center where the application defines the system. One area I’m particularly excited about is our work with the health care community to fundamentally change the experience of a cancer patient. Here, technology is used to speed up and scale the creation and application of precision medicine.


[Figure: Orchestration and management]


Our goal? By 2020, a patient can have her cancerous cells analyzed through genome sequencing, compared to countless other sequences through a federated, trusted cloud, and a precision treatment created… all in one day.


We are also expanding our business focus into new areas where our technology can accelerate business transformation, a clear example being the network. Our recent announcements with Ericsson and Huawei highlight deep technical collaborations that will help the telco industry deliver new services to their end users with greater network utilization through virtualization and new business models through the cloud. At the heart of this industry transformation is open, industry standard solutions running on Intel architecture.


Transforming health care and re-architecting the network are just two examples of Intel harnessing the power of Moore’s Law to transform businesses, industries, and the lives of us all. 

Intel has been advancing its strategy to embed Intel® Ethernet into system-on-chip (SoC) designs and products that will enable exciting new applications. To enable these SoCs, the Networking Division has been developing Ethernet IP blocks based on our market-leading 10GbE controller.

 

Last month, we had one of our biggest successes to date with the launch of the Intel® Xeon® processor D product family. The Xeon D, as it is known, is the first SoC from Intel that combines the power of a Xeon processor and the performance of Intel 10GbE in a single chip.

 

In one sense, the integration of this IP into the Xeon chip was very fast – taking only one year from concept to tape-in. But you could also say that the revolution was 12 years in the making, because that’s how long Intel has been delivering 10GbE technology and perfecting the performance, features, drivers, and software that customers trust.

 

In fact, I like to say that this device has a trifecta of reliability advantages:

  • Proven Xeon performance
  • Proven Intel Ethernet
  • Intel leading-edge manufacturing process

 

The Xeon D is developed for applications that will make the most use of this trifecta: emerging micro servers, wireless base stations, routers and switches, security and network appliances, as well as the build-out of Software Defined Networking (SDN) and Network Functions Virtualization (NFV). The opportunities are endless.

 

These applications make use of the high performance and tight integration between the processor and integrated network controller. And all of them have an ongoing need for components that reduce cost, shrink system footprint and reduce power consumption, both in the data center and at the network edge.

 

The road to the Xeon D SoC

 

While Xeon D is certainly a highlight of our Ethernet IP strategy, it’s not the first successful SoC integration of Intel® Ethernet.

 

Our first announced venture in this area was the Intel® Atom™ C2000 processor family, targeted at networking and communications devices and at emerging microservers and storage devices. These processors include four ports of Gigabit Ethernet that can also support up to 2.5GbE for backplane implementations, for up to 10GbE of total throughput.

 

It’s great to see Intel® Ethernet play such a big role in a significant product like the Xeon D.  The combination of our proven Ethernet with the performance of a Xeon CPU offers our customers a tremendous value, and will open up new and exciting applications.

Today, 70 percent of US consumer Internet traffic is video, and it’s growing every day, with over-the-top (OTT) providers delivering TV and movies to consumers, and broadcasters and enterprises streaming live events. Cloud computing is changing the landscape for video production as well. Much of the work that used to require dedicated workstations is being moved to servers in data centers and offered remotely by cloud service providers and private cloud solutions. As a result, the landscape for content creation and delivery is undergoing significant changes. The National Association of Broadcasters (NAB) show in Las Vegas highlights these trends. And Intel will be there, highlighting how we help broadcasters, distributors, and video producers step up to the challenges.

 

Intel processors have always been used for video processing, but today's video workloads place new demands on processing hardware. The first new demand is for greater processing performance: as video data volume explodes, encoding schemes become more complex and processing power becomes more critical. The second demand is for increased data center density: as video processing moves to servers in data centers, service cost is driven by space and power. And the third demand is for openness: developers want language- and platform-independent APIs like OpenCL* to access CPU and GPU graphics functions. The Intel® Xeon® processor E3 platform with integrated Intel® Iris™ Pro Graphics and Intel® Quick Sync Video transcoding acceleration provides the performance and open development environment required to drive innovation and create the optimized video delivery systems needed by today's content distributors. And it does so with unparalleled density and power efficiency.

 

The NAB 2015 show provides an opportunity for attendees to see how these technologies come together in new, more powerful industry solutions  to deliver video content across the content lifecycle—acquire, create, manage, distribute, and experience.

 

We've teamed with some of our key partners at NAB 2015 to create the StudioXperience showcase, which demonstrates a complete end-to-end video workflow across the content lifecycle. Waskul TV will generate real-time 4K video and pipe it into a live production facility featuring Xeon E3 processors in an HP Moonshot* server and Envivio Muse* Live. The workflow is divided between on-air HD production for live streaming and 4K post-production for editorial and on-demand delivery. The cloud-based content management and distribution workflow is provided by Intel-powered technologies from technology partners to create a solution that streams our content to the audience via Waskul TV.

 

Other booths at the show let attendees drill down into some of the specific workflows and the technologies that enable them. For example, "Creative Thinking 800 Miles Away—It's Possible" lets attendees experience low-latency remote access for interactive creation and editing of video content in the cloud. You'll see how Intel technology lets you innovate and experiment with modeling, animation, and rendering effects—anywhere, anytime. And because the volume of live video content generated by broadcasters, service providers, and enterprises continues to explode, we need faster and more efficient ways of encoding it for streaming over the Internet. So Haivision's "Powerful Wide-Scale Video Distribution" demo will show how their Intel-based KulaByte* encoders and transcoders can stream secure, low-latency HD video at extremely low bitrates over any network, including low-cost, readily available, public Internet connections.

 

To learn more about how content owners, service providers, and enterprises are using Intel Xeon processor E3-based platforms with integrated HD Graphics and Intel Quick Sync Video to tame the demand for video, check out the interview I did on Intel Chip Chat recently. And even if you're not attending NAB 2015, you can still see it all in action. I'll be giving a presentation Tuesday, April 14, at 9:00 a.m. Pacific time. We'll stream it over the very systems I've described, and you can watch it on Waskul.TV. Tune in.

 

 

 

 


By Christian Buerger, Technologist, SDN/NFV Marketing, Intel

 

 

This week I am attending the Intel Developer Forum (IDF) in Shenzhen, China, to promote Intel’s software defined networking (SDN) and network functions virtualization (NFV) software solutions. During this year’s IDF, Intel made several announcements, and our CEO Brian Krzanich showcased Intel’s innovation leadership across a wide range of technologies with our local partners in China. On the heels of Krzanich’s announcements, Intel Software & Services Group Senior VP Doug Fisher extended Krzanich’s message to stress the importance of open source collaboration in driving industry innovation and transformation, citing OpenStack and Hadoop as prime examples.

 

I participated in the signing event and press briefing for a ground-breaking announcement between Intel and Huawei’s enterprise division to jointly define a next-generation Network as a Service (NaaS) SDN software solution. Under the umbrella of Intel’s Open Network Platform (ONP) server reference platform, Intel and Huawei intend to jointly develop an SDN reference architecture stack. This stack is based on integrating Intel architecture-optimized open source ingredients from projects such as Cloud OS/OpenStack, OpenDaylight (ODL), the Data Plane Development Kit (DPDK), and Open vSwitch (OVS) with virtual network appliances such as a virtual services router and virtual firewall. We are also deepening existing collaboration initiatives in various open source projects such as ODL (on Service Function Chaining and performance testing), OVS (SR-IOV-based performance enhancements), and DPDK.

 

In addition to the broad range of open source SDN/NFV collaboration areas this agreement promotes, what makes it so exciting to me personally is the focus on the enterprise sector. Specifically, together with Huawei we are planning to develop reference solutions that target specific enterprise vertical markets such as education, financial services, and government. Together, we are extending our investments into SDN and NFV open source projects to not only accelerate advanced NaaS solutions for early adopters in the telco and cloud service provider space, but also to create broad opportunities to drive massive SDN adoption in the enterprise in 2015. As Swift Liu, President of Huawei’s Switch and Enterprise Communication Products, succinctly put it, Intel and Huawei “are marching from software-hardware collaboration to the entirely new software-defined era in the enterprise.”

 

 

 

 


At a press event on April 9, representatives from the U.S. Department of Energy announced they had awarded Intel contracts for two supercomputers, totaling just over $200 million, as part of its CORAL program. Theta, an early production system, will be delivered in 2016 and will scale to 8.5 petaFLOPS and more than 2,500 nodes, while Aurora, a 180-petaFLOPS system with more than 50,000 nodes, will be delivered in 2018. This represents a strong collaboration among Argonne National Laboratory, prime contractor Intel, and subcontractor Cray on a highly scalable and integrated system that will accelerate scientific and engineering breakthroughs.

 


Rendering of Aurora

 

Dave Patterson (President of Intel Federal LLC and VP of the Data Center Group) led the Intel team on the ground in Chicago; he was joined on stage by Peter Littlewood (Director of Argonne National Laboratory), Lynn Orr (Undersecretary for Science and Energy, U.S. Department of Energy), and Barry Bolding (Vice President of Marketing and Business Development for Cray). Also joining the press conference were Dan Lipinski (U.S. Representative, Illinois District 3), Bill Foster (U.S. Representative, Illinois District 11), and Randy Hultgren (U.S. Representative, Illinois District 14).

 

Dave Patterson at the Aurora Announcement (Photo Courtesy of Argonne National Laboratory)

 

This cavalcade of company representatives disclosed details on the Aurora 180-petaFLOPS, 50,000-node, 13-megawatt system. It utilizes much of the Intel product portfolio via Intel’s HPC scalable system framework, including future Intel Xeon Phi processors (codenamed Knights Hill), second-generation Intel Omni-Path Fabric, and a new memory hierarchy composed of Intel Lustre* software, burst buffer storage, and persistent memory through high-bandwidth on-package memory. The system will be built on Cray’s next-generation Shasta platform.
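A couple of figures can be derived from the disclosed specs (treating "greater than 50,000 nodes" as a round 50,000 for the arithmetic):

```python
# Rough per-node and per-watt figures implied by the disclosed Aurora specs.
peak_flops = 180e15   # 180 petaFLOPS
nodes = 50_000        # ">50,000 nodes" taken as a round figure
power_watts = 13e6    # 13 megawatts

flops_per_node = peak_flops / nodes        # peak compute per node
flops_per_watt = peak_flops / power_watts  # peak energy efficiency

print(f"~{flops_per_node / 1e12:.1f} TFLOPS per node")
print(f"~{flops_per_watt / 1e9:.1f} GFLOPS per watt")
```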

 

Peter Littlewood kicked off the press conference by welcoming everyone and introducing Argonne National Laboratory – the Midwest’s largest federally funded R&D center, fostering discoveries in energy, transportation, protecting the nation, and more. He handed off to Lynn Orr, who announced the $200 million contract and the Aurora and Theta supercomputers. He discussed some of the architectural details of Aurora and talked about the need for the U.S. to dedicate funds to build supercomputers that reach the next exascale echelon, and how that will fuel scientific discovery – a theme echoed by many of the speakers to come.

 

Dave Patterson took the stage to give background on Intel Federal, a wholly owned subsidiary of Intel Corporation. In this instance, Intel Federal conducted the contract negotiations for CORAL. Dave touched on the robust collaboration with Argonne and Cray needed to bring Aurora online in 2018, and introduced Intel’s HPC scalable system framework – a flexible blueprint for developing high-performance, balanced, power-efficient, and reliable systems capable of supporting both compute- and data-intensive workloads.

 

Next up, Barry Bolding from Cray talked about the platform underpinning Aurora – the next-generation Shasta platform. He mentioned that, when deployed, Aurora has the potential to be one of the largest and most productive supercomputers in the world.

 

And finally, Dan Lipinski, Bill Foster, and Randy Hultgren, all representing Illinois (Argonne’s home base) in the U.S. House of Representatives, each gave a few short remarks. They echoed Lynn Orr’s earlier thoughts that the United States needs to stay committed to building cutting-edge supercomputers to stay competitive in a global environment and tackle the next wave of scientific discoveries. Representative Hultgren put it very succinctly: “[The U.S.] needs big machines that can handle big jobs.”

 


Dan Lipinski (Photo Courtesy of Argonne National Laboratory)

 


Bill Foster (Photo Courtesy of Argonne National Laboratory)



Randy Hultgren (Photo Courtesy of Argonne National Laboratory)

 

After the press conference, Mark Seager (Intel Fellow, CTO of the Tech Computing Ecosystem) commented: “We are defining the next era of supercomputing.” Al Gara (Intel Fellow, Chief Architect of Exascale Systems) took it a step further: “Intel is not only driving the architecture of the system, but also the new technologies that have emerged (or will be needed) to enable that architecture. We have the expertise to drive silicon, memory, fabric and other technologies forward and bring them together in an advanced system.”

 


The Intel and Cray teams prepping for the Aurora announcement

 

Aurora’s disruptive technologies are designed to work together to deliver breakthroughs in performance, energy efficiency, overall system throughput and latency, and cost to power. This signals the convergence of traditional supercomputing and the world of big data and analytics that will drive impact for not only the HPC industry, but also more traditional enterprises.

 

Argonne scientists – who have a deep understanding of how to create software applications that maximize available computing resources – will use Aurora to accelerate discoveries surrounding:

  • Materials science: Design of new classes of materials that will lead to more powerful, efficient and durable batteries and solar panels.
  • Biological science: Gaining the ability to understand the capabilities and vulnerabilities of new organisms that can result in improved biofuels and more effective disease control.
  • Transportation efficiency: Collaborating with industry to improve transportation systems by designing enhanced aerodynamic features, as well as enabling production of better, more efficient, and quieter engines.
  • Renewable energy: Wind turbine design and placement to greatly improve efficiency and reduce noise.
  • Alternative programming models: Partitioned Global Address Space (PGAS) as a basis for Coarray Fortran and other unified address space programming models.

 

The Argonne Training Program on Extreme-Scale Computing will be a key program for training the next generation of code developers – having them ready to drive science from day one when Aurora is made available to research institutions around the world.

 

For more information on the announcement, you can head to our new Aurora webpage or dig deeper into Intel’s HPC scalable system framework.

 

 

 

 

 


In March, we started off covering the future of next generation Non-Volatile Memory technologies and the Open Compute Project Summit, as well as the recent launch of the Intel® Xeon® Processor D-1500 Product Family. Throughout the second half of March we archived Mobile World Congress podcasts recorded live in Barcelona. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel® Chip Chat:

  • The Future of High Performance Storage with NVM Express – Intel® Chip Chat episode 370: Intel Senior Principal Engineer Amber Huffman stops by to talk about the performance benefits enabled when NVM Express is combined with the Intel® Solid-State Drive Data Center Family for PCIe. She also describes the future of NVMe over fabrics and the coming availability of NVMe on the client side within desktops, laptops, 2-in-1s, and tablets. To learn more visit: http://www.nvmexpress.org/
  • The Intel® Xeon® Processor D-1500 Product Family – Intel® Chip Chat episode 371: John Nguyen, a Senior Product Manager at Supermicro, discusses the Intel® Xeon® Processor D-1500 Product Family launch and how Supermicro is integrating this new solution into their products today. He illustrates how the small footprint and low-power capabilities of the Intel Xeon Processor D-1500 Product Family are facilitating the production of small department servers for the enterprise, as well as enabling small businesses to take advantage of Intel Xeon processor family performance. To learn more visit: www.supermicro.com/products/embedded/
  • Innovating the Cloud w/ Intel® Xeon® Processor D-1500 Product Family – Intel® Chip Chat episode 372: Nidhi Chappell, Entry Server and SoC Product Marketing Manager at Intel, stops by to announce the launch of the Intel® Xeon® Processor D-1500 Product Family. She illustrates how this is the first Xeon processor in a SoC form factor and outlines how the low power consumption, small form factor, and incredible performance of this solution will greatly benefit the network edge and further enable innovation in the telecommunications industry and the data center in general. To learn more visit: www.intel.com/xeond
  • Making the Open Compute Vision a Reality – Intel® Chip Chat episode 373: Raejeanne Skillern, General Manager of the Cloud Service Provider Organization within the Data Center Group at Intel, explains Intel’s involvement in the Open Compute Project and the technologies Intel will be highlighting at the 2015 Open Compute Summit in San Jose, California. She discusses the launch of the new Intel® Xeon® Processor D-1500 Product Family, as well as how Intel will be demoing Rack Scale Architecture and other solutions at the Summit that are aligned with OCP specifications.
  • The Current State of Mobile and IoT Security – Intel® Chip Chat episode 374: In this archive of a livecast from Mobile World Congress in Barcelona, Gary Davis (twitter.com/garyjdavis), Chief Consumer Security Evangelist at Intel Security, stops by to talk about the current state of security within the mobile and Internet of Things industries. He emphasizes how vulnerable many wearable devices and smartphones can be to cybercriminal attacks and discusses easy ways to help ensure that your personal information is protected on your devices. To learn more visit: www.intelsecurity.com or home.mcafee.com
  • Enabling Next Gen Data Center Infrastructure – Intel® Chip Chat episode 375: In this archive of a livecast from Mobile World Congress, Howard Wu, Head of Product Line for Cloud Hardware and Infrastructure at Ericsson, chats about the newly announced collaboration between Intel and Ericsson to launch a next generation data center infrastructure. He discusses how this collaboration, enabled in part by Intel® Rack Scale Architecture, is driving the optimization and scaling of cloud resources across private, public, and enterprise cloud domains for improved operational agility and efficiency. To learn more visit: www.ericsson.com/cloud
  • Rapidly Growing NFV Deployment – Intel® Chip Chat episode 376: In this archive of a livecast from Mobile World Congress, John Healy, Intel’s GM of the Software Defined Networking Division, stops by to talk about the current state of Network Functions Virtualization adoption within the telecommunications industry. He outlines how Intel is driving the momentum of NFV deployment through initiatives like Intel Network Builders, and how embracing the open source community through projects such as OPNFV is accelerating vendors’ ability to offer solutions targeted at network function virtualization.

 

Intel, the Intel logo, and Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

By Dave Patterson, President, Intel Federal LLC and Vice President, Data Center Group, Intel

 

 

The U.S. Department of Energy’s (DOE) CORAL program (Collaboration of Oak Ridge, Argonne and Lawrence Livermore National Laboratories) is impressive for a number of advanced technical reasons. But the recent award announcement to Intel has shone a spotlight on another topic I am very excited about: Intel Federal LLC.

 

Intel Federal is a subsidiary that enables Intel to contract directly and efficiently with the U.S. Government. Today we work with DOE across a range of programs that address some of the grand scientific and technology challenges that must be solved to achieve extreme scale computing. One such program is Intel’s role as a prime contractor in the Argonne Leadership Computing Facility (ALCF) CORAL program award.

 

Intel Federal is a collaboration center. We’re involved in strategic efforts that need to be orchestrated in direct relationship with the end users. This involves the engagement of diverse sets of expertise from Intel and our partners, ranging from providers of hardware to system software, fabric, memory, storage and tools. The new supercomputer being built for ALCF, Aurora, is a wonderful example of how we bring together talent from all parts of Intel in collaboration with our partners to realize unprecedented technical breakthroughs.

 

Intel’s approach to working with the government is unique – I’ve spent time in the traditional government contracting space, and this is anything but. Our work today is focused on understanding how Intel can best bring value through leadership and technology innovation to programs like CORAL.

 

But what I’m most proud of in helping bring Aurora to life is what this architectural direction with Intel’s HPC scalable system framework represents in terms of close collaboration in innovation and technology. Involving many different groups across Intel, we’ve built excellent relationships with the team at Argonne to gather the competencies we need to support this monumental effort.

 

Breakthroughs in leading technology are built into Intel’s DNA. We’re delighted to be part of CORAL, a great program with far-reaching impact for science and discovery. It stretches us, redefines collaboration, and pushes us to take our game to the next level.  In the process, it will transform the HPC landscape in ways that we can’t even imagine – yet.

 

Stay tuned to CORAL, www.intel.com/hpc

 

 

 

 

 

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

By Charlie Wuischpard, VP & GM High Performance Computing at Intel

 

Every now and then in business it all really comes together— a valuable program, a great partner, and an outcome that promises to go far beyond just business success. That's what I see in our newly announced partnership with the Supercomputing Center of the Chinese Academy of Sciences. We're collaborating to create an Intel Parallel Computing Center (Intel® PCC) in the People's Republic of China. We expect our partnership with the Chinese Academy of Sciences to pay off in many ways.

 

Through working together to modernize LAMMPS, the world’s most broadly adopted molecular dynamics application, Intel and the Chinese Academy of Sciences will help researchers and scientists study everything from physics and semiconductor design to biology, pharmaceuticals, and DNA analysis, and ultimately aid in identifying cures for diseases.

 

The establishment of the Intel® PCC with the Chinese Academy of Sciences is an important step. The relationship grows from our ongoing commitment to cultivate our presence in China and to find and engage Chinese businesses and institutions that will collaborate to bring their talents and capabilities to the rest of the world. Their Supercomputing Center has been focused on operating and maintaining supercomputers and exploiting and supporting massively parallel computing since 1996. Their work in high performance computing, scientific computing, computational mathematics, and scientific visualization has earned national and international acclaim. And it has resulted in important advances in the simulation of large-scale systems in fields like computational chemistry and computational material science.

 

We understand that solving the biggest challenges for society, industry, and science requires a dramatic increase in computing efficiency. Many organizations leverage high performance computing to solve these challenges, but few realize they are using only a small fraction of the compute capability their systems provide. Taking advantage of the full potential of current and future hardware (cores, threads, caches, and SIMD capability) requires what we call “modernization”. Building supercomputing centers is an investment; ensuring the software fully exploits modern hardware helps maximize the impact of that investment. Customers will realize the greatest long-term benefit when they pursue modernization in an open, portable, and scalable manner.
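To make “modernization” concrete, here is a minimal, hypothetical sketch (not LAMMPS code; the pair-energy formula is a toy) of restructuring a serial loop so its independent iterations can be spread across cores, using Python’s standard library as a stand-in for the OpenMP threading and SIMD vectorization done in real HPC codes:

```python
from multiprocessing import Pool

def lj_energy(r2):
    # Toy Lennard-Jones-style pair term for a squared distance r2.
    inv6 = 1.0 / (r2 ** 3)
    return 4.0 * (inv6 * inv6 - inv6)

def energy_serial(r2_values):
    # "Legacy" version: one scalar loop on one core.
    return sum(lj_energy(r2) for r2 in r2_values)

def energy_parallel(r2_values, workers=4):
    # "Modernized" version: the same math, but the iterations are
    # independent, so they can be farmed out across cores (a stand-in
    # for OpenMP threads and SIMD lanes in C or Fortran).
    with Pool(workers) as pool:
        return sum(pool.map(lj_energy, r2_values, chunksize=256))

if __name__ == "__main__":
    r2s = [1.0 + 0.001 * i for i in range(2000)]
    # Both versions compute the same total energy.
    assert abs(energy_serial(r2s) - energy_parallel(r2s)) < 1e-9
```

The essential point carries over to compiled HPC languages: once the loop body has no dependencies between iterations, the same computation can be mapped onto many threads and vector lanes instead of a single scalar stream.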

 

The goals of the Intel® PCC effort go beyond just creating software that takes advantage of hardware, all the way to delivering value to researchers and other users around the world. Much of our effort is training and equipping students, scientists, and researchers to write modern code that will ultimately accelerate discovery.

 

We look forward to our partnership with the Chinese Academy of Sciences and the great results to come from this new Intel® Parallel Computing Center. You can find additional information regarding this effort by visiting our Intel® PCC website.

 

 

 

 

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

It is a very exciting time for the information and communication technology (ICT) industry as it continues the massive transformation to the digital service, or “on demand”, economy.  Earlier today I had the pleasure of sharing Intel’s perspective and vision of the data center market at IDF15 in Shenzhen, and I can think of no place better than China to exemplify how the digital services economy is impacting people’s everyday lives.  In 2015, ICT spending in China will exceed $465 billion, comprising 43% of global ICT spending growth.  ICT is increasingly the means to fulfill business, public sector, and consumer needs, and the rate at which new services are being launched and existing services are growing is tremendous.  The result is three significant areas of growth for data center infrastructure: the continued build-out of Cloud computing, HPC, and Big Data.

 

Cloud computing provides on-demand, self-serve attributes that enable application developers to deliver new services to the markets in record time.  Software Defined Infrastructure, or SDI, optimizes this rapid creation and delivery of business services, reliably, with a programmable infrastructure.  Intel has been making great strides with our partners towards the adoption of SDI.  Today I was pleased to be joined by Huawei, who shared their efforts to enable the network transformation, and Alibaba, who announced their recent success in powering on Intel’s Rack Scale Architecture (RSA) in their Hangzhou lab.

 

Just as we know the future of the data center is software defined, the future of High Performance Computing is software optimized. IDC predicts that the penalties for neglecting the HPC software stack will grow more severe, making modern, parallel, optimized code essential for continued growth. To this end, today we announced that the first Intel® Parallel Computing Center in China has been established in Beijing to drive the next generation of high performance computing in the country.  Our success is also dependent on strong partnerships, so I was happy to have Lenovo onstage to share details on their new Enterprise Innovation Center focused on enabling our joint success in China.

 

As the next technology disruptor, Big Data has the ability to transform all industries.  In healthcare, Big Data analytics makes precision medicine a possibility, providing tremendous opportunities to advance the treatment of life-threatening diseases like cancer.  By applying the latest Cloud, HPC, and Big Data analytics technology and products, and working collectively as an industry, by 2020 we can sequence a whole genome, identify the fundamental genes that cause a cancer, and find the means to block them through personalized treatment, all in one day.

 

Through our partnerships with China’s technology leaders, we will collectively enable the Digital Service Economy and deliver the next decade of discovery, solving the biggest challenges in society, industry, and the sciences.

 

 

 

 

 

© 2015, Intel Corporation. All rights reserved. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.

Demand for efficiency, flexibility, and scalability continues to increase, and the data center must keep pace with the movement to digital business strategies.  As Diane Bryant, senior vice president and general manager of Intel’s Data Center Group, stated, “We are in the midst of a bold industry transformation as IT evolves from supporting the business to being the business. This transformation and the move to cloud computing calls into question many of the fundamental principles of data center architecture.”

 

Those “fundamental principles of data center architecture” are on a collision course with the direction that virtualization has led us.  Virtualization, in conjunction with automation and orchestration, is leading us to the Software Defined Infrastructure (SDI). The demands of SDI are driving new hardware developments, which will open a whole new world of possibilities for running a state-of-the-art data center and will eventually leave our legacy infrastructure behind.  While we’re not quite there yet, as different stages still need to mature, the process has the power to transform the data center.

 


 

Logical Infrastructure

 

SDI rebuilds the data center into a landing zone for new business capabilities. Instead of comprising multiple highly specialized components, it’s a cohesive and comprehensive system that meets all the demands placed on it by highly scalable, completely diversified workloads, from the traditional workloads to cloud-aware applications.

 

This movement to cloud-aware applications will drive the need for SDI; by virtualizing and automating the hardware that powers software platforms, infrastructure will become more powerful, cost-effective, and efficient. This migration away from manual upkeep of individual resources will also allow systems, storage, and network administrators to shift their focus to more important tasks instead of acting as “middleware” to connect these platforms.

 

Organizations will be able to scale their infrastructure in support of the new business services and products, and bring them to market much more quickly with the power of SDI.
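As an illustrative sketch only (real SDI stacks expose this through orchestration APIs, not Python dicts), the core of “software defined” resource management is a reconcile loop that diffs desired state against actual state and derives the provisioning actions a controller should take:

```python
def reconcile(desired, actual):
    """Toy software-defined reconcile loop: compare the desired state of
    each resource against the actual state and emit provisioning actions.
    Illustrative only; a real SDI controller drives orchestration APIs."""
    actions = []
    for resource, want in desired.items():
        have = actual.get(resource, 0)
        if have < want:
            actions.append(("provision", resource, want - have))
        elif have > want:
            actions.append(("release", resource, have - want))
    return actions

# Scaling a service up: desired state asks for 8 VMs; 5 exist, storage is fine.
actions = reconcile({"vm": 8, "storage_tb": 4}, {"vm": 5, "storage_tb": 4})
assert actions == [("provision", "vm", 3)]
```

The point of the pattern is that administrators declare *what* they need, and the infrastructure computes *how* to get there, which is exactly the shift away from manual upkeep described above.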

 

Hardware Still Matters

 

As the data center moves toward an SDI-driven future, CIOs should be wary of thinking that hardware no longer counts. Hardware that works in conjunction with software will be critical: it must ensure that the security and reliability of workloads are fully managed, and it must provide the telemetry and extensibility that allow specific capabilities to be optimized and controlled within the hardware.

 

The Future of the Data Center Lies with SDI

 

Data centers must be agile, flexible, and efficient in this era of transformative IT. SDI allows us to achieve greater efficiency and agility by allocating resources according to our organizational needs, application requirements, and infrastructure capabilities.

 

As Bryant concluded, “Anyone in our industry trying to cling to the legacy world will be left behind. We see the move to cloud services and software defined infrastructure as a tremendous opportunity and we are seizing this opportunity.”

 

To continue the conversation, please follow me on Twitter at @EdLGoldman or use #ITCenter.

I had the opportunity to attend Mobile World Congress and the Open Compute Summit this year, and we demonstrated Red Rock Canyon (RRC) at both venues. We first disclosed RRC at Fall IDF in San Francisco last year. RRC is Intel’s new multi-host Ethernet controller silicon with integrated Ethernet switching resources.


The device contains multiple integrated PCIe interfaces along with Ethernet ports that can operate at up to 100G. The target markets include network appliances and rack scale architecture, which is why MWC and the OCP Summit were ideal venues to demonstrate the performance of RRC in these applications.

 

Mobile World Congress


This was my first time at MWC, and it was an eye opener: eight large exhibit halls in the middle of Barcelona, with moving walkways to shuffle you from one hall to the next, booths the size of two-story buildings, and 93,000 attendees - a record number, according to the MWC website.


At the Intel booth, ours was one of several demonstrations of technology for network infrastructure. Our demo was entitled “40G/100GbE NSH Service Chaining in Intel ONP” and highlighted service function forwarding using network services headers (NSH) on both the Intel XL710 40GbE controller and the Intel Ethernet 100Gbps DSI adapter that uses RRC switch silicon. In case you’re not familiar with NSH, it’s a new virtual network overlay industry initiative driven by Cisco, which allows flows to be identified and forwarded to a set of network functions by creating a virtual network on top of the underlying physical network.


The demo was a collaboration with Cisco. It uses an RRC NIC as a 100GbE traffic generator to send traffic to an Intel Sunrise Trail server that receives the traffic at 100Gbps using another RRC 100GbE NIC. Sunrise Trail then forwards 40Gbps worth of traffic to a Cisco switch, which, in turn, distributes the traffic to both another Sunrise Trail server and a Cisco UCS server, both of which contain Intel® Ethernet XL710 Converged Network Adapters.


The main point of the demonstration is that the RRC NIC, the XL710 NIC, and the Cisco switch can create a wire-speed service chain by forwarding traffic using the destination information in the NSH header. For NFV applications, the NICs can also forward traffic to the correct VM based on this NSH information.
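For readers unfamiliar with what the hardware is keying on, here is a hypothetical sketch of extracting the service-chaining fields from an NSH header in Python. The bit layout shown follows RFC 8300, which was finalized after this demo (the draft current in 2015 differed in some base-header bits), so treat this purely as an illustration:

```python
import struct

def parse_nsh(packet: bytes):
    """Extract service-chaining fields from the first 8 bytes of an NSH header.

    Layout per RFC 8300: a 4-byte base header followed by a 4-byte
    service path header (24-bit Service Path Identifier + 8-bit Service Index).
    """
    base, sp = struct.unpack("!II", packet[:8])
    return {
        "version": base >> 30,
        "length_words": (base >> 16) & 0x3F,  # header length in 4-byte words
        "md_type": (base >> 8) & 0xF,
        "next_protocol": base & 0xFF,
        "service_path_id": sp >> 8,           # which service chain the flow is on
        "service_index": sp & 0xFF,           # position within that chain
    }

# Toy header encoding SPI=42, SI=255 (version 1, length 2 words, MD type 1).
hdr = struct.pack("!II", (0x1 << 30) | (2 << 16) | (0x1 << 8) | 0x1, (42 << 8) | 255)
fields = parse_nsh(hdr)
assert (fields["service_path_id"], fields["service_index"]) == (42, 255)
```

A forwarding element bases its next-hop decision on the (service path identifier, service index) pair, which is the kind of lookup the demo’s wire-speed service chain relies on.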


Network Function Virtualization (NFV) was a hot topic at MWC this year, and we had many customers from leading network service providers and OEMs come by our booth to see the demo. In some cases they were more interested in our 100GbE link, which I was told was one of the only demos of this kind at the show.


Another 100G Intel Ethernet demo was at the Ericsson booth where they announced their project Athena, which demonstrated a 100GbE link using two RRC-based NIC cards. Athena is designed for hyperscale cloud data centers using Intel’s rack scale architecture framework.

 

Open Compute Project Summit


The very next week, I traveled to San Jose to attend the Open Compute Project Summit where RRC was part of a demonstration of Intel’s latest software development platform for its rack scale architecture. OCP was a much smaller show focused on the optimization of rack architectures for hyperscale data centers. At last year’s conference, we demonstrated an RSA switch module using our Intel Ethernet Switch FM6000 along with four Intel 10GbE controller chips.


This year, we showed our new multi-host RSA module that effectively integrates all of these components into a single device while providing 50Gbps of bandwidth to each server along with multiple 100GbE ports out of the server shelf. This RSA networking topology not only provides a 4:1 cable reduction, it also enables flexible network topologies. We also demonstrated our new open source ONP Linux kernel driver, which will be upstreamed in 2015, consistent with our Open Network Platform strategy.


We had a steady stream of visitors to our booth, thanks in part to an excellent bandwidth performance demo.


After first disclosing RRC at IDF last year, it was great to be able to have three demonstrations of its high-performance capabilities at both MWC and the OCP Summit. It doesn’t hurt that these conferences are also targeted at two key market segments for RRC: network function virtualization and rack scale architecture.


We plan to officially launch RRC later this year, so stay tuned for much more information on how RRC can improve performance and/or reduce cost in these new market segments.


Q1: Intel is engaged in a number of SDN and NFV community and standards-developing organizations worldwide. What is the reasoning behind joining the new Korea SDN/NFV Forum?

 

A1: Intel Korea is firmly committed to helping our local Korean partner ecosystem fully participate in and benefit from the global networking industry’s transformation towards software-defined networking (SDN) and network functions virtualization (NFV). Incorporating the latest SDN and NFV standards and open source technologies from initiatives such as OpenStack, OpenDaylight, OPNFV, OpenFlow, and others into a coherent, value-added, and stable software platform is complex. Working with our partners in the Korea SDN/NFV Forum, we hope to contribute to reducing this complexity, thereby accelerating the adoption of SDN and NFV in Korea itself as well as globally through solutions exported by the Korean ICT industry.

 

Q2: What can Intel contribute to the Korea SDN/NFV Forum?

 

A2: Intel has been at the forefront of SDN and NFV technology for more than five years. During that time, the company has invested in working with a wide range of technology partners to develop cutting-edge SDN/NFV hardware and software, as well as best practices for rapid deployment. This customer-centric expertise in architecting, developing, and deploying SDN/NFV in cloud, enterprise data center, and telecommunication networks is core to our contribution to the Korea SDN/NFV Forum.

 

Another concrete example of our expertise is our deep experience in testing and validating SDN/NFV hardware and software solutions, an important component of developing credible proofs of concept (PoCs) for the next generation of SDN/NFV software. In addition, Intel operates Intel Network Builders, an SDN/NFV ecosystem program for solution partners and end users. Korea SDN/NFV Forum members can leverage this ecosystem to promote their products and solutions globally.

 

Q3: Are there any specific working groups within the Forum that Intel will focus on?

 

A3:  Intel plans to contribute to the success of all working groups with a focus on the standard technology, service PoC, policy development, and international relations working groups. Through the global Intel network, we are also aiming to assist in the collaboration of the Korea SDN/NFV Forum with other international organizations.

 

Q4: What is Intel’s main goal for participating in the Korea SDN/NFV Forum in 2015?

 

A4: Primarily, we want to add value by helping the Korea SDN/NFV Forum get established and become a true source of SDN/NFV innovation for our partner ICT ecosystem here in Korea.
