
The Data Stack


By Caroline Chan, Intel

 

I recently attended a fairly eventful Small Cell Summit in London. This is our third year of participation in the largest small cell ecosystem gathering in the industry, but our first year presenting our new small cell system-on-a-chip (SoC) portfolio, enabled in part through our acquisition of Mindspeed. So you could say that we are now a full-service small cell silicon vendor, from baseband SoC to intelligent services on the small cell. This uniquely differentiates our offering, as showcased in the three demos we presented: a dual-mode (WCDMA + FDD LTE) modem; a smart small cell based on the Rangeley reference design board, with the SpiderCloud service node running caching, video analytics, and McAfee IPS software; and a secured small cell gateway from Clavister. Dan Rodriguez, the product marketing director for wireless infrastructure at Intel, gave a well-received keynote on the NFV transformation of wireless access, and we had more than 20 meetings in two days. This all adds up to considerable buzz around small cells and Intel's role within the industry.

 

A couple of key takeaways from the conference and from meetings with operators, OEMs, and ecosystem vendors:

 

  • Small cell ramp-up is below expectations, primarily due to deployment difficulties and the relatively immature small cell ecosystem. We are working hard to address these issues by making our small cell platform easier to integrate and launch.
  • NFV has already made an impact on small cell products – Airvana announced its OneCell platform, which uses the Cloud RAN concept to enable large mobile enterprise small cell deployments. Clearly our progress in C-RAN has been noted by our friends and competitors.
  • Small Cell Forum announced the kickoff of a work item to study virtualized small cells, championed by Cisco, Intel, and Alcatel-Lucent.
  • Edge Cloud is gaining traction – our Rangeley reference platforms were welcomed by attendees. We had several productive discussions with credible software developers, especially in the mobile enterprise space, although more needs to be done to promote a healthy and vibrant Edge Cloud application pipeline.

 

Enough from me, let’s hear what Intel's Uri Elzur, our SDN System Architecture Director, has to say about this show. Also of interest are Telecom TV interviews about this topic with the CTO of Telefonica UK and the CMO of SpiderCloud Wireless.

Stu Goldstein is a Market Development Manager in the Communications and Storage Infrastructure Group at Intel

 

What a great way to highlight what Intel® Xeon® Processors are capable of:  the launch of the VMAX3 Enterprise Data Service Platform.

Scaling up and out isn't easy, but the VMAX3 makes it look so! Its Dynamic Virtual Matrix architecture supports up to 384 Intel Xeon processor cores, efficiently managing hundreds of ports and terabytes of pooled cache. VMAX3 works seamlessly with the Intel Xeon processor's multi-core architecture to run data and application services that would ordinarily be external to the CPU complex. By architecting a solution that takes full advantage of core count, flash, and cache technology, EMC has created a product that knows how to consolidate the largest workloads while keeping capacity costs, transaction costs, and cost per virtual machine considerably lower than in the past.

 

So what is so special about Intel Xeon processors that makes this all possible? What has EMC taken advantage of to create a product that can dynamically apportion host access, data services, and storage resources to meet the application service levels critical to next-generation hybrid cloud deployments?

 

It isn't all about the Intel® Xeon® processor core architecture, which at times gets all the attention; the "uncore" functions that support the cores have also been put to good use by the VMAX3. These are functions of the CPU that are not in the core but are essential to core performance. Several architectural enhancements were made to achieve significant performance gains over prior generations.

 

Improvements to memory bandwidth are considerable. A three-ring interconnect, building on prior QPI architectures, enables low latency and high throughput by establishing more efficient pathways between the cores and the rest of the CPU complex. It also accommodates a second "deep buffering" Home Agent/Memory Controller for increased memory bandwidth efficiency, allowing up to 512 memory transactions in flight. This is particularly important for meeting the throughput requirements of large-scale systems like the VMAX3. It equates to a 60% memory bandwidth improvement over the prior generation, which is a big deal, especially considering that I/O bandwidth doubles as well! The CPU is now plumbed to handle faster I/O via the upgrade to PCIe Gen3, and more I/O through a doubling of the number of PCIe lanes to 80 per two-socket CPU complex. VMAX3 takes advantage of this increased connectivity to high-speed front-end servers and back-end storage I/O devices.

 

Changes were also made to the coherence protocol, putting in place an in-memory directory that is used across all multi-socket systems. This approach introduces a speculative snooping mechanism (Opportunistic Snoop Broadcast) to reduce directory overhead and improve latency in dual- and multi-processor (DP/MP) systems. In addition, an I/O Directory Cache has been implemented to offset the directory overhead for remote I/O accesses.

 

Any discussion of the Intel® Xeon® processor can't end without mentioning that power has also been optimized. The improved performance fits within the same power envelope as the prior generation, and idle power has been significantly reduced! This means that compute has more than doubled within the same power envelope.

 

I’m excited to see what new usage models the VMAX3 enables!  Let me know how you’ve been able to take advantage of the new capabilities by posting your reply.

In June, Intel® Chip Chat finished archiving livecast episodes from Mobile World Congress with an episode from SpiderCloud on delivering scalable cellular and Wi-Fi coverage to enterprises. Also this month: episodes on datacenter storage and on disrupting the data center at Intel, as well as an episode on using supercomputing to simulate earthquakes. This month, the Digital Nibbles Podcast pulled one last "best of" episode from the archive, with cloud leader Ben Kepes and attorney Deborah Salons on how the Internet of Things could affect our privacy. In our second episode, Michael Crandell of RightScale discussed the cloud environment and Dave McCrory of Basho talked about Data Gravity: the idea that as more data is created, more applications will be attracted to it. If you have a topic you'd like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel® Chip Chat:

 

  • Live from MWC: Providing Enterprises Reliable Cell Coverage – Intel® Chip Chat episode 317: In this archive of a livecast, Art King, the Director of Enterprise Services & Technologies at SpiderCloud Wireless, stops by to talk about providing a Small Cell managed services platform to medium to large enterprises. This allows mobile operators to deliver scalable cellular and Wi-Fi coverage, capacity and cloud services wherever there's a LAN, bringing intelligence to the edge of the network. For more information, visit www.spidercloud.com.

 

  • Datacenter IO Storage Transition – Intel® Chip Chat episode 318: Lynn Comp (@comp_lynn), the Director of Datacenter Solutions and Technologies Initiatives at Intel, is back on Chip Chat to give a software defined infrastructure recap and then talk about datacenter storage and balancing the placement of data with how often it needs to be accessed. Intel® Solid State Drives offer better performance than traditional hard drives, and when coupled with NVM Express can dramatically reduce latency for certain workloads. For more information, visit www.intel.com/itcenter.

 

  • A Vision for Disrupting the Datacenter – Intel® Chip Chat episode 319: Raejeanne Skillern (@RaejeanneS), frequent Chip Chat guest and General Manager of Cloud Service Providers for Intel, stops by to talk about the digital services economy and to lay out Intel’s vision for disrupting the datacenter. The four pillars of the newly architected data center are silicon innovation (think customized silicon), moving away from the box model (pools of compute/storage/network resources with an orchestration layer), expedited service delivery, and monetizing data (realizing insight and value). For more information, visit www.intel.com/itcenter.

 

  • Using Supercomputing to Simulate Earthquakes – Intel® Chip Chat episode 320: Dr. Alexander Heinecke from the Parallel Computing Lab at Intel chats about his PRACE ISC award-winning research into earthquake simulations. The team of computer scientists, mathematicians, and geophysicists simulated vibrations comparable to an earthquake inside the Merapi volcano on the island of Java. They ran the SeisSol simulation software on the SuperMUC high performance computer, which executed 1.09 quadrillion floating point operations per second using all 147,456 processor cores. Research like this, on even more powerful supercomputers, will enable more detailed simulations in less time, leading to deeper insights and scientific breakthroughs. For more information, visit www.isc-events.com/isc14/ and www.intel.com/hpc.

 

Digital Nibbles Podcast:

 

  • Best of DNP: The Connected Future and Privacy – Digital Nibbles episode 59: With Allyson on sabbatical and Ruv on the road, we're pulling a couple of our "best of" episodes from the archive. This week we're re-posting an episode with two former guests of the show. First up, Ben Kepes (@benkepes), a cloud thought leader, stops by to cover the growth of enterprise cloud, the Cloud 2020 Summit he's currently organizing, and what "The Internet of Things" means for the future. Then Deborah Salons (@dsalons), an attorney specializing in regulation and privacy, weighs in on whether you can ever truly be private in the cloud.

 

  • Cloud Orchestration and Data Gravity – Digital Nibbles Podcast episode 60: Allyson and Reuven are finally reunited this week for a new episode with a couple of great guests. First up, Michael Crandell (@michaelcrandell), the CEO of RightScale, stops by to talk about cloud management in the cloud environment and bridging services (servers/network/storage) to run applications. He also weighs in on the PaaS vs. IaaS debate. Then Dave McCrory (@mccrory), the CTO of Basho, discusses distributed database technology and the concept of Data Gravity – thinking about data as if it were a planet that builds mass and attracts additional services and applications. When data is large enough, it's virtually impossible to move.

This article originally appeared on ChannelProNetwork


 

IT pros are using built-in server features to speed response time and reduce total cost of ownership. By Jeff Klaus.

 

Compared with their enterprise counterparts, IT teams supporting SMBs are typically more constrained by budget and headcount when asked to innovate around new opportunities. This tension between doing additional, innovative work and having fewer resources requires SMB teams to be smart, agile, and knowledgeable.

 

So SMBs are following in the footsteps of enterprises by fully leveraging built-in server features that speed response time and reduce total cost of ownership. Two of these features, remote management and energy management, contribute to extended server lifespans and minimize business disruptions due to server failures. Small IT organizations that take advantage of these features can maximize the value of their servers while freeing time to focus on supporting services and revenue.

 

Remote KVM Capabilities


Decades have passed since the first keyboard, video, and mouse (KVM) switches were invented. Even so, many IT teams cannot afford the dedicated hardware. A KVM overlay network also raises energy costs and increases complexity. Software KVM solutions have helped smaller organizations centralize support, but such software has traditionally been bundled into best-in-class consoles and platforms that do not fit SMB budgets.

 

Energy Management

Even SMBs without a data center can benefit from real-time server temperature data. IT can identify hot spots before they lead to server or rack failures or shorten asset lifespans. Since most SMBs cannot afford the same level of redundancy as enterprise data centers, proactive temperature monitoring and automated threshold alerts can avoid the service outages that translate into lost revenue.
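To make that concrete, below is a minimal sketch of threshold-based temperature alerting, assuming the server exposes sensors through the common ipmitool utility; the threshold value, the sensor parsing, and the alert hook are illustrative placeholders rather than any particular vendor's implementation.

```python
#!/usr/bin/env python3
"""Minimal sketch: poll server temperature sensors and alert on a threshold.

Assumes sensors are readable via ipmitool (common, but not universal);
the threshold and the alert action are hypothetical placeholders.
"""
import subprocess

TEMP_THRESHOLD_C = 75  # hypothetical alert threshold, degrees Celsius


def read_temperatures():
    """Parse `ipmitool sdr type Temperature` into (sensor, celsius) pairs."""
    out = subprocess.run(
        ["ipmitool", "sdr", "type", "Temperature"],
        capture_output=True, text=True, check=True,
    ).stdout
    readings = []
    for line in out.splitlines():
        # Typical row: "CPU1 Temp | 31h | ok | 3.1 | 45 degrees C"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) == 5 and "degrees C" in fields[4]:
            readings.append((fields[0], float(fields[4].split()[0])))
    return readings


def alert(sensor, celsius):
    # Placeholder: wire this up to email, SNMP traps, or a ticketing system.
    print(f"ALERT: {sensor} at {celsius:.0f} C exceeds {TEMP_THRESHOLD_C} C")


if __name__ == "__main__":
    for sensor, celsius in read_temperatures():
        if celsius >= TEMP_THRESHOLD_C:
            alert(sensor, celsius)
```

Run from cron every few minutes, even a script this small gives a lights-out server the kind of proactive thermal watchdog described above.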

 

SMBs with the most limited IT resources can benefit the most from solutions that leverage this server intelligence. The resulting productivity boosts and operating cost savings translate to very short ROI payback periods for the best-in-class offerings.

Graph analysis can help visualize insights from web content, call logs, social media and the Internet of Things. Watch as #TechTim uses graph analytics to chart his LinkedIn* contacts. Go to inmaps.linkedinlabs.com to try it yourself!
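For readers who want to experiment beyond InMaps, here is a toy sketch of the same idea using the open-source networkx library; the contacts and connections below are invented purely for illustration.

```python
# Toy contact-graph analysis in the spirit of the InMaps demo above.
# The names and edges are made up; swap in your own exported contacts.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Ana", "Bo"), ("Ana", "Cy"), ("Bo", "Cy"),      # one tight cluster
    ("Dee", "Eli"), ("Dee", "Fay"), ("Eli", "Fay"),  # a second cluster
    ("Cy", "Dee"),                                   # the lone bridge between them
])

# Degree centrality surfaces the best-connected contacts...
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])

# ...while betweenness centrality surfaces the "bridges" joining communities.
print(sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])[:2])
```

The clusters and bridge nodes these metrics pick out are exactly the patterns an InMaps-style visualization makes visible at a glance.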

 

Read more on Tim's blogs on the Big Data Outpost. Follow Tim on Twitter @TimIntel.

Intel took another huge step in demonstrating its unswerving commitment to High Performance Computing (HPC) leadership when DCG Vice President Raj Hazra delivered an important special session presentation at the International Supercomputing Conference, taking place in Leipzig, Germany.

 

Addressing a large share of the approximately 2,500 attendees at this annual event, Hazra painted an emerging picture of HPC's direction, discussing new business models, new usage models, and new access for HPC applications, with examples such as 3D printing and HPC cloud services.

 

Intel has been keeping a keen eye on the HPC market’s evolution and anticipating a technically challenging landscape of changing requirements. This announcement indicates that years of development work are now starting to come to fruition – with much more to come.

 

Hazra's keynote, which spoke to what Intel calls the "Technical Computing Transformation," ran in parallel with a significant announcement titled "Intel Re-architects the Fundamental Building Block for High-Performance Computing."

 

The story included highlights of newly announced features and capabilities of the Knights Landing processor, along with the announcement of Intel's next-generation fabric architecture, Intel® Omni Scale Fabric.

 

The newly branded Intel Omni Scale Fabric is an end-to-end interconnect optimized for fast data transfers, reduced latencies, and higher efficiency. Initially available as discrete components in 2015, the Omni Scale Fabric will also be integrated into next-generation Intel Xeon Phi processors (code-named Knights Landing) and future 14nm Intel® Xeon® processors.

 

There is already tremendous interest in Knights Landing, which Hazra confirmed will be available as a standalone processor mounted directly on the motherboard socket in addition to the PCIe-based card option. The socketed option removes programming complexities and bandwidth bottlenecks of data transfer over PCIe, common in GPU and accelerator solutions.

 

Hazra also echoed the latest information from the TOP500* list published the same day: Intel continues to lead in the HPC segment, with 85 percent of all supercomputers on the latest list powered by Intel Xeon processors.

 

More information on Intel’s announcement can be found at the Intel Newsroom.

 


If you are planning on attending Supercomputing 2014 (#SC14) in New Orleans, November 17-20, 2014, you won’t want to miss the second annual Intel® Parallel Universe Computing Challenge (#PUCC14).

 

The highly interactive and entertaining event, which will take place in Intel’s booth #1315 in the SC14 exhibit hall, is much like a fast-paced television trivia game show that also incorporates a parallel computing code challenge to raise awareness of the benefits of code modernization.

 


Winners of last year's challenge (Gaussian Elimination Squad from Germany) and the crowd watching the final match

 

Similar to the inaugural event in 2013, this year’s Intel Parallel Universe Computing Challenge will pit eight teams against each other in a single elimination tournament for a prize that will be awarded to charity.

 

Teams of four members will compete in a first-round, rapid-fire challenge with technical computing trivia questions. The second round will be a parallel code optimization challenge in which contestants examine a piece of code that has been deconstructed from its optimized, parallel version. As a team applies the code changes it believes will improve overall performance, the audience will be able to watch the results on overhead screens. Points will be awarded based on how quickly the team executes and how much it improves code performance.
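For a flavor of what the optimization round asks contestants to do, here is a deliberately tiny Python/NumPy sketch – not the actual contest code, which targets compiled HPC kernels – contrasting a "de-modernized" serial loop with its vectorized form:

```python
# Toy "code modernization": the same dot product written as a scalar loop
# (the de-optimized handout) and as a single vectorized call (the fix).
import time
import numpy as np

def dot_scalar(a, b):
    """De-optimized baseline: one multiply-add per interpreted loop iteration."""
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

def dot_vectorized(a, b):
    """Modernized version: one vectorized library call over the whole array."""
    return float(np.dot(a, b))

a = np.random.rand(2_000_000)
b = np.random.rand(2_000_000)

for fn in (dot_scalar, dot_vectorized):
    t0 = time.perf_counter()
    result = fn(a, b)
    print(f"{fn.__name__}: {result:.2f} in {time.perf_counter() - t0:.3f}s")
```

The contest works the same way at a larger scale: restore the vectorization and parallelism that has been stripped out, and watch the runtime drop.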

 

“SC14 is thrilled to have the Parallel Universe Computing Challenge come to New Orleans as part of the SC14 Exhibit Hall,” said Dona Crawford, chair emeritus of the SC and associate director for Computation at Lawrence Livermore National Laboratory. “This highly entertaining event will serve to educate our community about the rich legacy of high performance computing, and as a real-time demonstration of the value of code modernization.”

 

According to James Reinders, director and chief evangelist for Intel software tools and parallel programming, “It’s a fun way to educate the technical computing community on the importance of code modernization and the many benefits to be gained immediately – and on future systems.”

 

Information on how interested teams may submit their information for consideration can be found here.

 

I will be there as the host moderator just like last year. So plan now to be part of the experience!

 

Just look for the guy in the yellow blazer!

Graph analytics is a highly visual form of analysis that reveals patterns and relationships within data. See how data visualization can reveal critical insights to deliver real business value.
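As an illustrative sketch (using the open-source networkx and matplotlib libraries rather than any specific Intel tooling), a force-directed layout is often all it takes to make clusters and bridge nodes jump out of a graph:

```python
# Draw a small social network so its community structure is visible.
# karate_club_graph() is a classic sample dataset bundled with networkx.
import matplotlib.pyplot as plt
import networkx as nx

G = nx.karate_club_graph()

pos = nx.spring_layout(G, seed=42)           # force-directed layout pulls communities apart
size = [300 * (1 + G.degree(n)) for n in G]  # scale each node by how connected it is
nx.draw(G, pos, node_size=size, with_labels=True)
plt.savefig("graph.png")
```

Well-connected hubs render as large nodes, and the layout naturally separates the two factions in the data – the kind of at-a-glance insight described above.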

 

Read more on Tim's blogs on the Big Data Outpost. Follow Tim on Twitter @TimIntel.

For one week in June, Leipzig, Germany will be the gateway to a parallel universe as several thousand HPC nomads gather for the 29th annual International Supercomputing Conference (ISC’14).

 

Intel engineers, developers, performance specialists, and scientists will be out in force at ISC'14 as stakeholders of the global HPC community search for answers, seek wisdom, and share experiences on their never-ending quest for increased levels of computation in the world of parallel supercomputing.

 

So what’s different from last year at the ISC conference? Some would say not a lot. Same convention center, mostly the same participants, and many of the same discussions on architectures, accelerators, and programming environments.

 

And one other thing that’s definitely the same is Intel’s unswerving commitment to HPC.

 

But what is different from last year's conference? Led by Intel Vice Presidents Dr. Rajeeb (Raj) Hazra (Data Center Group, General Manager of the Technical Computing Group) and Charles Wuischpard (Data Center Group, General Manager, Workstation and High Performance Computing), the Intel HPC team seems to be supercharged: knocking down wins, talking about accomplishments with Xeon Phi, dealing with overwhelming interest in the True Scale fabric, putting early deals in place for Knights Landing, working with a growing number of Intel Parallel Computing Centers, and leading an unprecedented campaign for code modernization.

 

ISC’14 should be highly educational and quite entertaining. If you want to keep up with where HPC is going, be sure to catch as many of the Intel presentations as you can fit into your calendar. They’ll be pretty hard to miss.

 

Starting Sunday, June 22nd, Intel speakers will take to the podiums to cover topics ranging from software and application discussions to various forward-looking, exascale-related topics. Over the course of five days, exploration of the parallel universe will be guided by seven Intel speakers, five partner demos, and more than eleven organizations networking from the collaboration hubs at Intel's booth.

 

Demos at Intel’s booth (booth #540) will include the following discussions on accelerating discovery to reinforce that parallel is indeed the path forward:

 

  • Overcoming Bacterial Diseases – Max Planck Institute
  • Unleashing Petascale Performance for Earthquake Physics – TU München
  • Enabling Scientific & Engineering Breakthroughs – Fujitsu
  • Enabling High Impact Scientific Discovery & Engineering – Julich / BSC / Coria
  • Building Safer Transportation through Simulation-based Design – Altair

 

Monday afternoon, June 23rd, at 13:00 h, Intel Vice President Charles Wuischpard will represent the Intel team at the ISC Vendor Showdown, and later that evening, at 18:15 h, a widely recognized name throughout the HPC community, Intel’s Raj Hazra, will deliver his keynote, “Accelerating Insights in the Technical Computing Transformation.”


Watch for some very exciting announcements from Intel coming from the International Supercomputing Conference in Leipzig, Germany (ISC’14), June 22-26, 2014. To stay up to date on all the show news, follow IntelHPC on Twitter and visit our Storify page.

Today I had the pleasure of joining Tom Krazit on stage at the Gigaom Structure’14 conference to share Intel’s vision of the data center in support of the growth of the digital services economy.  We are in the midst of a bold industry transformation as IT evolves from supporting the business to being the business. This transformation and the move to cloud computing calls into question many of the fundamental principles of data center architecture.  Two significant changes are the move to software defined infrastructure (SDI) and the move to scale-out, distributed applications. The speed of application development and deployment of new services is rapid.   The infrastructure must keep pace.   It must move from statically configured to dynamic, from manually operated to fully automated, and from fixed function to open standard.

 

As we have many times in our history, Intel is embracing this transformation and driving  technology innovation to re-architect today’s data centers for the future.

 

As a first step, we committed to delivering the best technology for all data center workloads – spanning servers, network, and storage. We began by augmenting our industry-leading, general-purpose Xeon processors with workload-optimized products, such as our Atom SoC processors for lightweight web hosting, Xeon Phi for highly parallel processing, and the new Xeon D SoC series for hyperscale environments.

 

I have shared details of how we extended our products even beyond these workload-optimized solutions, delivering 15 custom products last year to meet specific needs of end customers, including eBay and Facebook. Today I disclosed that our development pipeline for custom solutions is growing, with more than twice as many products planned for 2014.

 

But what we find even more exciting is our next innovation in processor design that can dramatically increase application performance through fully custom accelerators.  We are integrating our industry leading Xeon processor with a coherent FPGA in a single package, socket compatible to our standard Xeon E5 processor offerings.

 

Why are we excited by this announcement? The FPGA provides our customers a programmable, high-performance, coherent acceleration capability to turbo-charge their critical algorithms. And with down-the-wire reprogrammability, the algorithms can be changed as new workloads emerge and compute demands fluctuate. Based on industry benchmarks, FPGA-based accelerators can deliver >10X performance gains. By integrating the FPGA with the Xeon processor, we estimate that customers will see an additional 2X performance gain thanks to the low-latency, coherent interface.

 

Our new Xeon+FPGA solution provides yet another customized option, one more tool for customers to use to improve their critical data center metric of “Performance/TCO”.   It highlights our commitment to delivering the very best solutions across all data center workloads and our passion to lead in the transformation of the industry to cloud services.

 

Our innovation extends beyond silicon as we collaborate with industry partners to accelerate the move to SDI. Our early work with Facebook, as part of OCP, to define a rack-scale architecture (RSA) is gaining momentum with cloud service providers, telco service providers, and hosters. Today's data center optimization point is the single node, and scale comes from deploying more nodes. That optimization point is moving to the rack – multi-node. The move to pooled, multi-node solutions, configurable and composable by software, will deliver the next step function in datacenter efficiency. And we will lead with the distributed compute, memory, and switching technology required to realize the RSA reference architecture.

 

Rapid adopters of SDI are the network operators. With the continued rise in network capacity demand and the desire to rapidly deploy and monetize new services, the move to network function virtualization (NFV) is compelling. As a member of the ETSI NFV forum, Intel contributed to the creation of the NFV spec and the development of its nine use cases. There are now 20 PoCs in flight with carriers around the world, running on Intel architecture, up from nine just four months ago. Historically it has taken years for service providers to take new services from initial concept to deployment. With the decoupling of hardware and software through virtualization, the industry expectation is that service deployment time can shrink from years to months and eventually to minutes. Telefonica has announced that it will convert 30 percent of its network to NFV by 2016. With over 10 years of technology innovation in server virtualization and a legacy of building technology ecosystems, we are leading the way in the network transformation to SDI.

 

Finally, we are investing deeply in unlocking the vast amounts of structured and unstructured data, working with industry leaders to speed delivery of modern analytics solutions. Our recent announcement of our collaboration and partnership with Cloudera is a clear example of our objective: enabling the economic value enterprises can obtain through big data insight. We are committed to accelerating enterprise feature integration, easing deployment, and delivering optimized solutions for data analytics.

 

As we have seen many times during our history, transformation creates both opportunities and threats. Anyone in our industry trying to cling to the legacy world will be left behind. We see the move to cloud services and software defined infrastructure as a tremendous opportunity, and we are seizing it. We are investing billions of dollars every year in data center R&D to ensure we meet the evolving needs of customers. We invite others in the industry to join us in delivering on the vision of the data services economy.

Structure is one of my favorite conferences of the year simply because the leading drivers of data center innovation come together for substantive discussion of how customer use of technology is changing. This year, Intel shared its latest assessment of today's data centers' ability to meet the emergence of what has been coined the data services economy, and why today's data center models are insufficient. We charted a new vision of data center disruption and committed to lead, starting with ourselves, the disruption needed to plan for next-generation data centers.


 

As the general manager of the cloud service provider business, my goal is to make it easier, cheaper, and more efficient for CSPs to build new infrastructure and services. The foundation of our work is ensuring that we deliver industry-leading products to market that address an expanding set of data center workloads and unique custom use cases. Our work also extends to helping providers differentiate services based on underlying infrastructure capabilities, and to creating transparency in service selection and new revenue streams through our Powered by Intel Cloud Technology program.

 

This week at Structure we announced SoftLayer, an IBM company and one of the world's leading IaaS providers, as one of the newest members of the program. Our collaboration aims to expedite cloud service delivery by giving customers knowledge of the underlying Intel platform characteristics running their services, providing insight into workload performance, along with access to consistent hardware infrastructure for recreating predictable workload performance at any of SoftLayer's datacenters around the world in minutes.

 

We have worked with other providers to optimize the use of our platform features for their targeted workloads, leveraging the underlying infrastructure capabilities offered by Intel architecture. For instance, Intel Cloud Technology partner CenturyLink offers Hyperscale high-performance server instances designed for web-scale workloads, big data, and cloud-native applications. These instances use the latest Xeon E5-2600 v2 processors with Intel SSDs to deliver a superior experience for web-scale architectures built on Couchbase, MongoDB, and other NoSQL technologies. Users of the service consistently see performance at or above 15,000 I/O operations per second (IOPS), significantly better than CenturyLink's standard instances, resulting in measurably faster time to results.
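As a rough way to sanity-check an IOPS figure like that on an instance you rent, here is an illustrative Python sketch. A serious measurement would use a dedicated tool such as fio with direct I/O; this simplified version reads through the OS page cache and will overstate results once the cache warms, so treat it as a ballpark only.

```python
# Rough random-read IOPS probe (POSIX-only; illustrative, not rigorous).
# The file path and sizes are placeholders for the volume under test.
import os
import random
import time

PATH = "testfile.bin"
FILE_SIZE = 256 * 1024 * 1024   # 256 MiB test file
BLOCK = 4096                    # 4 KiB reads, the usual IOPS block size
N_READS = 50_000

# Create the test file once.
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(FILE_SIZE))

fd = os.open(PATH, os.O_RDONLY)
offsets = [random.randrange(FILE_SIZE // BLOCK) * BLOCK for _ in range(N_READS)]

t0 = time.perf_counter()
for off in offsets:
    os.pread(fd, BLOCK, off)    # one random 4 KiB read
elapsed = time.perf_counter() - t0
os.close(fd)

print(f"{N_READS / elapsed:,.0f} random-read IOPS (page-cache inflated)")
```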

 

SoftLayer and CenturyLink are just two examples of Cloud providers in the program offering workload-optimized service delivery that yield excellent value to enterprises and differentiation of services to providers.

 

We are happy to expand our global reach with nine new service providers who recently joined the program: Atea (Norway), beyond.pl (Poland), Chief Telecom (Taiwan), CloudDynamics (US), IDC Frontier (Japan), iLand (US), iWeb (US), SoftLayer, an IBM company (US), and Techgate PL (UK). Since January the program has grown to 35 participants, representing about 65% of IaaS market share.

 

We plan to discuss additional collaborations in more detail later this summer.  In the meantime, connect with us for more information about the Powered by Intel Cloud Technology program.

Hadoop* was just the beginning. Learn how today’s analytic solutions can filter and analyze massive data flows in near real-time to reveal meaningful intelligence and empower data-driven decision-making.

For a quick overview of Big Data, see: http://intel.ly/1id8p9S. Read more on Tim's blogs on the Big Data Outpost. Follow Tim on Twitter @TimIntel.

By Jonathan Donaldson, GM Software Defined Infrastructure within Intel’s Data Center Group

 

Intel and HP recently announced a collaboration on OpenStack, critical to the foundation of HP Helion.

 

Today's IT infrastructure is complicated and costly: applications are deployed in silos, legacy infrastructure and practices remain, and CIOs find themselves spending the majority of their budget just to keep IT running. While server virtualization is widely understood and deployed today, IT still finds virtualized servers at times less than 50% utilized. Compounding these challenges are the time, sometimes measured in weeks, to provision the network for a new service and the sheer amount of storage required. There must be a better way, and that is what HP and Intel discussed this week at the HP Discover event in Las Vegas.

 

To be successful in today's global, competitive environment, businesses need to view IT as a close partner, and together they must work to deliver new ways to reach and retain customers and suppliers while fostering a more productive workforce. IT needs to find ways to shift budget to innovation, become a broker of services, and over time evolve from supporting the business to being the business.

 

IT transformation to a cloud infrastructure is one of the biggest imperatives today, and it couldn't come at a better time. Cloud computing – when IT's virtualized infrastructure becomes fully automated and orchestrated – will help eliminate the inefficiencies and manual processes that exist today. Intel remains committed to working with the broad industry to accelerate the adoption of cloud solutions, and I am pleased to announce with HP this week our collaboration on OpenStack, which is critical to the foundation of HP Helion. This will include engineering efforts to advance OpenStack enterprise readiness, focusing on availability, security, and ease of deployment.

 

Our companies have a technology development roadmap to pool resources and accelerate OpenStack maturity. Initially, we will collaborate to improve code quality and resiliency in all core projects and to augment the capabilities for live migration and hardware meta-tagging. Our efforts will then be fed back into the OpenStack trunk to benefit the entire ecosystem.

 

Intel will also be a part of HP's Helion network, working with like-minded partners to deliver open, standards-based hybrid cloud services to meet the full range of enterprises' in-country and cross-border requirements. Whether IT chooses to deploy on-premises, utilize a public cloud service, or implement a hybrid model, we expect the benefits to be substantial. With an OpenStack-based solution, IT will be able to deliver the varying workloads critical to creating business value:

 

  •   Innovative systems – enable workloads incubated in an agile environment to deliver new capabilities that scale quickly and reach market fast, providing a competitive edge.
  •   Competitive systems – maximize workloads at the heart of the enterprise's market differentiation, with systems and configurations optimized to meet workload-specific policies.
  •   Legacy systems – optimize legacy workloads that are critical for business continuity and allow them to run as efficiently as possible.

 

HP and Intel have a long history of collaboration, and our continued partnership will help deliver new capabilities to assist IT on its journey to becoming a services broker and "being the business".

 

For more, follow Jonathan on Twitter @jdonalds.

IT Center

The Titans of HP Discover

Posted by IT Center Jun 12, 2014

Three of the biggest names in tech – HP President and CEO Meg Whitman, Intel CEO Brian Krzanich, and Microsoft CEO Satya Nadella – sat down with New York Times columnist and Pulitzer Prize winner Thomas Friedman for the HP Discover keynote on Wednesday, June 11th, to discuss the future of technological innovation. Whitman commented that it was the first time all three had been together in a public forum.

 

Whitman started the keynote by showing how the consumer and commercial sides of the business are merging, before showing off what is, for now, called "The Machine." It aims to bring a new computing architecture to market by the end of the decade, built on non-volatile or "universal memory," allowing storage capacity in the realm of 100TB on a device as small as a smartphone. The long-term plan has many components too detailed to outline in this post, but the processing power of the future is no doubt impressive.

 

After the new product tease, Friedman took the stage and said that history will look back on the merger of globalization and the IT movement as the most important early-21st-century innovation. He sees humanity going from a connected to a hyper-connected world, and from an inter-connected society to an inter-dependent one. Friedman's prediction follows the tone of his 2005 book The World Is Flat, but he quickly noted how much has changed in the years since the book's publication. Facebook, for example, wasn't yet a ubiquitous part of our lives. Trending IT topics such as big data, cloud computing, and the Internet of Things (IoT) weren't even on the map.

 

IoT dominated much of the discussion and highlighted just how important collaboration between the three companies will continue to be. "Microsoft and Intel are critical partners for HP," said Whitman. "No company is an island." Krzanich echoed these sentiments, noting that today Intel works much more closely with partners to understand how software works in conjunction with hardware at every level, instead of simply shipping microchips out to be put into PCs.

 

"IoT makes the world flatter," said Krzanich, in a callback to Friedman's book. Krzanich talked about how the advancements of tablets and smartphones will seem minuscule compared to the exciting new things that will come out of the IoT over the next 5-10 years. "The unknown unknowns will be truly Earth shattering," he said, predicting that when all the various data points are connected, we'll be able to see what is harming us or slowing us down that we didn't even know about. "We can solve real problems when we pool this data together," he said.

 

Though the future of IoT is bright, it’s not a completely rosy path. “The existing way we do compute won’t scale when it comes to IoT,” remarked Whitman.

 

Friedman asked how each company saw it fitting into the explosion of the IoT, and Intel’s focus is sharp. “We want to make everything smart,” said Krzanich. “We are trying to find silicon that can go into any device or item that you can think of to raise its intelligence level.”

 

Nadella said he saw Microsoft and society as a whole “moving into the more personal computing era,” talking about how all the devices that collect data will then send that data to be merged in the cloud to be analyzed in order to give relevant feedback to users.

 

When Friedman asked what the next big disruption would be, Krzanich spoke about Intel's ability to scan his body in less than 2 minutes and send the scan to a 3D printer. He was visibly excited when saying that in the future, "everyone can become a manufacturer."

 

Whitman noted a need for a change in energy efficiency in order for disruptions and innovations to continue, stating that if cloud computing as a whole were measured like a country, it would rank fifth in overall energy consumption.

 

Friedman’s final question was about how each company saw their environmental obligations. Whitman considered it to be of paramount importance, saying HP’s obligations are “not only how we run our operations, but how we save space and power.” Krzanich agreed, saying Intel is committed to finding out how to do “more computing with less power.” He hopes Intel is a company with a “large footprint that leaves small footsteps in the community we’re in.”

 

Click here to watch the full keynote.

For the Atlanta 2014 OpenStack Summit, an idea was born: let's take the tough problems we all face with OpenStack and put on a fun but challenging competition. This was a last-minute idea, which meant last-minute rules, but it all came together quite well, and we are now sure we have cemented the idea of holding a "Ruler of the Stack" competition at every Summit. This time, Intel brought in the gear, used our booth, and provided the main prize, and Mirantis was gracious enough to throw in training credits for all participants plus the prize for the slowest competitor, our "Back to School" prize. Next time we want this to be a community event... this is in reality for all of us in the OpenStack community, so let's make sure we join together and make this massive in Paris. OK... now for what happened in Atlanta.

 

As I noted, the rules were somewhat last-minute, and I left out a few things that would have made this harder... First of all, while we provided 8 nodes to each competitor, most realized that the rules didn't actually require building more than one node – oops #1. This did lead to some amazing record times, and the method can be used to scale out across more nodes, but we are definitely going to clarify the usage of the entire cluster in the future. If you want to read the original rules, take a look at the blog I posted on Ruler of the Stack the evening before the event started.

 

The competition started out a little slow, partially because it was hard to find (again, next time we need to put this in the main walking area, with flashing lights and cameras, to make it more of an event). However, once it kicked into gear, the booth was busy through most of the open floor time.

 

Our Ruler of the Stack for Atlanta 2014 is Dirk of SUSE. He was awesome enough to post all of the details and share his exact OpenStack deployment method so you can try it at home. Over the duration of the summit he continued to optimize his method, ending with an astonishing 3:14 to get OpenStack online and launching VMs. Impressive. The runner-up also did quite well: Ryan Moe of Mirantis used three of the nodes in the cluster to get Heat online and VMs live-migrating. Per the rules we deducted time for both of those feats, which led to a total adjusted time of 7:49.

 

As promised, we are giving Mirantis training to all of the contestants who finished. And for the slowest finishing time (still a great result), we are giving our "Back to School" prize, a full OpenStack class from Mirantis, to Navdeep.

 

All of the contenders who finished with functioning stacks are listed in the table at the end.

 

Dirk walked away with a jaw-dropping 15" MacBook Pro with Retina display to continue his development of OpenStack and to prepare for Ruler of the Stack Paris 2014.

 

For Paris 2014, we want this to be a community event by and for all of us involved in OpenStack. This is not a vendor competition; this is for the people of OpenStack. Therefore I am putting out a call: want to be involved for Paris 2014? What tough problem do you want to showcase through the competition? Want to provide prizes for the participants? Get involved, as this is going to keep growing and become an even bigger blast!

 

We will post the instructions for the next competition at 9am local time on the first day of the event – so get ready to compete in Paris in November!

 

- Das

 

#RuletheStack

 

Time  | Name        | Comments
------|-------------|-------------------------------------------------------------------------
3:14  | Dirk        | SUSE Linux Enterprise Server 11 with the upcoming SUSE Cloud 4 (based on OpenStack Icehouse); used Kiwi to create a compressed, hybrid install image on a USB stick.
7:49  | Ryan        | Fuel server running on his laptop; 3 nodes with Heat and live-migrated VMs. Completed setup in 22:48; 5 mins deducted for VMs deployed by Heat and 10 mins deducted for live migration.
22:29 | James       | Installed Folsom, single node.
30:39 | Andrew      | Fuel running on his laptop; 3 nodes.
31:36 | Leandro     | Installed Ubuntu 14.04 from USB flash (took ~10 mins); single-node OpenStack setup using devstack with the repo cached locally.
39:27 | Allessandro |
44:40 | Navdeep     |

Honorable mention to those who couldn't complete the exercise for various reasons: Lukasz, Nicholas, Prashant, Cameron, Jeffery, and Anthony.
