
The Data Stack


The international melting pot of Vancouver, BC provides a perfect backdrop for the OpenStack Summit, a semi-annual get-together of the developer community driving the future of open source in the data center. After all, it takes a melting pot of community engagement to build “the open source operating system for the cloud”. I came to Vancouver to get the latest from that community. This week’s conference has provided an excellent state of the union: where OpenStack stands on delivering its vision to be the operating system for the cloud, how the industry and user community are innovating on top of OpenStack to deliver the ruggedness required for enterprise and telco deployments, and where gaps still exist between industry vision and deployment reality. That state of the union is delivered across Summit keynotes, hundreds of track sessions, demos, and endless meetings, meetups, and other social receptions.


 

Intel’s investment in OpenStack reflects the importance of open source software innovation to deliver our vision of Software Defined Infrastructure. Our work spans our core engagement as a leader in the OpenStack Foundation, projects focused on ensuring that software takes full advantage of Intel platform features to drive higher levels of security, reliability, and performance, and collaborations aimed at ensuring that the demos of today become the mainstream deployments of tomorrow.

 

So what’s new from Intel this week? Today, Intel announced Clear Containers, a project associated with Intel Clear Linux designed to ensure that container-based environments leverage Intel virtualization and security features to both improve speed of deployment and enable a hardware root of trust for container workloads. We also announced the beta delivery of Cloud Integrity Technology 3.0, our latest software aimed at delivering workload attestation across cloud environments, and showcased demos ranging from trusted VMs to intelligent workload scheduling to NFV workloads on trusted cloud architecture.


To learn more about Intel’s engagement in the OpenStack community, please check out a conversation with Jonathan Donaldson, and learn about Intel’s leadership in driving diversity in the data center as seen through the eyes of some leading OpenStack engineers.


Check back tomorrow to hear more about the latest ecosystem and user OpenStack innovation as well as my perspectives on some of the challenges ahead for industry prioritization.  I’d love to hear from you about your perspective on OpenStack and open source in the data center. Continue the conversation here, or reach out @techallyson.

Cloud computing offers what every business wants: the ability to respond instantly to business needs. It also offers what every business fears: loss of control and, potentially, loss of the data and processes that enable the business to work. Our announcement at the OpenStack Summit of Intel® Cloud Integrity Technology 3.0 puts much of that control and assurance back in the hands of enterprises and government agencies that rely on the cloud.

 

Through server virtualization and cloud management software like OpenStack, cloud computing lets you instantly, even automatically, spin up virtual machines and application instances as needed. In hybrid clouds, you can supplement capacity in your own data centers by "bursting" capacity from public cloud service providers to meet unanticipated demand. But this flexibility also brings risk and uncertainty. Where are the application instances actually running? Are they running on trusted servers whose BIOS, operating systems, hypervisors, and configurations have not been tampered with? To assure security, control, and compliance, you must be sure applications run in a trusted environment. That's what Intel Cloud Integrity Technology lets you do.

 

Intel Cloud Integrity Technology 3.0 is software that builds on security features of Intel® Xeon® processors to let you assure that applications running in the cloud run on trusted servers and virtual machines whose configurations have not been altered. Working with OpenStack, it ensures that when VMs are booted or migrated to new hardware, the integrity of virtualized and non-virtualized Intel x86 servers and workloads is verified remotely using Intel® Trusted Execution Technology (TXT) and Trusted Platform Module (TPM) technology on Intel Xeon processors. If this "remote attestation" finds discrepancies with the server, BIOS, or VM (suggesting the system may have been compromised by a cyber attack), the boot process can be halted. Otherwise, the application instance is launched in a verified, trusted environment spanning the hardware and the workload.

 

In addition to assuring the integrity of the workload, Cloud Integrity Technology 3.0 also enables confidentiality by encrypting the workload prior to instantiation and storing it securely using OpenStack Glance. An included key management system that you deploy on premises gives the tenant complete ownership and control of the keys used to encrypt and decrypt the workload.
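To make the tenant-owned-key idea concrete, here is a minimal, hypothetical sketch of client-side encryption before upload to Glance, written against the Python cryptography library and python-glanceclient. It is not the Cloud Integrity Technology implementation; the endpoint, credentials, and file names are placeholders.

```python
# Hypothetical sketch: encrypt a workload image with a tenant-owned key, then
# store only the ciphertext in OpenStack Glance. Illustrative of the concept;
# NOT the actual Intel Cloud Integrity Technology 3.0 implementation.
import glanceclient
from cryptography.fernet import Fernet
from keystoneauth1 import session
from keystoneauth1.identity import v3

# The tenant generates this key and keeps it in its own on-premises key manager.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("workload.qcow2", "rb") as f:        # placeholder image file
    encrypted_blob = cipher.encrypt(f.read())

# Authenticate against Keystone (all values are placeholders).
auth = v3.Password(auth_url="https://keystone.example:5000/v3",
                   username="tenant-admin", password="secret",
                   project_name="demo", user_domain_id="default",
                   project_domain_id="default")
glance = glanceclient.Client("2", session=session.Session(auth=auth))

# Register and upload the encrypted image; only the tenant's key manager
# can release the key needed to decrypt it at instantiation time.
image = glance.images.create(name="encrypted-workload",
                             disk_format="raw", container_format="bare")
glance.images.upload(image.id, encrypted_blob)
print("Stored encrypted image", image.id)
```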

 

Cloud Integrity Technology 3.0 builds on earlier releases to assure a full chain of trust from bare metal up through VMs. It also provides location controls to ensure workloads can only be instantiated in specific data centers or clouds. This helps address regulatory compliance requirements in some industries (such as PCI DSS and HIPAA) and geographical restrictions imposed by some countries.

 

What we announced at the OpenStack Summit is a beta version of Intel Cloud Integrity Technology 3.0. We'll be working to integrate with an initial set of cloud service providers and security vendor partners before we make the software generally available. And we'll submit extensions to OpenStack for Cloud Integrity Technology 3.0 later this year.

 

Cloud computing is letting businesses slash time to market for new products and services and respond quickly to competitors and market shifts. But to deliver the benefits promised, cloud service providers must assure tenants their workloads are running on trusted platforms and provide the visibility and control they need for business continuity and compliance.

 

Intel Xeon processors and Cloud Integrity Technology are enabling that. And with version 3.0, we're enabling it across the stack from the hardware through the workload. We're continuing to extend Cloud Integrity Technology to storage and networking workloads as well: storage controllers, SDN controllers, and virtual network functions like switches, evolved packet core elements, and security appliances. It's all about giving enterprises the tools they need to capture the full potential of cloud computing.

By Tony Dempsey


I’m here attending the OpenStack Summit in Vancouver, BC and wanted to find out more about OPNFV, a cross-industry initiative to develop a reference architecture that operators can use for their NFV deployments. Intel is a leading contributor to OPNFV, and I was keen to find out more, so I attended a special event being held as part of the conference.

 

Heather Kirksey (OPNFV Director) kicked off today’s event by describing what OPNFV is all about, including the history of why OPNFV was formed and an overview of the areas OPNFV is focused on. OPNFV is a carrier-grade, integrated, open source platform intended to accelerate the introduction of new NFV products and services. The initiative grew out of the ETSI NFV ISG, and its initial focus is on the NFVI layer.

 

OPNFV’s first release will be called Arno (naming is themed on rivers) and will include OpenStack, OpenDaylight, and Open vSwitch. No date for the release is available just yet, but it is thought to be soon. Notably, Arno is expected to be used in lab environments initially rather than in commercial deployments. High Availability (HA) will be part of the first release (supported on the control and deployment side). The plan is to make OpenStack itself Telco-Grade rather than creating a separate Telco-Grade fork of OpenStack. AT&T gave an example of how it plans to use the initial Arno release: bring it into the lab, add additional elements to it, and test for performance and security. AT&T sees the release very much as a means to uncover gaps in open source projects, help identify fixes, and upstream those fixes. OPNFV is committed to working with the upstream communities to ensure a good relationship. Down the road it might be possible for OPNFV releases to be deployed by service providers, but currently this is a development tool.

 

An overview of OPNFV’s Continuous Integration (CI) activities was given, along with a demo. The aim of the CI activity is to give fast feedback to developers in order to improve the rate at which software is developed. Chris Price (TSC Chair) spoke about requirements for the projects and working with upstream communities. According to Chris, OPNFV’s focus is working with the open source projects to define the issues, understand which open source community can likely solve the problem, work with that community to find a solution, and then upstream that solution. Mark Shuttleworth (founder of Canonical) gave an auto-scaling demo showing a live vIMS core (from Metaswitch) with CSCF auto-scaling running on top of Arno.

 

I will be on the lookout for more OPNFV news throughout the Summit to share. In the meantime, check out Intel Network Builders for more information on Intel’s support of OPNFV and solutions delivery from the networking ecosystem.

By Suzi Jewett, Diversity & Inclusion Manager, Data Center Group, Intel

 

I have the fantastic job of driving diversity and inclusion strategy for the Data Center Group at Intel. For me it is the perfect opportunity to align my skills, passions, and business imperatives in a full-time role. I have always had the skills and passions, but it was not until recently that the business imperative grew within the company to the point that we needed a full-time person in this role, and in many similar roles throughout Intel. Being a female mechanical engineer, I have always known I am one of the few, and at times that was awkward, but even I didn't know the business impact of not having diverse teams.


Over the last two to three years, the data on the bottom-line business results of having diverse people on teams and in leadership positions has become clear, and it provides overwhelming evidence that we can no longer be okay with flat or dwindling representation of diverse people on our teams. We also know that all employees have more passion for their work and are able to bring their whole selves to work when we have an inclusive environment. Therefore, we will not achieve the business imperatives we need to unless we embrace diverse backgrounds, experiences, and thoughts in our culture and in our every decision.

 

Within the Data Center Group one area that we recognize as well below where we need it to be is female participation in open source technologies. So, I decided that we should host a networking event for women at the OpenStack Summit this year and really start making our mark in increasing the number of women in the field.

 

Today I had my first opportunity to interact with people working in OpenStack at the Women of OpenStack event. We had a beautiful cruise around the Vancouver Harbor and then chatted the night away at Black + Blue Steakhouse. About 125 women attended, along with a handful of male allies (yeah!). The event was put on by the OpenStack Foundation and sponsored by Intel and IBM. The excitement and non-stop conversation of the women there was energizing to be a part of, and it was obvious that they loved having some kindred spirits to talk tech and talk life with. I was able to learn more about how OpenStack works, why it's important, and the passion of everyone in the room to work together to make it better. I learned that many of the companies design features together, meeting weekly and assigning ownership to divvy up the work needed to deliver a feature to the code. Being new to open source software, I was amazed that this is even possible, and excited at the same time to see the opportunity to really have diversity in our teams, because collaborative design can bring in a vast range of perspectives and create a better end product.

 


 

A month or so ago I got asked to help create a video to be used today to highlight the work Intel is doing in OpenStack and the importance to Intel and the industry of having women as contributors. The video was shown tonight along with a great video from IBM and got lots of applause and support throughout the venue as different Intel women appeared to talk about their experiences. Our Intel ‘stars’ were a hit and it was great to have them be recognized for their technical contributions to the code and leadership efforts for Women of OpenStack. What’s even more exciting is that this video will play at a keynote this week for all 5000 attendees to highlight what Intel is doing to foster inclusiveness and diversity in OpenStack!

 

By Mike Pearce, Ph.D., Intel Developer Evangelist for the IDZ Server Community

 

 

On May 5, 2015, Intel Corporation announced the release of its highly anticipated Intel® Xeon® processor E7 v3 family. A key focus of the new processor family is accelerating business insight and optimizing business operations—in healthcare, financial services, enterprise data center, and telecommunications environments—through real-time analytics. The new Xeon processor is a game-changer for organizations seeking better decision-making, improved operational efficiency, and a competitive edge.

 

The Intel Xeon processor E7 v3 family’s performance, memory capacity, and advanced reliability now make mainstream adoption of real-time analytics possible. The rise of the digital service economy and the recognized potential of "big data" open new opportunities for organizations to process, analyze, and extract real-time insights. The Intel Xeon processor E7 v3 family tames large volumes of data accumulated by cloud-based services, social media networks, and intelligent sensors, and enables data analytics insights, aided by optimized software solutions.

 

A key enhancement to the new processor family is its increased memory capacity (the industry's largest per socket [1]), enabling entire datasets to be analyzed directly in high-performance, low-latency memory rather than traditional disk-based storage. For software solutions running on and/or optimized for the new Xeon processor family, this means businesses can now obtain real-time analytics to accelerate decision-making—such as analyzing and reacting to complex global sales data in minutes, not hours. Retailers can personalize a customer’s shopping experience based on real-time activity, so they can capitalize on opportunities to up-sell and cross-sell. Healthcare organizations can instantly monitor clinical data from electronic health records and other medical systems to improve treatment plans and patient outcomes.

 

By automatically analyzing very large amounts of data streaming in from various sources (e.g., utility monitors, global weather readings, and transportation systems data, among others), organizations can deliver real-time, business-critical services to optimize operations and unleash new business opportunities. With the latest Xeon processors, businesses can expect improved performance from their applications, and realize greater ROI from their software investments.

 

 

Real-Time Analytics: Intelligence Begins with Intel

 

Today, organizations like IBM, SAS, and Software AG are placing increased emphasis on business intelligence (BI) strategies. The ability to extract insights from data is something customers expect from their software in order to maintain a competitive edge. Below are just a few examples of how these firms are able to use the new Intel Xeon processor E7 v3 family to meet and exceed customer expectations.

 

Intel and IBM have collaborated closely on a hardware/software big data analytics combination that can accommodate any size workload. IBM DB2* with BLU Acceleration is a next-generation database technology and a game-changer for in-memory computing. When run on servers with Intel’s latest processors, IBM DB2 with BLU Acceleration optimizes CPU cache and system memory to deliver breakthrough performance for speed-of-thought analytics. Notably, the same workload can be processed 246 times faster [3] on the latest processor with IBM DB2 10.5 with BLU Acceleration than on the Intel Xeon processor E7-4870 with the previous IBM DB2 10.1 release.

 

By running IBM DB2 with BLU Acceleration on servers powered by the new generation of Intel processors, users can quickly and easily transform a torrent of data into valuable, contextualized business insights. Complex queries that once took hours or days to yield insights can now be analyzed as fast as the data is gathered.  See how to capture and capitalize on business intelligence with Intel and IBM.

 

From a performance perspective, Apama* streaming analytics has proven to be equally impressive. Apama (a division of Software AG) is a complex event processing engine that looks at streams of incoming data, then filters, analyzes, and takes automated action on that fast-moving big data. Benchmarking tests have shown huge performance gains with the newest Intel Xeon processors: results show 59 percent higher throughput with Apama running on a server powered by the Intel Xeon processor E7 v3 family compared to the previous-generation processor [4].

 

Drawing on this level of processing power, the Apama platform can tap the value hidden in streaming data to uncover critical events and trends in real time. Users can take real-time action on customer behaviors, instantly identify unusual behavior or possible fraud, and rapidly detect faulty market trades, among other real-world applications. For more information, watch the video on Driving Big Data Insight from Software AG. This infographic shows Apama performance gains achieved when running its software on the newest Intel Xeon processors.

 

SAS applications provide a unified and scalable platform for predictive modeling, data mining, text analytics, forecasting, and other advanced analytics and business intelligence solutions. Running SAS applications on the latest Xeon processors provides an advanced platform that can help increase performance and headroom, while dramatically reducing infrastructure cost and complexity. It also helps make analytics more approachable for end customers. This video illustrates how the combination of SAS and Intel® technologies delivers the performance and scale to enable self-service tools for analytics, with optimized support for new, transformative applications. Further, by combining SAS* Analytics 9.4 with the Intel Xeon processor E7 v3 family and the Intel® Solid-State Drive Data Center Family for PCIe*, customers can experience throughput gains of up to 72 percent [5].

 

The new Intel Xeon processor E7 v3 family’s ability to drive new levels of application performance also extends to healthcare. To accelerate Epic* EMR’s data-driven healthcare workloads and deliver reliable, affordable performance and scalability for other healthcare applications, the company needed a very robust, high-throughput foundation for data-intensive computing. Epic’s engineers benchmark-tested a new generation of key technologies, including a high-performance data platform from InterSystems*, new virtualization tools from VMware*, and the Intel Xeon processor E7 v3 family. The result was an increase in database scalability of 60 percent [6, 7], a level of performance that can keep pace with the rising data access demands in the healthcare enterprise while creating a more reliable, cost-effective, and agile data center. With this kind of performance improvement, healthcare organizations can deliver increasingly sophisticated analytics and turn clinical data into actionable insight to improve treatment plans and, ultimately, patient outcomes.

 

These are only a handful of the optimized software solutions that, when powered by the latest generation of Intel processors, are enabling tremendous business benefits and competitive advantage. With its improved performance, memory capacity, and scalability, the Intel Xeon processor E7 v3 family helps deliver more sockets, heightened security, increased data center efficiency, and the critical reliability to handle any workload across a range of industries, so that your data center can bring your business’s best ideas to life. To learn more, visit our software solutions page and take a look at our Enabled Applications Marketing Guide.

 

 

 

 

 

 

[1] Intel Xeon processor E7 v3 family provides the largest memory footprint of 1.5 TB per socket compared to up to 1 TB per socket delivered by alternative architectures, based on published specs.

[2] Up to 6x business processing application performance improvement claim based on SAP* OLTP internal in-memory workload measuring transactions per minute (tpm) on SuSE* Linux* Enterprise Server 11 SP3. Configurations: 1) Baseline 1.0: 4S Intel® Xeon® processor E7-4890 v2, 512 GB memory, SAP HANA* 1 SPS08. 2) Up to 6x more tpm: 4S Intel® Xeon® processor E7-8890 v3, 512 GB memory, SAP HANA* 1 SPS09, which includes 1.8x improvement from general software tuning, 1.5x generational scaling, and an additional boost of 2.2x for enabling Intel TSX.

[3] Software and workloads used in the performance test may have been optimized for performance only on Intel® microprocessors. Previous-generation baseline configuration: SuSE Linux Enterprise Server 11 SP3 x86-64, IBM DB2* 10.1 + 4-socket Intel® Xeon® processor E7-4870 using IBM Gen3 XIV FC SAN solution, completing the queries in about 3.58 hours. New-generation configuration: Red Hat* Enterprise Linux* 6.5, IBM DB2 10.5 with BLU Acceleration + 4-socket Intel® Xeon® processor E7-8890 v3 using tables in-memory (1 TB total), completing the same queries in about 52.3 seconds. For more complete information visit http://www.intel.com/performance/datacenter

[4] One server was powered by a four-socket Intel® Xeon® processor E7-8890 v3 and another server by a four-socket Intel Xeon processor E7-4890 v2. Each server was configured with 512 GB DDR4 DRAM, Red Hat Enterprise Linux 6.5*, and Apama 5.2*. Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

[5] Up to 1.72x generational claim based on SAS* Mixed Analytics workload measuring sessions per hour using SAS* Business Analytics 9.4 M2 on Red Hat* Enterprise Linux* 7. Configurations: 1) Baseline: 4S Intel® Xeon® processor E7-4890 v2, 512 GB DDR3-1066 memory, 16x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.11 sessions/hour. 2) Up to 1.72x more sessions per hour: 4S Intel® Xeon® processor E7-8890 v3, 512 GB DDR4-1600 memory, 4x 2.0 TB Intel® Solid-State Drive Data Center P3700 + 8x 800 GB Intel® Solid-State Drive Data Center S3700, scoring 0.19 sessions/hour.

[6] Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more information go to www.intel.com/performance

[7] Intel does not control or audit the design or implementation of third party benchmark data or Web sites referenced in this document. Intel encourages all of its customers to visit the referenced Web sites or others where similar performance benchmark data are reported and confirm whether the referenced benchmark data are accurate and reflect performance of systems available for purchase.

By David Fair, Unified Networking Marketing Manager, Intel Networking Division

 

Certainly one of the miracles of technology is that Ethernet continues to grow rapidly 40 years after its initial definition. That was May 23, 1973, when Bob Metcalfe wrote his memo to his Xerox PARC managers proposing “Ethernet.” To put things in perspective, 1973 was the year a signed ceasefire ended direct U.S. involvement in the Vietnam War. The U.S. Supreme Court issued its Roe v. Wade decision. Pink Floyd released “Dark Side of the Moon.”

 

In New York City, Motorola made the first handheld mobile phone call (and, no, it would not fit in your pocket).  1973 was four years before the first Apple II computer became available, and eight years before the launch of the first IBM PC. In 1973, all consumer music was analog: vinyl LPs and tape.  It would be nine more years before consumer digital audio arrived in the form of the compact disc—which, ironically, has long since been eclipsed by Ethernet packets as the primary way digital audio gets to consumers.

 


 

The key reason for Ethernet’s longevity, IMHO, is its uncanny, Darwinian ability to evolve to adapt to ever-changing technology landscapes.  A tome could be written about the many technological challenges to Ethernet and its evolutionary response, but I want to focus here on just one of these: the emergence of multi-core processors in the first decade of this century.

 

The problem Bob Metcalfe was trying to solve was how to get packets of data from computer to computer, and, of course, to Xerox laser printers. But multi-core challenges that paradigm, because Ethernet’s job, as Bob defined it, is done when data gets to a computer’s processor, not when it gets to the specific core in that processor waiting to consume the data.

 

Intel developed a technology to help address that problem, and we call it Intel® Ethernet Flow Director.  We implemented it in all of Intel’s most current 10GbE and 40GbE controllers. What Intel® Ethernet Flow Director does, in a nutshell, is establish an affinity between a flow of Ethernet traffic and the specific core in a processor waiting to consume that traffic.

 

I encourage you to watch a two and a half minute video explanation of how Intel® Ethernet Flow Director works.  If that, as I hope, whets your appetite to learn more about this Intel technology, we also have a white paper that delves into deeper details with an illustration of what Intel® Ethernet Flow Director does for a “network stress test” application like Memcached.  I hope you find both the video and white paper enjoyable and illuminating.
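If you want to experiment on your own hardware, the hedged sketch below shows one common way Flow Director's perfect filters are programmed on Linux, through ethtool's ntuple interface, to steer Memcached traffic to a specific receive queue (and thus to the core servicing that queue). This is an illustration rather than Intel's reference configuration; the interface name, port, and queue number are assumptions.

```python
# Hedged sketch: program a Flow Director "perfect filter" through ethtool's
# ntuple interface so inbound Memcached traffic lands on a chosen RX queue
# (and therefore the core servicing that queue). Requires root and an Intel
# 10GbE/40GbE adapter whose driver exposes ntuple filters; names are assumed.
import subprocess

IFACE = "eth2"          # assumed interface name
MEMCACHED_PORT = 11211  # standard Memcached TCP port
RX_QUEUE = 4            # assumed queue affinitized to the consuming core

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable ntuple (Flow Director) filtering on the interface.
run(["ethtool", "-K", IFACE, "ntuple", "on"])

# Steer inbound Memcached traffic to the chosen receive queue.
run(["ethtool", "-N", IFACE, "flow-type", "tcp4",
     "dst-port", str(MEMCACHED_PORT), "action", str(RX_QUEUE)])

# Show the classification rules now in place.
run(["ethtool", "-n", IFACE])
```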

 

Intel, the Intel logo, and Intel Ethernet Flow Director are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

In today’s world, engineering teams can be located just about anywhere in the world, and the engineers themselves can work from just about any location, including home offices. This geographic dispersion creates a dilemma for corporations that need to arm engineers with tools that make them more productive while simultaneously protecting valuable intellectual property—and doing it all in an affordable manner.

 

Those goals are at the heart of hosted workstations that leverage new combinations of technologies from Intel and Citrix*. These solutions, unveiled this week at the Citrix Synergy 2015 show in Orlando, allow engineers to work with demanding 3D graphics applications from virtually anywhere in the world, with all data and applications hosted in a secure data center. Remote users can work from the same data set, with no need for high-volume data transfers, while enjoying the benefits of fast, clear graphics running on a dense, cost-effective infrastructure.

 

These solutions are in the spotlight at Citrix Synergy. Event participants had the opportunity to see demos of remote workstations capitalizing on the capabilities of the Intel® Xeon® processor E3-1200 product family and Citrix XenApp*, XenServer*, XenDesktop*, and HDX 3D Pro* software.

 

Show participants also had a chance to see demos of graphics passthrough with Intel® GVT-d in Citrix XenServer* 6.5, running Autodesk* Inventor*, SOLIDWORKS*, and Autodesk Revit* software. Other highlights included a technology preview of Intel GVT-g with Citrix HDX 3D Pro running Autodesk AutoCAD*, Adobe* Photoshop*, and Google* Earth.

 

Intel GVT-d and Intel GVT-g are two of the variants of Intel® Graphics Virtualization Technology. Intel GVT-d allows direct assignment of an entire GPU’s capabilities to a single user—it passes all of the native driver capabilities through the hypervisor. Intel GVT-g allows multiple concurrent users to share the resources of a single GPU.

 

The new remote workstation solutions showcased at Citrix Synergy build on a long, collaborative relationship between engineers at Intel and Citrix. Our teams have worked together for many years to help our mutual customers deliver a seamless mobile and remote workspace experience to a distributed workforce. Users and enterprises both benefit from the secure and cost-effective delivery of desktops, apps, and data from the data center to the latest Intel Architecture-based endpoints.

 

For a closer look at the Intel Xeon processor E3-1200 product family and hosted workstation infrastructure, visit intel.com/workstation.

 

 

Intel, the Intel logo, Intel inside, and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Citrix, the Citrix logo, XenDesktop, XenApp, XenServer, and HDX are trademarks of Citrix Systems, Inc. and/or one of its subsidiaries, and may be registered in the U.S. and other countries. * Other names and brands may be claimed as the property of others.

The “Intel Ethernet” brand symbolizes the decades of hard work we’ve put into improving performance, features, and ease of use of our Ethernet products.

 

What Intel Ethernet doesn’t stand for, however, is any use of proprietary technology. In fact, Intel has been a driving force for Ethernet standards since we co-authored the original specification more than 40 years ago.

 

At Interop Las Vegas last week, we again demonstrated our commitment to open standards by taking part in the NBASE-T Alliance public multi-vendor interoperability demonstration. The demo leveraged our next generation single-chip 10GBASE-T controller supporting the NBASE-T intermediate speeds of 2.5Gbps and 5Gbps (see a video of that demonstration here).


 

Intel joined the NBASE-T Alliance in December 2014 at the highest level of membership, which allows us to fully participate in the technology development process including sitting on the board and voting for changes in the specification.

 

The alliance and its 33 members form an industry-driven consortium that has developed a working 2.5GbE/5GbE specification, which is the basis of multiple recent product announcements. Based on this experience, our engineers are now working diligently to develop the IEEE standard for 2.5G/5GBASE-T.

 

By first developing the technology in an industry alliance, vendors can have a working specification to develop products, and customers can be assured of interoperability.

 

The reason Ethernet has been so widely adopted over the past 40 years is its ability to adapt to new usage models. 10GBASE-T was originally defined to be backwards compatible with 1GbE and 100 Mbps, and it required category 6a or category 7 cabling to reach 10GbE. Adoption of 10GBASE-T is growing very rapidly in the datacenter, and now we are seeing the need for more bandwidth in enterprise and campus networks to support next-generation 802.11ac access points, local servers, workstations, and high-end PCs.

 

Copper twisted pair has long been the cabling preference for enterprise data centers and campus networks, and most enterprises have miles and miles of this cable already installed throughout their buildings. In the past 10 years alone, about 70 billion meters of category 5e and category 6 cabling have been sold worldwide.


Supporting higher bandwidth connections over this installed cabling is a huge win for our customers. Industry alliances can be a useful tool to help Ethernet adapt, and the NBASE-T alliance enables the industry to address the need for higher bandwidth connections over installed cables.


Intel is the technology and market leader in 10GBASE-T network connectivity. I spoke about Intel's investment in the technology in an earlier blog about Ethernet’s ubiquity.

 

We are seeing rapid adoption of our 10GBASE-T products in the data center, and now through the NBASE-T Alliance we have a clear path to address enterprise customers with the need for more than 1GbE. Customers are thrilled to hear that they can get 2.5GbE/5GbE over their installed Cat 5e copper cabling—making higher speed networking between bandwidth-constrained endpoints achievable.

 

Ethernet is a rare technology in that it is both mature (more than 40 years old since its original definition in 1973) and constantly evolving to meet new network demands. Thus, it has created an expectation by users that the products will work the first time, even if they are based on brand new specifications. Our focus with Intel Ethernet products is to ensure that we implement solutions that are based on open standards and that these products seamlessly interoperate with products from the rest of the industry.

 

If you missed the NBASE-T demonstration at Interop, come see how it works at Cisco Live in June in San Diego.


When I started my career in IT, infrastructure provisioning involved a lot of manual labor. I installed the hardware, installed the operating systems, connected the terminals, and loaded the software and data to create a single stack supporting a specific application. It was common to have one person carry out all of these tasks on a single system, and an enterprise had very few systems.

 

Now let’s fast forward to the present. In today’s world, thanks to the dynamics of Moore’s Law and the falling cost of compute, storage, and networking, enterprises now have hundreds of applications that support the business. Infrastructure and applications are typically provisioned by teams of domain specialists—networking admins, system admins, storage admins, and software folks—each of whom puts together a few pieces of a complex technology puzzle to enable the business.

 

While it works, this approach to infrastructure provisioning has some obvious drawbacks. For starters, it's labor-intensive, with too many hands needed to support it; it's costly in both people and software; and it can be rather slow from start to finish. While the first two are important for TCO, it is the third that I have heard the most about: just too slow for the pace of business in the era of fast-moving cloud services.

 

How do you solve this problem? That is what Software Defined Infrastructure is all about. With SDI, compute, network, and storage resources are deployed as services, potentially reducing deployment times from weeks to minutes. Once services are up and running, hardware is managed as a set of resources, and software has the intelligence to manage the hardware to the advantage of the supported workloads. The SDI environment automatically corrects issues and optimizes performance to ensure you can meet the service levels and security controls that your business demands.
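As a small illustration of what "resources deployed as services" means in practice, the hedged sketch below boots a virtual machine programmatically through OpenStack's compute API using python-novaclient. The endpoint, credentials, image ID, and flavor are placeholders, and a real SDI stack would layer orchestration, policy, and telemetry on top of calls like this.

```python
# Hedged sketch: compute deployed as a service through the OpenStack API,
# rather than racking and cabling hardware by hand. All names, IDs, and
# credentials below are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from novaclient import client as nova_client

auth = v3.Password(auth_url="https://keystone.example:5000/v3",
                   username="sdi-operator", password="secret",
                   project_name="demo", user_domain_id="default",
                   project_domain_id="default")
nova = nova_client.Client("2.1", session=session.Session(auth=auth))

# Pick a flavor by name and boot a server from a known image: minutes, not weeks.
flavor = nova.flavors.find(name="m1.medium")             # placeholder flavor
server = nova.servers.create(name="app-node-01",
                             image="IMAGE-UUID-PLACEHOLDER",
                             flavor=flavor.id)
print("Requested server:", server.id)
```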

 

So how do you get to SDI? My current response is that SDI is a destination that sits at the summit for most organizations. At the simplest level, there are two routes to this IT nirvana—a “buy it” high road and a “build-it-yourself” low road. I call the former a high road because it’s the easiest way forward—it’s always easier to go downhill than uphill. The low road has lots of curves and uphill stretches on it to bring you to the higher plateau of SDI.  Each of these approaches has its advantages and disadvantages.

 

The high road, or the buy-the-packaged-solution route, is defined by system architectures that bring together all the components for an SDI into a single deployable unit. Service providers who take you on the high road leverage products like Microsoft Cloud Platform System (CPS) and VMware EVO: RAIL to create standalone platform units with virtualized compute, storage, and networking resources.

 

On the plus side, the high road offers faster time to market for your SDI environment, a tested and certified solution, and the 24x7 support most enterprises are looking for in a path.  These are also the things you can expect in a solution delivered by a single vendor. On the downside, the high road locks you into certain choices in the hardware and software components and forces you to rely on the vendor for system upgrades and technology enhancements, which might happen faster with other solutions, but take place in their timelines. This approach, of course, can be both Opex and Capex heavy, depending on the solution.

 

The low road, or the build-it-yourself route, gives you the flexibility to design your environment and select your solution components from the portfolios of various hardware and software vendors and from open source. You gain the agility and technology choices that come with an environment that is not defined by a single vendor. You can pick your own components and add new technologies on your timelines—not your vendor's timelines—and probably enjoy lower Capex along the way, although at the expense of more internal technical resources.

 

Those advantages, of course, come with a price. The low road can be a slower route to SDI, and it can be a drain on your staff resources as you engage in all the heavy lifting that comes with a self-engineered solution set. Also, given the pace of innovation you see today in this area, it is quite possible that you never really achieve the vision of SDI because of all the new choices. You have to design your solution; procure, install, and configure the hardware and software; and add the platform-as-a-service (PaaS) layer. All of that just gets you to a place where you can start using the environment. You still haven't optimized the system for your targeted workloads.

 

In practice, most enterprises will take what amounts to a middle road. This hybrid route takes the high road to SDI with various detours onto the low road to meet specific business requirements. For example, an organization might adopt key parts of a packaged solution but then add its own storage or networking components or decide to use containers to implement code faster.

 

Similarly, most organizations will get to SDI in a stepwise manner. That's to say they will put elements of SDI in place over time—such as storage and network virtualization and IT automation—to gain some of the agility that comes with an SDI strategy. I will look at these concepts in an upcoming post that explores an SDI maturity model.

Management practices from the HPC world can get even bigger results in smaller-scale operations.

 

In 2014, industry watchers saw a major rise in hyperscale computing. Hadoop and other cluster architectures that originated in academic and research circles have become almost commonplace in industry. Big data and business analytics are driving huge demand for computing power, and 2015 should be another big year in the datacenter world.

 

What would you do if you had the same operating budget as one of the hyperscale data centers? It might sound like winning the lottery, or entering a world without limitations, but any datacenter manager knows that infrastructure scaling requires tackling even bigger technology challenges -- which is why it makes sense to watch and learn from the pioneers who are pushing the limits.

 

Lesson 1: Don't lose sight of the "little" data

 

When the datacenter scales up, most IT teams look for a management console that can provide an intuitive, holistic view that simplifies common administrative tasks. When managing the largest-scale datacenters, the IT teams have also learned to look for a console that taps into the fine-grained data made available by today's datacenter platforms. This includes real-time power usage and temperature for every server, rack, row, or room full of computing equipment.

 

Management consoles that integrate energy management middleware can aggregate these datacenter data points into at-a-glance thermal and power maps, and log all of the data for trend analysis and capacity planning. The data can be leveraged for a variety of cost-cutting practices. For example, datacenter teams can more efficiently provision racks based on actual power consumption. Without an understanding of real-time patterns, datacenter teams must rely on power supply ratings and static lab measurements.
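As a concrete, hedged illustration of the kind of fine-grained data involved, the sketch below polls per-server power readings over IPMI DCMI with ipmitool and appends them to a log for trend analysis. It assumes BMCs that support DCMI power readings, uses placeholder hostnames and credentials, and is not a substitute for an energy-management console.

```python
# Hedged sketch: poll per-server power readings over IPMI DCMI (ipmitool) and
# append them to a CSV for trend analysis and capacity planning. Assumes each
# BMC supports "dcmi power reading"; hosts and credentials are placeholders.
import csv
import re
import subprocess
import time

SERVERS = ["bmc-rack1-node01.example", "bmc-rack1-node02.example"]
USER, PASSWORD = "admin", "secret"

def read_power_watts(host):
    out = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", host, "-U", USER, "-P", PASSWORD,
         "dcmi", "power", "reading"],
        capture_output=True, text=True, check=True).stdout
    # Typical output includes a line like "Instantaneous power reading:  243 Watts"
    match = re.search(r"Instantaneous power reading:\s+(\d+)\s+Watts", out)
    return int(match.group(1)) if match else None

with open("rack_power_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for host in SERVERS:
        watts = read_power_watts(host)
        writer.writerow([time.strftime("%Y-%m-%dT%H:%M:%S"), host, watts])
        print(host, watts, "W")
```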

 

A sample use case illustrates the significant differences between real-time monitoring and static calculations. When provisioning a rack with 4,000 watts capacity, traditional calculations resulted in one datacenter team installing approximately 10 servers per rack. (In this example, the server power supplies are rated at 650 watts, and lab testing has shown that 400 watts is a safe bet for expected configurations.)

 

The same team carried out real-time monitoring of power consumption, and found that servers rarely exceeded 250 watts. This knowledge led them to increase rack provisioning to 16 servers -- a 60% increase in capacity. To prevent damage in the event that servers in any particular rack create demand that would push the total power above the rack threshold, the datacenter team simultaneously introduced protective power capping for each rack, which is explained in more detail in Lesson 5 below.
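The arithmetic behind that use case is easy to sanity-check; here is a small worked example using the figures quoted above.

```python
# Worked example using the figures from the use case above.
rack_capacity_w = 4000       # rack power budget
static_estimate_w = 400      # lab-tested "safe bet" per server
measured_w = 250             # observed real-time consumption per server

servers_static = rack_capacity_w // static_estimate_w   # 10 servers per rack
servers_measured = rack_capacity_w // measured_w        # 16 servers per rack

increase = (servers_measured - servers_static) / servers_static
print(servers_static, servers_measured, f"{increase:.0%}")  # 10 16 60%
```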

 

Lesson 2: Get rid of your ghosts

 

Once a datacenter team is equipped to monitor real-time power consumption, it becomes a simple exercise to evaluate workload distribution across the datacenter. Servers and racks that are routinely under-utilized can be easily spotted. Over time, datacenter managers can determine which servers can be consolidated or eliminated. Ghost servers, the systems that are powered up but idle, can be put into power-conserving sleep modes. These and other energy-conserving steps can be taken to avoid energy waste and therefore trim the utility budget. Real-world cases have shown that the average datacenter, regardless of size, can trim energy costs by 15 to 20 percent by tackling ghost servers.

 

Lesson 3: Choose software over hardware

 

Hyperscale operations often span multiple geographically distributed datacenters, making remote management vital for day-to-day continuity of services. The current global economy has put many businesses and organizations into the same situation, with IT trying to efficiently manage multiple sites without duplicating staff or wasting time traveling between locations.

 

Remote keyboard, video, and mouse (KVM) technology has evolved over the past decades, helping IT teams keep up, but hardware KVM solutions have as a result become increasingly complex. To avoid managing the management overlay itself, the operators of many of the world's largest and most complex infrastructures are adopting software KVM solutions and more recently virtualized KVM solutions.

 

Even for the average datacenter, the cost savings add up quickly. IT teams should add up the costs of any existing KVM switches, dongles, and related licensing (switch software, in-band and out-of-band licenses, etc.). A typical hardware KVM switching solution can cost more than $500K for the switch, $125K for switch software, and another $500K for in-band and out-of-band node licenses. Even the dongles can add up to more than $250K. Alternatively, software KVM solutions can avoid more than $1M in hardware KVM costs.

 

Lesson 4: Turn up the heat

 

With many years of experience monitoring and managing energy and thermal patterns, some of the largest datacenters in the world have pioneered high ambient temperature operation. Published numbers show that raising the ambient temperature in the datacenter by 1°C results in a 2% decrease in the site power bill.

 


 

When raising the ambient temperature of a datacenter, it is important to regularly check for hot spots and monitor datacenter devices in real time for temperature-related issues. With effective monitoring, the operating temperature can be adjusted gradually and the savings evaluated against the budget and capacity plans.

 

Lesson 5: Don't fry your racks

 

Since IT is expected -- mandated -- to identify and avoid failures that would otherwise disrupt critical business operations, any proactive management techniques that have been proven in hyperscale datacenters should be evaluated for potential application in smaller datacenters. High operating temperatures can have a devastating effect on hardware, and it is important to keep a close eye on how this can impact equipment uptime and life cycles.

 

Many cluster architectures, such as Hadoop, build in redundancy and dynamic load balancing to seamlessly recover from failures. The same foundational monitoring, alerts, and automated controls that help minimize hyperscale energy requirements can help smaller sites identify and eliminate hot spots that have a long-term impact on equipment health. The holistic approach to power and temperature also helps maintain a more consistent environment in the datacenter, which ultimately avoids equipment-damaging temperatures and power spikes.

 

Besides environment control, IT teams can also take advantage of leading-edge energy management solutions that offer power-capping capabilities. By setting power thresholds, racks can be liberally provisioned without the risk of power spikes. In some regions, power capping is crucial for protecting datacenters from noisy, unreliable power sources.

 

Following the leaders

 

Thankfully, most datacenters operate on a scale with much lower risks compared to the largest datacenters and hyperscale computing environments. However, datacenters of any size should make it a priority to reduce energy costs and avoid service disruptions. By adopting proven approaches and taking advantage of all the real-time data throughout the datacenter, IT and facilities can follow the lead of hyperscale sites and get big results with relatively small initial investments.

By David Fair, Unified Networking Marketing Manager, Intel Networking Division

 

iWARP was on display recently in multiple contexts.  If you’re not familiar with iWARP, it is an enhancement to Ethernet based on an Internet Engineering Task Force (IETF) standard that delivers Remote Direct Memory Access (RDMA).

 

In a nutshell, RDMA allows an application to read or write a block of data from or to the memory space of another application, which can be in another virtual machine or even a server on the other side of the planet. It delivers high bandwidth and low latency by bypassing the kernel of system software, avoiding the interrupts and extra data copies that accompany kernel processing.

 

A secondary benefit of kernel bypass is reduced CPU utilization, which is particularly important in cloud deployments. More information about iWARP has recently been posted to Intel’s website if you’d like to dig deeper.

 

Intel® is planning to incorporate iWARP technology in future server chipsets and systems-on-chip (SoCs). To emphasize our commitment and show how far along we are, Intel showed a demo using the RTL from that future chipset in FPGAs, running Windows* Server 2012 SMB Direct and doing a boot and virtual machine migration over iWARP. Naturally it was slow (about 1 Gbps) since it was FPGA-based, but Intel demonstrated that our iWARP design is already very far along and robust. (That's Julie Cummings, the engineer who built the demo, in the photo with me.)

 


 

Jim Pinkerton, Windows Server Architect, from Microsoft joined me in a poster chat on iWARP and Microsoft’s SMB Direct technology, which scans the network for RDMA-capable resources and uses RDMA pathways to automatically accelerate SMB-aware applications.  With SMB Direct, no new software and no system configuration changes are required for system administrators to take advantage of iWARP.

 


 

Jim Pinkerton also co-taught the “Virtualizing the Network to Enable a Software Defined Infrastructure” session with Brian Johnson of Intel’s Networking Division.  Jim presented specific iWARP performance results in that session that Microsoft has measured with SMB Direct.

 

Lastly, the Non-Volatile Memory Express* (NVMe*) community demonstrated “remote NVMe,” made possible by iWARP. NVMe is a specification for efficient communication with non-volatile memory like flash over PCI Express. NVMe is many times faster than SATA or SAS, but like those technologies, it targets local communication with storage devices. iWARP makes it possible to securely and efficiently access NVM across an Ethernet network. The demo showed remote access occurring at the same throughput (~550K IOPS) with a latency penalty of less than 10 µs.**

 


 

Intel is supporting iWARP because it is layered on top of the TCP/IP industry standards.  iWARP goes anywhere the Internet goes and does it with all the benefits of TCP/IP, including reliable delivery and congestion management. iWARP works with all existing switches and routers and requires no special datacenter configurations to work. Intel believes the future is bright for iWARP.

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

*Other names and brands may be claimed as the property of others.

**Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com.

May 5th, 2015 was an exciting day for Big Data analytics. Intel hosted an event focused on data analytics, announcing the next generation of the Intel® Xeon® Processor E7 family and sharing an update on Cloudera one year after investing in the company.

 


At the event, I had the pleasure of hosting a panel discussion among three very interesting data science experts:

 

  • David Edwards, VP and Engineering Fellow at Cerner, a healthcare IT and electronic medical records company, has overseen the development of a Cloudera-based Big Data analytics system for patient medical data that has enabled the creation of a number of highly effective predictive models that have already saved the lives of hundreds of patients.

 

  • Don Fraynd, CEO of TeacherMatch, leads an analytics company that has developed models correlating a broad variety of schoolteacher attributes with actual student performance measures to increase the effectiveness of the teacher hiring process. These models are used to identify the most promising candidates for each teaching position, given the individual circumstances of the teaching opportunity.

 

  • Andreas Weigend, Director of the Social Data Lab, professor at Stanford and UC Berkeley, and past Chief Scientist at Amazon, has been a leader in data science since before data science was a “thing.” His insights into measuring customer behavior and predicting how customers make decisions have changed the way we experience the Internet.

 

My guests have all distinguished themselves by creating analytics solutions that provide actionable insights into individual human behavior in the areas of education, healthcare and retail.  Over the course of the discussion a major theme that emerged was that data analytics must empower individuals to take action in real time.

 

David described how Cerner’s algorithms are analyzing a variety of patient monitoring data in the hospital to identify patients who are going into septic shock, a life threatening toxic reaction to infection. “If you don’t close that loop and provide that immediate feedback in real time, it’s very difficult to change the outcome.”

 

Don explained how TeacherMatch is “using hot data, dashboards, and performance management practices in our schools to effect decisions in real time…What are the precursors to a student failing a course? What are the precursors to a student having a major trauma event?”

 

Andreas advanced the concept of a dashboard one step further and postulated that a solution analogous to a navigation system is what’s needed, because it can improve the quality of the data over time. “Instead of building complicated models, build incentives so that people share with you…I call this a data refinery…that takes data of the people, data by the people and makes it data to be useful for the people.”

 

Clearly, impactful analytics are as much about timeliness and responsivity as they are about data volume and variety, and they drive actions, not just insights.

 

In his final comments, David articulated one of my own goals for data science: “To make Big Data boring and uninteresting.” In other words, our goal is to make it commonplace for companies to utilize all of their data, both structured and unstructured, to provide better customer experiences, superior student performance or improved patient outcomes. As a data scientist, I can think of no better outcome for the work I do every day.

 

Thanks to our panelists and the audience for making this an engaging and informative event. Check out the full panel to get all of the great insights.

By Aruna Kumar, HPC Solutions Architect Life Science, Intel

 

 

15,000 to 20,000 variants per exome (33 million bases) versus 3 million single nucleotide polymorphisms per genome: HPC is a clearly welcome solution to the computational and storage challenges of genomics at the crossroads of clinical deployment.

 

At the High Performance Computing User Forum held in Norfolk in mid-April, it was clear that the face of HPC is changing. The main theme was bioinformatics, a relative newcomer to the HPC user base. Bioinformatics, including high-throughput sequencing, has introduced computing to entirely new fields that have not used it in the past. As in the social sciences, these fields share a thirst for large amounts of data; the work is still largely a search for incidental findings, yet it demands architectural and algorithmic optimizations and usage-based abstractions all at once. This is a unique challenge for HPC, and one that is stretching HPC systems solutions.

 

What does this mean for the care of our health?

 

Health outcomes are increasingly tied to the real-time use of vast amounts of both structured and unstructured data. Sequencing of the genome or a targeted exome is distinguished by its breadth: where clinical diagnostics such as blood work for renal failure, diabetes, or anemia are characterized by depth of testing, genomics is characterized by breadth of testing.

 

As aptly stated by Dr. Leslie G. Biesecker and Dr. Douglas R. Green in a 2014 New England Journal of Medicine paper, “The interrogation of variation in about 20,000 genes simultaneously can be a powerful and effective diagnostics method.”

 

However, it is amply clear from the work presented by Dr. Barbara Brandom, Director of the Global Rare Diseases Patient Registry Data Repository (GRDR) at NIH, that the common data elements that need to be curated to improve therapeutic development and quality of life for people with rare diseases are a relatively complex blend of structured and unstructured data.

 

The GRDR Common Data Elements table includes contact information, socio-demographic information, diagnosis, family history, birth and reproductive history, anthropometric information, patient-reported outcomes, medications/devices/health services, clinical research and biospecimens, and communication preferences.

 

Now to some sizing of the data and compute needs to appropriately scale the problem from a clinical perspective. Current sequencing samples at 30x coverage on Illumina HiSeq X systems. That is roughly 46 thousand files generated in a three-day sequencing run, adding up to 1.3 terabytes (TB) of data. This data is then converted into the variant calls referred to by Dr. Green earlier in this article, and the analysis up to the point of generating variant call files accumulates an additional 0.5 TB of data per human genome. For clinicians and physicians to identify stratified subpopulation segments with specific variants, it is often necessary to sequence complex targeted regions at much higher sampling rates and with longer read lengths than current 30x sampling provides. This will undoubtedly exacerbate an already significant challenge.
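
 

To put those figures in perspective, the back-of-the-envelope Python sketch below estimates the storage footprint of a sequencing program. The 1.3 TB and 0.5 TB per-genome constants come from the numbers above; the cohort size and time horizon are hypothetical, for illustration only.

    # Back-of-the-envelope storage sizing for whole-genome sequencing.
    # Per-genome constants come from the figures cited above; the cohort
    # size and time horizon below are hypothetical.

    RAW_TB_PER_GENOME = 1.3       # FASTQ output of a ~3-day 30x HiSeq X run
    ANALYSIS_TB_PER_GENOME = 0.5  # additional BAM/VCF data per genome

    def storage_needed_tb(genomes_per_year: int, years: int = 1) -> float:
        """Terabytes required to retain raw reads plus analysis output."""
        per_genome = RAW_TB_PER_GENOME + ANALYSIS_TB_PER_GENOME
        return genomes_per_year * years * per_genome

    # A hypothetical clinical lab sequencing 1,000 genomes per year for three
    # years would need to plan for roughly 5,400 TB (5.4 PB) of storage.
    print(f"{storage_needed_tb(1_000, years=3):,.0f} TB")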

 

So how do Intel’s solutions fit in?

 

Intel Genomics Solutions, together with the Intel Cluster Ready program, provide much-needed sizing guidance to enable clinicians and their IT data centers to deliver personalized medicine efficiently and scale with growing needs.

 

Broadly, the compute need is to handle the volume of genomics data in near real time to generate alignment mapping files. These files contain the entire sequence along with quality and position information, and they result from a largely single-threaded process of converting FASTQ files into alignments. The alignment mapping files are generated as text and then converted to a more compressed binary format known as BAM (binary alignment map). The differences between a reference genome and the aligned sample (BAM) file are what is captured in a variant call file (VCF). Variants come in many forms, though the most common is the presence or absence of a single base, or nucleotide, at a corresponding position; this is known as a single nucleotide polymorphism (SNP). The research and diagnostics process involves generating and visualizing BAM files, SNPs, and entire VCF files.
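
 

For readers less familiar with this flow, the sketch below walks through the FASTQ to BAM to VCF steps using widely used open source tools (bwa, samtools, bcftools) driven from Python. It is a minimal illustration under the assumption that those tools and an indexed reference genome are available; it is not part of Intel Genomics Solutions, and exact options vary by tool version.

    # Minimal secondary-analysis sketch: FASTQ -> SAM -> sorted BAM -> VCF.
    # Assumes bwa, samtools, and bcftools are installed and that the
    # reference FASTA (ref.fa) has already been indexed for both tools.
    import subprocess

    def run(cmd: str) -> None:
        """Run one shell pipeline step, raising an error if it fails."""
        subprocess.run(cmd, shell=True, check=True)

    def fastq_to_vcf(ref: str, reads_1: str, reads_2: str, sample: str) -> str:
        # 1. Align reads to the reference; output is a text SAM file.
        run(f"bwa mem {ref} {reads_1} {reads_2} > {sample}.sam")
        # 2. Sort and convert to the compressed binary BAM format, then index.
        run(f"samtools sort -o {sample}.bam {sample}.sam")
        run(f"samtools index {sample}.bam")
        # 3. Call variants against the reference; differences land in the VCF.
        run(f"bcftools mpileup -f {ref} {sample}.bam | "
            f"bcftools call -mv -Oz -o {sample}.vcf.gz")
        return f"{sample}.vcf.gz"

    # Hypothetical usage with paired-end reads for one sample:
    # fastq_to_vcf("ref.fa", "sample_R1.fastq", "sample_R2.fastq", "sample01")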

 

Given the low penetrance of incidental findings across a large number of diseases, the final step to impacting patient outcomes, which brings in unstructured data and metadata, requires parallel file systems such as Lustre and object storage technologies that can scale out to support personalized medicine use cases.

 

More details on how Intel Genomics Solutions aid this scale-out to directly impact personalized medicine in a clinical environment will follow in a future blog!

 

 

For more resources, you can find out about Intel’s role in Health and Life Sciences here, learn more about Intel in HPC at intel.com/go/hpc, or learn more about Intel’s boards and systems products at http://www.intelserveredge.com/

In April, we continued to share Mobile World Congress podcasts recorded live in Barcelona, as well as the announcement from the U.S. Department of Energy that it had selected Intel to be part of its CORAL program, in collaboration with Cray and Argonne National Laboratory, to create two new revolutionary supercomputers. We also covered interesting topics like Intel Security’s True Key™ technology and Intel’s software defined infrastructure (SDI) maturity model. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel Chip Chat:

In this archive of a livecast from Mobile World Congress, John Healy, Intel’s GM of the Software Defined Networking Division, stops by to talk about the current state of Network Functions Virtualization (NFV) adoption within the telecommunications industry. He outlines how Intel is driving the momentum of NFV deployment through initiatives like Intel Network Builders, and how embracing the open source community with projects such as the Open Platform for NFV (OPNFV) is accelerating vendors’ ability to offer solutions targeted at network function virtualization.

Paul Messina, the Director of Science at Argonne Leadership Computing Facility at Argonne National Laboratory, stops by to talk about the leading edge scientific research taking place at the Argonne National Laboratory. He announces how the Aurora system, in collaboration with Intel and Cray, will enable new waves of scientific discovery in areas like wind turbine simulation, weather prediction, and aeronautical design. Aurora will employ an integrated system design that will drive high performance computing possibilities to new heights.

Barry Bolding, VP of Marketing and Business Development at Cray, announces that Intel, Cray, and the Department of Energy are collaborating on the delivery and installation of one of the biggest supercomputers in the world. He discusses how Cray is working to help its customers tackle the most challenging supercomputing, data analytics, storage, and data management problems possible. The 180 PetaFLOPS Aurora system will help solve some of the most complex challenges that the Department of Energy faces today from material science and fluid dynamics to modeling more efficient solar cells and reactors.

Ed Goldman, CTO of the Enterprise Datacenter Group at Intel, discusses the maturity model Intel is using to help enterprises lay the groundwork for moving their data centers to SDI. Ed explains that the SDI Enterprise Maturity Model has five stages in the progression from traditional hard-wired architecture to SDI, and describes how Intel is building intelligence into hardware platforms, including security, workload acceleration, and intelligent resource orchestration, to help data centers become SDI-ready.

In this archive of a livecast from Mobile World Congress, Mark Hocking, Vice President & General Manager of Safe Identity at Intel Security, stops by to talk about the new True Key product from Intel Security, which addresses a universal pain point for computing users around the globe: passwords. Mark discusses how True Key is changing the way people log in to websites and applications by using personal biometrics and password storage, so that you can automatically and securely sign in to your digital life without having to juggle numerous passwords. To learn more visit www.truekey.com.

In this archive of a livecast from Mobile World Congress, Sandra Rivera, VP and General Manager of the Network Platforms Group at Intel, chats about new service capabilities that are solving business challenges in the telecommunications industry. She outlines how NFV has transformed the industry and highlights the work Intel is doing to enable the telecommunications ecosystem through the Intel Network Builders program, which now has over 125 community members. For more information, visit https://networkbuilders.intel.com/.

 

Intel, the Intel logo, and True Key are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

Network functions virtualization (NFV) is generally viewed as a revolutionary concept because of the positive economic impact it is having on service provider networks.  NFV takes cost out of the network by replacing proprietary hardware and software with industry standard servers running open standards-based solutions.  By deploying network applications and services built on these servers, service providers can achieve service agility for their customers.

 

NFV also ushers in another significant business process revolution: the deep involvement of service providers in the definition and development of NFV standards and solutions. I was reminded of that as I looked over the list of NFV demos that Intel is showing at the NFV World Congress, in San Jose on May 5-7.

 

In the pre-NFV days, key service providers worked very intimately with equipment vendors on the building of the telecom network. Strict adherence to industry standard specs was paramount, but innovation sometimes got stifled. The approach was well suited to a more closed technology environment.

 

Now, service providers are getting much more involved at the ground level, with significant contributions to the solutions and the underlying technology. One case in point is Telefonica, which spent a year developing software interfaces for NFV management and orchestration (MANO) and recently released them as open source under the name OpenMANO. The code strengthens the connection between the virtual infrastructure manager (VIM) and the NFV orchestrator.

 

Other service providers will be showcasing their contributions in the Intel booth at NFV World Congress. Here's a list of all of the demos that we’ll feature at the show:

 

End-to-End NFV Implementation: This demo uses MANO technology to help carriers guarantee performance during the on-boarding of new virtual network functions. The exhibit explores a simplified approach to ingesting the SDN/VNFD and performing intelligent VNF placement through an orchestration engine. Participating partners: Telefonica, Intel, Cyan, Red Hat and Brocade.

 

Carrier Grade Service Function Chain: China Telecom will showcase a centralized software-defined networking (SDN) controller with an integrated service chaining scheduler to support dynamic service chaining for data centers, the IP edge network, and the mobile network in a flexible and scalable fashion. China Telecom’s enhancements, built on the open source Data Plane Development Kit (DPDK), improve the performance of VM-to-VM communication. Participating partners: China Telecom, Intel.

 

Multi-vendor NFV for Real-Time OSS/BSS: This live demonstration shows how NFV concepts can be applied to OSS/BSS functions. With support for deep packet inspection (DPI), policy, charging and analytics, and OSS/BSS, all in an NFV implementation, this demonstration delivers increased system agility, elasticity, and greater service availability. Participating partners: Vodafone, Red Hat, Intel, Openet, Procera Networks, Amartus and Cobham Wireless.

 

Nanocell: The nanocell is a next-generation small wireless base station running on an Intel-powered blade server. Developed by the China Mobile Research Institute, the nanocell supports GSM, TD-SCDMA, and TD-LTE standards as well as WLAN (WiFi) network connections. In most applications, the nanocell will have a range of between 100m and 500m, making it ideal for deployment in enterprise, home, and high-capacity hotspot locations. Participating partners: China Mobile Research Institute, Intel.

 

Service Function Chaining: In addition to these carrier demos, Intel and Cisco will reprise one of the most popular demos from the recent Mobile World Congress: the first network service header (NSH)-based service function chaining demo. The demo presents a chance to see Intel’s new 100GbE technology and Cisco’s OpenDaylight implementation, which provides advanced user controls.

 

If you are planning to attend the NFV World Congress, stop by the Intel booth and take some time with these demonstrations. I look forward to seeing you there.
