
The Data Stack


We’ve worked with SAP HANA* for more than a decade to deliver better performance for SAP* applications running on Intel® architecture. And the results just keep getting better. The latest Intel® Xeon® processor E7 v2 family can help IT get even more insights from SAP HANA, faster. When you add VMware vSphere* to the mix, you’ll see a huge boost in efficiency without adding more servers.


Why virtualize? Data centers running mission-critical apps are pushing toward more virtualization because it can help reduce costs and labor, simplify management, and save energy and space. In response to this push, Intel, SAP, and VMware have collaborated to make a robust solution for data center virtualization with SAP HANA.


What does this mean for IT managers? Your data center can grow with more scalable memory. You get peace of mind knowing your data is protected by greater reliability. And you’ll see big gains in efficiency, even when virtualized.


Grow with scalable memory


The Intel Xeon processor E7 v2 family offers 3x more memory capacity than previous generations. This not only dramatically boosts SAP HANA performance, but it also gives you plenty of room as your data grows. The Intel Xeon processor E7 v2 family provides up to 6 terabytes of memory in four-socket servers using 64 GB dual in-line memory modules.

 

Relax, your mission-critical data is protected


We designed the Intel Xeon processor E7 v2 family to support improved reliability, availability, and serviceability (RAS) features, which means solid performance all day, every day with 99.999% uptime.[4] Intel® Run Sure Technology adds even more system uptime and increases data integrity for business-critical workloads. Whether you run SAP HANA on physical machines, virtualized, or on a private cloud, your data and mission-critical apps are in good hands.

 

Be amazed at the gains in efficiency


When Intel, SAP HANA, and VMware join forces in a virtualized environment, efficiencies abound. Data processing can be twice as fast with a PCI Express* (PCIe) solid-state drive. You can get up to 4x more I/O bandwidth, which equates to 4x more capacity for data movement.[5,6] Hyper-threading doubles the number of execution threads, increasing overall performance for complex workloads.


In summary, the Intel Xeon processor E7 v2 family unleashes the full range of SAP HANA capabilities with the simplicity and flexibility of virtualization on vSphere. Read more about this solution in our recent white paper, “Go Big or Go Home with Raw Processing Power for Virtualized SAP HANA.”


Follow #TechTim on Twitter and his growing #analytics @TimIntel community.

By Juan F. Roche

 

Oracle’s new generation of X5 Engineered Systems, announced in late January, is powered by the latest top-of-the-line Intel® Xeon® E5 v3 processors, delivering significant improvements in performance, power efficiency, virtualization, and security.

 

Oracle Engineered Systems, which are purpose-built converged systems with pre-integrated stacks of software and hardware engineered to work together, are a cost-effective, simple-to-deploy alternative to data center complexity. With Intel performance, security technologies, and flexibility co-engineered directly into the system, these integrated systems are built with business value in mind.

 

These cost-effective Oracle systems perfectly demonstrate the tight, ongoing collaboration between Oracle and Intel. For over twenty years, the two companies have worked closely together, and the relationship is built on much more than simply tuning CPUs for performance. The collaboration extends from silicon to systems, with both companies working to optimize architectures, operating systems, software, tools and designs for each other’s technologies. Intel and Oracle work together to co-engineer the entire software, middleware, and hardware stack to ensure that the Engineered Systems take maximum advantage of the power built into Intel® Xeon® processors.

 

How does this translate into business value for our customers? Here’s a real-world example. Intel worked closely with Oracle in developing the customized Intel® Xeon® E7-8895 v2 processor, creating an elastic enterprise workload SKU that allows users to vary core counts and frequencies to meet the needs of differing workloads. Because of optimizations between the Intel Xeon processors and Oracle operating system kernels and system BIOS, this single processor SKU can behave like a variety of other SKUs at runtime.

 

That means faster processing and more flexibility for addressing business computing requirements: Oracle Exalytics* workloads that would take nearly 12 hours to run on a non-optimized Intel Xeon-based platform now drop to a 6.83-hour runtime on the customized Intel Xeon platform, a speed-up of 1.72x. That’s the difference between being able to run analytical workloads overnight and having to wait until a weekend to get business-critical analytics. With on-demand analytics like this, businesses have timely, precise intelligence for real-time decision-making.

 

In addition, the Exalytics platform is flexible: not all workloads require heavy per-thread concurrency, and this elastic SKU can be tuned and balanced to vary core counts and frequency levels to meet the needs of different computing requirements.

 

Oracle Engineered Systems are also optimized to take advantage of Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), which are built into advanced Intel Xeon processors. Intel AES-NI helps boost data security by running encryption in hardware, eliminating the performance penalty usually associated with software-based encryption. Intel AES-NI speeds up the execution of encryption algorithms by as much as 300 percent, enhancing encryption performance so businesses don’t have to pay a performance overhead to keep data more secure.
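For readers who want to check whether a particular server exposes these instructions, here is a minimal sketch (not from the original post) that reads the AES-NI feature bit reported by the CPUID instruction; the leaf and bit positions used are the standard ones for CPUID leaf 1.

```c
/* aesni_check.c - report whether the CPU advertises AES-NI.
 * Build: gcc -o aesni_check aesni_check.c
 * Illustrative sketch only; CPUID leaf 1, ECX bit 25 is the AES-NI flag. */
#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for issuing the CPUID instruction */

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        fprintf(stderr, "CPUID leaf 1 not supported\n");
        return 1;
    }

    /* Bit 25 of ECX indicates AES-NI; bit 1 indicates PCLMULQDQ,
     * which is commonly used alongside AES-NI for AES-GCM. */
    printf("AES-NI:    %s\n", (ecx & (1u << 25)) ? "yes" : "no");
    printf("PCLMULQDQ: %s\n", (ecx & (1u << 1))  ? "yes" : "no");
    return 0;
}
```

Crypto libraries such as OpenSSL perform this detection themselves and transparently switch between hardware and software AES, which is why applications generally see the speed-up without code changes.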

 

To learn more about Oracle Exadata Engineered Systems powered by Intel Xeon processors, download our ebook What Will It Take to Manage Your Database Infrastructure?

By John Healy, General Manager, Software Defined Networking Division, Network Platforms Group, Intel


Mobile World Congress is upon us and there is plenty of buzz again about the progress of network functions virtualization (NFV). I’m looking forward to many new NFV demos, product announcements, and presentations on how mobile operators are solving problems using the technology.


I’m very bullish on the future of the NFV market. In this last year, the industry has successfully passed through the normative phase, in which specifications and use cases were determined, applications were developed, and proofs of concept and demos were conducted.




Now we are moving into the next phase where NFV applications move into operation in production networks.  I am excited at the progress that our partners have achieved in translating trials into deployments and the benefits that they are beginning to achieve and measure.


But at the same time, I realize that as an industry we still have significant work to do to accelerate the technology to a point where carriers can consider full deployment and scaled implementations. I believe there are two significant themes that need to be addressed in the coming year.

 

Challenge 1 - Technology Maturity


There have been plenty of successful NFV demos over the last 18 months proving the capability of virtualized services and the performance of standards-based computing platforms. Now, we need to achieve mass scale and ruggedized implementations, and for that the various building-block technologies need to be hardened and matured.


Through this work, the many virtual network functions (VNFs) will be “ruggedized” in order to provide the same service and reliability levels as today’s fixed-function counterparts. Meeting this need for “carrier-grade reliability” is the maturing that must occur.


Much of this ruggedization will happen as operators test these VNFs in practical demonstrations: ones that feature the traffic types, patterns, and volumes found in production networks. Several announcements at MWC have highlighted the deployments into live networks that mark this new phase. We are actively involved in this critical activity with our partners and their customers.


But there’s also a need for more orchestration functionality to be developed and proven so that service providers can scale their networks through the automated composition and implementation of network functions and services.


Network services orchestration (NSO) achieves the best performance through the intelligent placement of network functions on the computing platforms best suited to them. Exciting demos of NSO in practice in a multi-vendor environment are on show at MWC.


Many of our ecosystem partners are tackling the orchestration of lower-level functions such as inventory, security, faults, application settings, and other infrastructure elements following the ETSI management and orchestration (MANO) model. Others have focused on service orchestration based on models of network resources and policy definition schemes.


The open source community is also a key enabler of this maturing phase, including projects such as Nova and Neutron, which are building orchestration functionality into OpenStack. The Open Platform for NFV (OPNFV) project is focused on hardening the NFV infrastructure and improving infrastructure management, which should improve the performance predictability of NFV services.


All of these initiatives are important and must be tested through implementation into carrier networks and stressed so that operators can be confident that services will perform predictably.

I’ve seen this performance evolution take place at Intel as we tackled the challenge of consolidating multiple processing workloads on our general purpose Intel Architecture CPUs while growing performance for packet processing to enable replacement of fixed function packet processors.


In the mid-2000s, packet processing performance on Intel processors was not where we wanted it to be, so we made modifications to the microarchitecture and at the same time developed a series of acceleration libraries and algorithms that became the Data Plane Development Kit (DPDK).


After several product generations, we can now provide wire-speed packet processing performance delivering 160Gbps of layer-three forwarding on a single core. This is made possible through our innovations and through deep collaborations with our partners, a concept we have extended to the world of NFV and from which many of the announcements at MWC have originated.
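To give a feel for the programming model behind those numbers, here is a minimal, hypothetical sketch of a DPDK-style poll-mode forwarding loop, patterned after DPDK’s basic forwarding (“skeleton”) sample application. Exact function signatures vary between DPDK releases, and port and queue initialization is reduced to the bare minimum; this is an illustration of the burst-oriented, interrupt-free design rather than a production forwarder.

```c
/* fwd_sketch.c - minimal DPDK-style poll-mode forwarder (illustrative only).
 * Echoes bursts of packets received on port 0 back out of port 0.
 * Modeled on the DPDK "skeleton" example; API details differ across releases. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define RX_DESC   1024
#define TX_DESC   1024
#define NUM_MBUFS 8191
#define CACHE_SZ  250
#define BURST     32

int main(int argc, char **argv)
{
    uint16_t port = 0;                 /* assume a single bound port */
    struct rte_eth_conf conf;
    struct rte_mempool *pool;

    if (rte_eal_init(argc, argv) < 0)  /* claims hugepages, probes devices */
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS, CACHE_SZ, 0,
                                   RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    memset(&conf, 0, sizeof(conf));    /* default port configuration */
    if (rte_eth_dev_configure(port, 1, 1, &conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_DESC, rte_eth_dev_socket_id(port),
                               NULL, pool) < 0 ||
        rte_eth_tx_queue_setup(port, 0, TX_DESC, rte_eth_dev_socket_id(port),
                               NULL) < 0 ||
        rte_eth_dev_start(port) < 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    /* Poll-mode loop: no interrupts, no kernel crossings, burst amortization. */
    for (;;) {
        struct rte_mbuf *bufs[BURST];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST);
        if (nb_rx == 0)
            continue;
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        /* Free any packets the TX queue could not accept. */
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
    return 0;
}
```

The real l3fwd sample adds a longest-prefix-match route lookup inside the same loop and runs one copy of the loop per core, which is the structure behind the forwarding rates quoted above.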

 

Challenge 2 – Interoperability


Interoperability on a grand scale is what will make widespread NFV possible. That means specification, standardization, and interoperability are major requirements for this phase of NFV.


The open source dimension of NFV creates the community-driven, community-supported approach that speeds innovation, but it needs to be married to the world of specification definition and standardization, which has traditionally moved at a much slower pace: too slow for the new world that NFV enables.


This is a significant opportunity and challenge for the industry - we need to collectively find the bridge between both worlds. This is new territory for many of the parties involved and many of the projects are just starting on the path.

 

Intel’s Four Phase Approach to NFV


Intel is leading efforts to accelerate the maturity of the NFV market and we have outlined four key ways to do that.

First, we’re very active in developing and promoting open source components and standards. We are doing this by contributing engineering and management talent and our own technology to open source efforts. The goal is to ensure that standards evolve in an open and interoperable way.


Next, we have developed the Open Network Platform to integrate open source and Intel technologies into a set of server and networking reference designs that VNF developers can use to shorten their time to market.


Working with the industry is important, which is why we have developed Intel Network Builders, a very active ecosystem of ISVs, hardware vendors, operating system vendors and VNF developers. Network Builders gives these companies opportunities to work together and with Intel, and gives operators and others in the industry a place to find solutions and keep a pulse on the industry.


And lastly, we are working closely with service providers to support them in converting POCs into full deployments in their networks. It was at last year’s MWC that Telefonica announced its virtual CPE implementation, which Intel contributed to. This year there are several more such announcements, and we have many other similar projects that we’re working on now.


While these engineering challenges are significant, they are the growing pains that NFV must pass through to become a mature and tested solution. The key will be to keep openness and interoperability at the forefront and to keep the testing and development programs active so that they can scale to meet the needs of today’s carriers. If MWC is an indicator of the future, it is definitely very bright.

By Dana Nehama, Sr. Product Marketing Manager, Network Platforms Group (NPG), Intel


It's a busy time for the Intel Open Network Platform (ONP) Server team and our Intel Network Builders partners. This week at Mobile World Congress in Barcelona, there are no fewer than six SDN/NFV demos that are based on Intel ONP Server and were developed by our Intel Network Builders ecosystem partners. Back home, we are releasing Intel ONP Server release 1.3 with updates to the open source software as well as the addition of real-time Linux kernel support and 40GbE NIC support.


The Intel ONP Server is a reference architecture that brings together hardware and open source software building blocks used in SDN/NFV. It helps drive development of optimized SDN/NFV products in the telecom, cloud, and enterprise IT markets.


The MWC demos illustrate this perfectly as they all involve Intel Network Builders partners showcasing cutting-edge SDN/NFV solutions.


The ONP software stack comprises Intel- and community-developed open source software such as Fedora Linux, DPDK, Open vSwitch, OpenStack, OpenDaylight, and others. The key is that we address the integration gap across multiple open source projects and bring it all together into a single software release.


Here’s what’s in release 1.3:


  • OpenStack Juno 2014.2.2 release
  • OpenDaylight Helium.1 release
  • Open vSwitch 2.3.90 release
  • DPDK 1.7.1 release
  • Fedora 21 release
  • Real-Time Linux Kernel
  • Integration with 4x10 Gigabit Intel® Ethernet Controller XL710 (Fortville)
  • Validation with a server platform that incorporates the Intel® Xeon® Processor E5-2600 v3 product family


Developers who go to www.01.org to get the software will see the value of this bundle because it all works together. In addition, the reference architecture guide available on 01.org is a “cookbook” that provides guidelines on how to test ONP servers or build products based on Intel ONP Server software and hardware ingredients.


A first for this release is support for the Real-Time Linux Kernel, which makes ONP Server an option for new, timing-sensitive classes of applications.


Another important aspect of the new release is support for the 4x10GbE Intel Ethernet Controller XL710. This adapter delivers high performance with low power consumption. For applications like a virtualized evolved packet core (vEPC), having the data throughput of the XL710 is a significant advance.


If you are an NFV / SDN developer who wants to get to market quickly, I hope you will take a closer look at the latest release of ONP Server and consider it as a reference for your NFV/SDN development.


If you can’t make it to Barcelona to see the demos, you can find more information at: www.intel.com/ONP or at www.01.org.

By Renu Navale, Director of Intel Network Builders Program, Network Platforms Group, Intel



As a die-hard Carl Sagan fan, I love his quote: “Imagination will often carry us to worlds that never were, but without it we go nowhere.” There was a lot of imagination and strategic vision behind the beginnings of network functions virtualization (NFV) and software defined networking (SDN). Now network transformation is an unstoppable force that has encompassed an entire industry ecosystem. The need for service agility, the reduction of operational and capital expenses, and the rapid growth of the Internet of Things are driving a transformation of network infrastructure. Both telco and cloud service providers aim to accelerate delivery of new services and capabilities for consumers and businesses, improve their operational efficiencies, and use cloud computing to meet their customers’ demand for more connectivity and delivery of real-time data.

 

 


With proven server, cloud, and virtualization technologies, Intel is in an excellent position to apply these same technologies to the network infrastructure. Intel is working closely with the industry to drive this transformation by offering building blocks of standardized hardware and software, as well as server reference designs with supporting software, that address the performance, power, and security needs of the industry. Intel also actively participates in open source and open standards development, invests in building strong ecosystems, and brings a breadth of experience in enterprise and cloud computing innovation.



Execution is an integral facet of any strategy. I consider the Intel Network Builders program part of the required execution for Intel’s NFV and SDN strategy. First, what is the Intel Network Builders program? It is an Intel-led initiative to work with the larger industry ecosystem to accelerate network transformation on Intel architecture, products, and technologies. Since the inception of the Intel Network Builders program, our ecosystem of partner companies has seen tremendous growth. We now have about 130 members, including hardware and software vendors, system integrators, and equipment manufacturers. The key value proposition for members is increased visibility and market awareness, technology enabling via POCs and reference architectures using Intel products and ingredients, and increased business opportunities via various tools, workshops, and summits.



The tremendous increase in membership over this past year has resulted in the upgrade of our website and other engagement tools to meet our ecosystem partners’ needs. Most recently, we have launched a revamped member portal, where Intel Network Builders members have the opportunity to directly engage with one another, foster new business relationships, learn about upcoming events and webinars, and highlight their solutions to other community members. If you are already an Intel Network Builders ecosystem partner, you are invited to start engaging with us today, and if you are in the industry seeking resources and general news, please check out our site at networkbuilders.intel.com.

 


 


It takes a whole village to raise a child. In a similar manner, it will take the whole networking industry ecosystem to accomplish this transformation. Hence the Intel Network Builders program, which connects and collaborates with the ecosystem, is absolutely essential to delivering on the promise of NFV and SDN. I am in the midst of this amazing transformation, and there are moments, such as when writing this blog, when I am humbled to be part of the journey.



I hope to see you in Barcelona!

Network transformation is taking off like a rocket, with the SDN, NFV, and network virtualization market accounting for nearly $10 billion (USD) in 2015, according to SNS Research.(1) This momentum will take center stage this week at Mobile World Congress (MWC) 2015, including dozens of solutions and demos that spotlight Intel technology.

 

New Ways to Speed up Packet Processing


Packet processing workloads are continuously evolving and becoming more complex, as seen in progressing SDN/network-overlay standards and signature-based deep packet inspection (DPI), to name just a few examples. Delivering cost-effective solutions for these workloads requires highly flexible software and silicon ingredients. NFV solutions are all judged on how fast they can move packets on virtualized, general-purpose hardware. This is why the Data Plane Development Kit (DPDK) is seen as a critical capability, delivering packet processing performance improvements in the range of 25 to 50 times (2, 3) on Intel® processors.


Building upon the DPDK, Intel will demonstrate at MWC how equipment manufacturers can boost performance further while making NFV more reliable. One way is to greatly reduce cache thrashing by pinning L3 cache memory to high-priority applications using Intel Cache Allocation Technology. Another is to use a DPDK-based pipeline to process packets instead of distributing the load across multiple cores, which can result in bottlenecks if the flows cannot be uniformly distributed.

 

Intel Cache Allocation Technology


It’s no secret that virtualization inherently introduces overheads that lead to some level of application performance degradation compared to a non-virtualized environment. Most are aware of the more obvious speed bumps, like virtual machine (VM) entries/exits and memory address translations.


A lesser-known performance degrader, called cache contention, is caused by multiple VMs competing for the same cache space. When the hypervisor switches context to a VM that is a cache hog, cache entries for the other VMs get evicted, only to be reloaded when those VMs run again. This can result in an endless cycle of cache reloads that can cut performance in half, as shown in the figure. (2, 3)

 

 

[Figure: DPDK packet forwarding throughput under cache contention, with and without Intel Cache Allocation Technology]

 

 

On the left side, the guest VM implementing a three-stage packet processing pipeline (classify, L3 forward, and traffic shaper) has the L3 cache to itself, so it can forward packets at 11 Mpps. The middle pane introduces an aggressor VM that consumes more than half the cache, and the throughput of the guest VM drops to 4 Mpps. The right side implements Intel Cache Allocation Technology, which pins the majority of the cache to the guest VM, thus restoring the packet forwarding throughput to 11 Mpps. (2, 3)
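As a concrete illustration of what “pinning” the cache looks like, the hypothetical sketch below uses the Linux resctrl interface (added to the kernel after this post was written; at the time, Intel’s pqos utility from the intel-cmt-cat package exposed the same controls) to reserve a slice of the L3 cache for a high-priority process. The group name, capacity bitmask, and PID are placeholders and depend on how many cache ways the platform reports.

```c
/* cat_pin_sketch.c - pin part of the L3 cache to one process via Linux resctrl.
 * Illustrative sketch: assumes resctrl is mounted at /sys/fs/resctrl
 * (mount -t resctrl resctrl /sys/fs/resctrl), a CAT-capable CPU, and root.
 * The bitmask 0xff0 and the default PID below are placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <errno.h>

static int write_str(const char *path, const char *text)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    int ok = (fputs(text, f) >= 0);
    if (fclose(f) != 0 || !ok) {
        fprintf(stderr, "write to %s failed\n", path);
        return -1;
    }
    return 0;
}

int main(int argc, char **argv)
{
    const char *pid = (argc > 1) ? argv[1] : "1234";  /* PID of the guest/VNF */

    /* Create a resource group for the high-priority workload. */
    if (mkdir("/sys/fs/resctrl/vnf_high", 0755) != 0 && errno != EEXIST) {
        perror("mkdir");
        return EXIT_FAILURE;
    }

    /* Reserve 8 of 12 cache ways (bitmask 0xff0) on socket 0 for this group;
     * other tasks keep only the remaining ways and can no longer evict it. */
    if (write_str("/sys/fs/resctrl/vnf_high/schemata", "L3:0=ff0\n") != 0)
        return EXIT_FAILURE;

    /* Move the packet-processing process into the group. */
    if (write_str("/sys/fs/resctrl/vnf_high/tasks", pid) != 0)
        return EXIT_FAILURE;

    printf("Pinned L3 ways 0xff0 to PID %s\n", pid);
    return EXIT_SUCCESS;
}
```

Conceptually this is the same operation the demo performs for the guest VM: once the aggressor can no longer evict the pipeline’s working set, throughput returns to the uncontended level.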

 

IP Pipeline Using DPDK


There are two common models for processing packets on multi-core platforms:

 

  • Run-to-completion: A distributor divides incoming traffic flows among multiple processor cores, each of which processes its assigned flows to completion.
  • Pipeline: All traffic is processed by a pipeline constructed of several processor cores, each performing a different packet processing function in series. (A minimal sketch of both models follows this list.)
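
The structural difference between the two models can be sketched in a few lines of C. In this hypothetical sketch each “packet” is just a struct and each stage is a plain function; in a real deployment each pipeline stage would run on its own core and hand packets to the next stage through a lock-free ring such as DPDK’s rte_ring.

```c
/* models_sketch.c - run-to-completion vs. pipeline, reduced to plain C.
 * Purely illustrative; real implementations spread these loops across cores. */
#include <stdio.h>

#define N_PKTS 8

struct pkt { int id; int cls; int out_port; int shaped; };

/* The three stages used in the MWC demo pipeline. */
static void classify(struct pkt *p)   { p->cls = p->id % 2; }
static void l3_forward(struct pkt *p) { p->out_port = p->cls ? 1 : 0; }
static void shape(struct pkt *p)      { p->shaped = 1; }

/* Run-to-completion: one core takes a share of the flows and applies
 * every stage to each packet before touching the next one. */
static void run_to_completion(struct pkt *pkts, int n)
{
    for (int i = 0; i < n; i++) {
        classify(&pkts[i]);
        l3_forward(&pkts[i]);
        shape(&pkts[i]);
    }
}

/* Pipeline: each stage processes the whole batch and hands it to the next
 * stage; on real hardware each loop below runs on a different core and the
 * hand-off between stages uses a lock-free ring. */
static void pipeline(struct pkt *pkts, int n)
{
    for (int i = 0; i < n; i++) classify(&pkts[i]);    /* core 0 */
    for (int i = 0; i < n; i++) l3_forward(&pkts[i]);  /* core 1 */
    for (int i = 0; i < n; i++) shape(&pkts[i]);       /* core 2 */
}

int main(void)
{
    struct pkt a[N_PKTS], b[N_PKTS];
    for (int i = 0; i < N_PKTS; i++) { a[i].id = i; b[i].id = i; }

    run_to_completion(a, N_PKTS);
    pipeline(b, N_PKTS);

    printf("packet 3 -> port %d (run-to-completion), port %d (pipeline)\n",
           a[3].out_port, b[3].out_port);
    return 0;
}
```

This also shows the trade-off mentioned above: run-to-completion relies on flows hashing evenly across cores, while the pipeline keeps each core’s working set small and predictable regardless of the flow mix.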


At MWC 2015, Intel will have a live demonstration of high-performance NFV running on an industry-standard, high-volume server, where copies of packet processing pipelines are implemented in multiple VMs and the performance of these VMs is governed using state-of-the-art Cache Monitoring and Allocation Technologies.


Want to know more? Get more information on Intel in Packet Processing.

 

Are you at MWC 2015?


Check out the high-performance NFV demo at the Intel Booth and see the new Intel technologies developed to drive even higher levels of performance in SDN and NFV! Visit us at MWC 2015 - App Planet, hall 8.1, stand #8.1E41.

 

 

 

 

1 Source: PR Newswire, “The SDN, NFV & Network Virtualization Bible: 2015 - 2020 - Opportunities, Challenges, Strategies & Forecasts.” Nov 27, 2014, http://www.prnewswire.com/news-releases/the-sdn-nfv--network-virtualization-bible-2015--2020--opportunities-challenges-strategies--forecasts-300002078.html.

 

2 Performance estimates are based on L2/L3 packet forwarding measurements.

 

3 Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel® products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations.

The excitement this week in Barcelona would make you think that Messi is in town for a match, but the buzz is all about computing and the networks that feed the billions of devices that dot our globe. Mobile World Congress is in full swing, with the who’s who of the tech industry sharing their latest wares and meeting to discuss the next generation of innovation.


I cannot overstate how struck I’ve been by the rate of telco equipment industry innovation at MWC. It was only two years ago that I attended MWC and learned about the new NFV specifications moving through ETSI, and today I was fortunate to hear from network leaders Openet, Procera Networks, and Amartus about real telco billing solutions based on NFV-powered service delivery. This solution is a microcosm of the networking landscape today, as groups of companies work together to deliver application, orchestration, and infrastructure solutions that solve specific business challenges, in this case innovating billing solutions that historically were designed for voice-only accounts. With new NFV-based solutions, telco operators will be better able to accurately bill for different types of data consumption along with voice usage and more rapidly deploy solutions to market. Martin Morgan, VP of Marketing at Openet, stated that initial solutions are already being deployed by select customers with customer bases ranging from 50K to 50M.


Sandra Rivera, Intel’s VP and GM of the Network Platforms Group, called out this type of ecosystem collaboration as being at the core of Intel’s heritage. Her group’s Network Builders program has grown from 30 to 125 vendors in the 18 months since its inception and has begun adding telco operators such as Telefonica and Korea Telecom to its member rolls. Sandra explained that collaboration between providers and operators will help accelerate adoption of NFV solutions in the marketplace, as providers can prioritize the use cases that offer the best opportunity for financial reward and operators can more quickly evaluate solutions coming to market. She highlighted shepherding this broad collaboration as critical to Intel’s efforts in driving NFV adoption in 2015, and given the momentum behind the effort there’s little reason to expect anything other than continued growth in POC results and deployments this year. To keep track of the latest developments in network ecosystem innovation, visit the Intel Network Builders site.

A blog about MWC would not be complete without mention of mobile device innovation, and one topic that has risen to the surface once again this year is mobile security. I was fortunate to chat with executives from the Intel Security group to get the latest on Intel’s security solutions. Mark Hocking, VP & GM of Safe Identity and Security at Intel Security, discussed Intel’s latest innovation, TrueKey. This cool technology provides a central resource for password management, integrating facial recognition, biometrics, encryption technologies, and physical password entry to make managing passwords simpler and more secure for the user. I have to admit that as a person who has invented at least 50 ways to describe my dog to form the seemingly endless permutations of passwords required to navigate today’s web, I was delighted to learn that soon simply smiling at my PC would provide a baseline of secure engagement with popular sites. When Mark explained that TrueKey could add levels of security based on my requirements, I felt even better about the technology.


With the growth in wearable devices, the landscape of mobile security is evolving. Intel’s Chief Consumer Security Evangelist, Gary Davis, caught up with me to share Intel’s strategy for addressing this new area of consumer vulnerability. With over 780 million wearables expected to be live by 2019, users will increasingly rely on mobile devices such as smartphones and tablets as aggregators of personal data. Today’s reality is far from pretty in terms of secure use of mobile devices, with fewer than 35% of mobile users protecting their phone with a PIN and even fewer employing mobile security or encryption technology for their data. Intel is working on this challenge, Gary explained, by bringing security technology to mobile devices through integration in silicon as well as working with device manufacturers to design and deliver security-enabled solutions to market.


Come back tomorrow for my final update from Barcelona, and please listen to my interviews with these execs and more.

The world of mobile has descended on Barcelona, with an expected 90K+ executives assembled at MWC 2015. Many conversations here focus on the latest mobile gadgets and the advent of 5G, the 5th-generation mobile network expected to reach final specifications in 2019 in advance of the 2020 Tokyo Olympics. What 5G will bring to our devices is still not fully understood, but what is known today is that users’ insatiable demand for data-rich experiences continues its ascent. Today, the average mobile user consumes 2GB of data monthly, a doubling of usage within the last 12 months alone, and this data usage is pushing back-end networks, from the network core to the edge, to innovate at an unprecedented pace. With Netflix already representing over a third of downstream internet traffic in the US, 2015 will be the first year in which we stream more content from the Internet than we consume from broadcast television. Telco providers are facing this scaling user demand, new network traffic driven by the Internet of Things, and new competition as the arenas of telecom and cloud services become further blurred. The pressure to innovate the core network to keep pace with demand has never been more acute.


Networking equipment vendors used to gather at the edges of MWC, apart from their mobile device and provider customers, but since the industry began buzzing with the concepts of software defined networking (SDN) and network functions virtualization (NFV) a few years ago, innovation in core networking equipment has taken its rightful place at center stage. In focus is virtualization of the telecommunications network, enabling telco providers to deliver new service capabilities far more nimbly than they can with traditional telecom solutions. Where 2014’s MWC event was focused on the first proof-of-concept tests of NFV solutions, 2015 is focused on delivery of initial products for implementation across the network. Today, I spoke to Steve Shaw, Director of Service Provider Marketing at Juniper Networks, about the company’s vMX 3D Universal Edge Routers, and he pointed out that NFV enables telcos to deploy routing technology in places that sophisticated routing historically could not reach. This embeds greater intelligence across the network and gives the provider more insight into real-time network traffic data, helping to improve service capability. In talking to Steve, I also heard what would become a continual refrain in all of my conversations today: a demonstrated commitment to broad industry collaboration to bring these solutions to market. Steve noted the critical importance of both east-west and north-south interoperability and the strategic role that orchestration software plays in connecting solutions from across the industry.


NFV is also driving broad industry innovation in virtual base stations (vBS), the devices that connect mobile users to the network edge. By virtualizing a base station, providers are better able to address frequency issues and improve network performance and coverage for users while providing infrastructure on efficient, Intel architecture-based platforms. While vBS solutions were initially targeted at Cloud Radio Access Network (C-RAN) environments, many vendors are looking at in-building and distributed antenna system (DAS) solutions as initial beachhead markets. Imagine the rich media experience of the 21st-century stadium environment, today often limited by access overload from too many mobile users in a confined physical space. vBS promises to address this issue, ensuring that coverage can scale more efficiently based on real-time usage demand. Here, too, broad industry innovation is present. I spoke today to Eran Bello, VP of Products and Marketing for Israeli startup Asocs, about the company’s vBS solutions, and he highlighted the acute interest in urban deployments for Asocs products and the importance of broad industry collaboration embracing open standards to ensure delivery to market.


And speaking of blurred lines, Ericsson shook up the tech industry today with the announcement of new NFV-fueled platforms to help telcos take their infrastructure hyperscale. Based on Intel’s Rack Scale Architecture and integrated open orchestration capabilities, Ericsson’s offering will help telco operators utilize software-defined infrastructure to compete with their cloud provider counterparts. I spoke with Howard Wu, Head of Product Line for Cloud Hardware and Infrastructure at Ericsson, and he stated it was the company’s 139-year history of building relationships with customers that will make this venture a success, given that technology innovation takes partnership and trust to result in deployment.


Make sure to check out all my interviews from MWC Day 1, and check back tomorrow for more insights from Barcelona.

By Caroline Chan, Wireless access segment manager, Network Platform Group, Intel



Slated for field trials this year and production next year, virtualized radio access network (vRAN) technology could be key to delivering better network performance, lower TCO, and additional revenue streams. Demonstrating this innovative RAN for a new era of mobility, Alcatel-Lucent, China Mobile, Intel, and Telefónica* are showcasing four usage models at Mobile World Congress (MWC) 2015. Come see this TD-LTE live demo.

 

What’s vRAN?

 

The vRAN moves baseband processing from cell sites to a pool of virtualized servers running in a data center, with the goal of making the RAN more open and flexible, while supporting both new and existing functions. Also called Cloud-RAN (C-RAN), vRAN enables service providers to dynamically scale capacity and more easily deploy value-added mobile services at the network edge to generate incremental revenue and improve the user experience. Following the ETSI network functions virtualization (NFV) model, today’s vRAN for LTE is the launching pad to 5G, where compute + communication is one of the hallmark features.

 

Key Benefits

 

Centralized baseband units (BBUs) and network resources enable:

  • Better network performance

    • Network resources that are easier to scale and load balance, and that improve interference management, particularly for heterogeneous networks.

 

  • Lower TCO

    • Cell sites that cost less due to reduced complexity, and simpler network operations (upgrades/repair), thanks to easily accessible, shared platforms.
  • Differentiation and new revenue streams

    • New service opportunities that could be more RAN-aware (location-based caching) and generate additional revenue.

What you’ll see at MWC’15

 

Our demonstration is based on an evolved platform with advanced capabilities that address two application domains for vRAN:

 

vRAN

 

  1. Dynamically scale BBU capacity. With mobility on the rise, it becomes increasingly difficult to project demand, leading many service providers to over-provision baseband processing at cell sites. Alternatively, the vRAN increases or decreases BBU capacity by creating or destroying virtual machines (VMs) on the fly.

  2. Reduce outage impact. If a cell site's baseband processing fails, service will be interrupted until a field service engineer is dispatched on-site to fix the equipment. Avoiding a truck roll, the vRAN implements local failover, through which baseband processing is migrated to another server without dropping the call.

Content/Application Delivery Optimization

 

  1. Video streaming (consumer use case). Traditional cell sites don’t have the capability to run a variety of applications. The vRAN, on the other hand, is designed to host different virtualized functions and applications. Our demo shows how the vRAN can perform content delivery network (CDN) functions at the network edge, thereby pushing video content at a higher bit rate and improving the user experience over cloud-based CDNs.

  2. Video conferencing (enterprise use case). The signals for mobile users on a conference call must normally go through the whole operator network, creating delay and consuming bandwidth. But since the vRAN can run in the same data center as a video conference application, signals just pass through that vRAN node when the attendees are within the same serving area.

Come Visit Us!

 

Let us show you our live demo and answer your questions. Visit us at MWC 2015 - Hall 3, stand # 3D30.

In February, we finished archiving Chip Chat episodes from the OpenStack Summit and moved on to a few hot topics in the data center: graphics, big data analytics, and non-volatile memory technologies. If you have a topic you’d like to see covered in an upcoming podcast, feel free to leave a comment on this post!

 

Intel® Chip Chat:

  • Accelerating OpenStack Adoption in the Data Center – Intel® Chip Chat episode 366: In this archive of a livecast from the OpenStack Summit, David Brown, Director of Data Center Software Planning at Intel, stops by to talk about the current explosion of OpenStack adoption within telecommunication and enterprise industries, as well as the expectations for the future of OpenStack development and deployment. David also highlights the Win the Enterprise effort that Intel recently initiated which facilitates the collaboration of 75 different organizations working to drive adoption of OpenStack in the enterprise industry. For more information, visit https://software.intel.com/en-us/articles/open-source-openstack.
  • Driving Next Gen Data Centers with Intel® Cache Acceleration Software – Intel® Chip Chat episode 367: Jake Smith, the Director of Strategic Planning for the NSG Storage and Software Division at Intel, discusses Intel® Cache Acceleration Software (CAS) and how it is accelerating the next generation of data centers. He illustrates how performance can be greatly increased for I/O bound and read intensive applications when CAS is combined with Intel® Data Center Family SSDs. Jake explains how utilizing Intel® Cache Acceleration Software can even enable a whole new set of application environments in the data center through tiered storage, tiered memory, cold storage SSDs, and hybrid environments. To learn more, visit www.intel.com and search for Intel® Cache Acceleration Software.
  • Unlocking Big Data with Open Source Solutions – Intel® Chip Chat episode 368: Ziya Ma, Director of Big Data Technologies at Intel, stops by to talk about how open source solutions are enabling enterprises to take advantage of the new concepts coming out of big data. She highlights how Intel is a leading contributor within the overall open source community and is working to accelerate the delivery of different vertical analytics solutions. Ziya also illustrates how Intel® Architecture (IA) based big data solutions provide some of the most complete and easiest big data experiences available.
  • Integrated Graphics in the Data Center – Intel® Chip Chat episode 369: Jim Blakley, General Manager of Visual Cloud Computing at Intel, chats about the benefit of having integrated graphics in the data center. He highlights how online gaming, high definition video processing, and visual understanding are all applications that use graphics-based technologies and are putting increasing demand for processing and acceleration within the data center. Jim discusses technologies like Intel® Quick Sync Video and Intel® Iris™ Pro graphics and how the industry is rapidly moving toward the adoption of these innovative data center graphics processing solutions.

By Deirdré Straughan, Technology Adoption Strategist, Business Unit Cloud & IP, Ericsson

 

To keep up with today’s fast-changing workloads and storage demands, datacenters need distributed computing environments that can scale rapidly and seamlessly, yet remain cost-effective. This is known as hyperscale computing, and it is currently happening within a few giants (Amazon, Facebook, Google) who design their own systems, from data centers to cooling and electrical systems to hardware, firmware, software, and operational methodologies, in order to achieve economics that drive down capex and opex, enabling them to be profitable providers of cloud and other massive-scale online services.

 

All businesses are becoming software companies, and we all need, but often can’t afford, this kind of “webscale information and communications technology” (webscale ICT). Most companies don’t have the in-house resources to design and manufacture bespoke systems as Amazon, Facebook, and Google are doing. Legacy IT vendors, content to maintain their current margins as long as possible, have not stepped up to fill this gap.

 

At the same time, we are all beginning to recognize the limitations in data security and governance that have so far deterred 74% of companies* from moving critical operations into the cloud. With daily news of data breaches across all sectors, and rapidly changing global laws on privacy and customer data, it is clear that traditional IT architectures and approaches, even when used strictly in-house, have become inadequate for today’s security needs.


Putting the C into ICT

 

Ericsson, which has been providing telecommunications equipment and services since 1876, approaches this problem from a different angle. We bring long, global experience in building and maintaining the real-time, reliable, predictable, and secure communications network infrastructure that our operator customers – and their billions of subscribers in the remotest corners of the globe – demand and rely upon.

 

We set out to analyze from first principles what it actually means to run the world’s most efficient data centers, and how their practices can be applied in every datacenter and telco central office. We have recognized a cycle in the industrialization of IT, a continuous loop of:

 

•   Standardization of hardware, software, operational methodologies, and economic strategy.

•   Combination and consolidation to drive highest possible occupancy, utilization, and density.

•   Abstraction, for complete programmability of all functionality and capabilities.

•   Automation of anything that is done more than three times.

•   Governance of performance, scalability, quality, economics, compliance, and security.

 

How is this to be achieved for hyperscale computing?


Hardware Standardization, Combination, and Consolidation

 

We first need an off-the-shelf system that can be designed, purchased, and managed in a completely customized fashion, able to integrate with legacy hardware and fit into existing data centers, yet capable of evolving rapidly as needs and workloads change: a software-defined, workload-optimized infrastructure, architected for hyperscale efficiency.

This is made possible by Intel’s Rack Scale architecture, which introduces features such as hardware disaggregation, silicon photonics, and software that pools disaggregated hardware over a rack scale fabric for higher utilization and performance.

 

Monitoring and lifecycle management software provide full awareness of every detail of hardware infrastructure and workloads – the knowledge needed to achieve new levels of capex and opex savings.


Abstraction

 

Another way to use hardware efficiently is to abstract formerly hard-wired features into software. Ericsson telco customers are already familiar with SDN (software-defined networking) and NFV (network functions virtualization), technologies that are enabling new efficiencies in telco systems. Software itself can be further abstracted and modularized via APIs.


Governance

 

Compelling economics, efficiencies, and ease-of-use, however, will not be enough in today’s increasingly insecure yet regulated world of data. The requirements for true data security and compliance go well beyond today’s RBAC, public-key encryption, and so on. On the front end, systems must set and enforce policy as software is deployed. Then, once data is moving through a system, its integrity must be independently verifiable wherever it goes, whenever anyone touches it, throughout its lifetime.


Conclusion

 

In the last 200 years, the telecommunications industry has brought the power of communication to an ever-larger number of the world’s peoples, at ever lower cost, resulting in unimaginable technical and social changes. The next step is to similarly democratize data compute and storage, bringing the power of IT to everyone, while maintaining the security and reliability that we expect from our telecommunications systems.

 

The Ericsson HDS 8000 hardware system announced today is a first step – a big step! – taken together with Intel, towards webscale ICT: massive-scale systems, reliable and secure, available to all. We don’t yet know what changes this will enable in the world – but it’s going to be fun finding out!

 

*Cloud Connect and Everest Group “Enterprise Cloud Adoption Survey 2014,” page 6. Also see: Cloud Connect and Everest Group “Enterprise Cloud Adoption Survey 2013”

Deirdré Straughan, Technology Adoption Strategist for Ericsson, has been communicating online since 1982 and has worked in technology nearly as long. She operates at the interfaces between companies and customers, technologists and non-technologists, marketers and engineers, and anywhere else that people need help communicating with each other about technology. Learn more at beginningwithi.com.

When Bob Metcalfe proposed Ethernet to his managers at Xerox PARC in the early 1970s, he unwittingly started a technology phenomenon that is still growing and evolving today.

 

Ethernet has become the dominant connection technology for most market segments such as enterprise, telecom, and public cloud. In fact, it’s everywhere—you see it in automobiles, trains, beverage dispensers, DVD rental machines, and so on.

 

The technology is more than 40 years old, and while few technologies are still alive after that period of time, Ethernet is still going strong.

 

From a business perspective, the Ethernet market hit an all-time high in revenues and in port shipments in Q3 2014, and is poised to grow to more than $25 billion in sales by 2019, according to industry analysts Dell’Oro Group.

 

The new industry push toward software-defined networks (SDN) and the evolving deployments of smarter endpoints with the Internet of Things (IoT) are driving requirements for the development of even higher-speed Ethernet standards as well as a new ‘mid-speed’ development that makes Ethernet more ubiquitous and useful than ever.

 

Ethernet’s Latest Evolution

 

Several industry consortia have recently formed to develop new standards to bridge the gaps in Ethernet data rates as newer high-speed networking protocols are defined. For example, 2.5GbE and 5GbE are proposals to provide speeds between 1GbE and 10GbE, and 25GbE and 50GbE are intermediate speeds to support 100GbE switches.

 

Intel is the technology and market leader in 10GBASE-T network connectivity. We’ve invested heavily in the technology and are now seeing significant market growth and customer adoption in datacenter markets because the performance is very good over Cat7e copper cabling and the cost is dramatically lower than fiber-optic cable alternatives.

 

But, there’s a growing demand for higher-speed Ethernet over Cat5e cabling, which is the most widely installed cabling in Fast Ethernet and Gigabit Ethernet access networks.

 

We spoke to one customer recently who told us about high-performance, network-connected microscopes that need more than a 1GbE pipe, and yet rewiring to support 10GbE is extremely cost prohibitive.

 

And we’ve also spoken with integrators that support hospitals who have shown us how they can easily fill a 1GbE pipe when transmitting video images from diagnostic machines. But many hospitals are older buildings, similar to universities, where rewiring is even more expensive, making adoption of faster connectivity ‘realistically unachievable’ today.

 

Much more applicable to IT managers is using 2.5GbE or 5GbE to connect new 802.11ac Wave 2 Wi-Fi access points that are coming to market later this year.

 

Wave 2 supports speeds exceeding 3Gb/s, which means these access points need faster uplinks to the network. But in many cases moving to 10GbE means installing new Cat 7a network cabling, and that can mean $300 to $1,000 in additional costs per connection on top of the hardware allocation.

 

Customers are wary of the costs of rewiring their existing buildings, and so they get very excited to hear that 2.5GbE and 5GbE standards will work over Cat 5e copper cabling—making adopting this new technology much easier.  With this technology, higher speed networking between bandwidth-constrained endpoints can now become realistically achievable.

 

Intel is playing an active role in the NBASE-T Alliance, the group that is coordinating the technology development and the hand-off of a specification to the IEEE standards process. As the market leader in 10GbE, we can help get these solutions to market quickly.

 

Once the technology specifications are set, expect products from Intel and others that auto-sense all of the connection speeds from 1GbE to 10GbE.

 

Ethernet has found a way to change and grow to meet market needs even when faced with more elegant or higher speed competitive technologies. Thanks to the work of the NBASE-T Alliance and other groups working to make Ethernet better, I predict Ethernet will become more useful and continue its ubiquity in the networking market, maybe even for another 40 years.

I’m excited to be a part of several very pioneering NFV technology demonstrations on the Intel stand (#3D30) at Mobile World Congress 2015, which started this week in Barcelona.

 

The six demos highlight the incredible innovation and value that comes from combining the Intel Open Network Platform server reference design with the technology and talent of Intel Network Builders companies. There are three first-time demos of NFV technology.

 

In some cases, the innovation stems from a different approach to the same challenge. Three of the demos, for example, will focus on NFV orchestration, but they each present a different type of solution.

 

Telefonica designed a purpose-built orchestration solution (working with Brocade and Cyan) to solve its specific needs, while Oracle and HP are showing commercial solutions developed for any service provider.

 

Two demos, one from China Telecom and one from HP, are focused on new ways to deliver high performance service chaining.

In addition, Dell will show off the flexibility, value and performance it can deliver for mobile broadband.

 

Here’s a brief description of what each collaborator will be demonstrating:

 

Telefonica/Cyan/Brocade: Last year at MWC 2014, Telefonica showed its NFV leadership with the very first NFV-based virtual CPE implementation. This year it is reinforcing its NFV industry leadership with an orchestration demonstration. The service provider selected Cyan to provide key orchestration technologies and Brocade to provide the industry-leading virtual router. The demo shows an NFV orchestrator that is one of the first to make intelligent workload assignments using VNF descriptors; when orchestrators make such intelligent assignments, CSPs can conform to the service level agreements of NFV-based services.

 

Oracle: This demo provides a sneak peek at Oracle’s new NFV platform that features Enhanced Platform Awareness (EPA) technology. EPA is a new feature in the next release of OpenStack that gives the orchestration software the ability to determine the capabilities and capacity available in each node in a server pool. The orchestrator then uses that information to manage the workloads on those servers. For example, EPA can identify under-loaded servers and move a workload to such a machine. This is an important tool for managing virtualized cloud servers in a service provider environment. The demo can be seen at the Oracle booth.

 

Hewlett Packard: HP will be demonstrating its OpenNFV ecosystem, which it has combined with its Helion enterprise cloud server platform to develop NFV solutions. For carrier-grade deployments, Wind River offers a high-reliability, high-availability option. The demonstration, based on the Intel ONP architecture, will show service providers how they can use this platform to evaluate new and emerging virtual network functions for their own trials and then for deployment.

 

China Telecom: From this service provider we get another technology sneak peek in a service-chaining demo. Using the latest Data Plane Development Kit enhancements contributed to Open vSwitch, China Telecom has improved the performance of VM-to-VM communication. The DPDK packet processing performance enhancements are available on Intel’s open source portal, 01.org, and will appear in a future release of Open vSwitch.

 

Cisco: Here’s a tremendous sneak peek: Intel and Cisco are demonstrating Network Service Header (NSH) support for service acceleration in NFV, using Intel’s new 100Gb and 40Gb Ethernet offerings in standard servers. This is the first Intel 100GbE demonstration and the first to use NSH technology for service chaining on Cisco UCS.

 

Dell:  Operators are looking to deliver differentiated services. One such option is Dell’s NFV Platform leveraging the Intel ONP architecture for an optimized mobile broadband solution. This integrated offering helps operators deliver seamless mobile broadband coverage even in sparse cellular coverage areas by using a multitude of cellular and WiFi providers in that specific area.

 

The NFV transformation is moving at a breakneck pace. It’s easy to see why when one looks at the amazing solutions that will be on display during Mobile World Congress. If you are going to the show, I hope you will stop by and take a look.

By Mike Bursell, Architect and Strategic Planner for the Software Defined Network Division at Intel and the chair of the ETSI Security Workgroup.

 

Telecom operators are getting very excited about network function virtualization (NFV). The basic premise is that operators get to leverage the virtualization technology created and perfected in the cloud to reduce their own CAPEX and OPEX. So great has been the enthusiasm, in fact, that NFV has crossed over and is now being used to also describe non-telecom deployments of network functions such as firewalls and routers.

 

A great deal of work and research is going on in areas where a telecom operator’s needs differ from those of other service providers. One such area is security. The ETSI NFV workgroup is a forum where operators, telecom equipment manufacturers (TEMs), and other vendors have been meeting over the past two years to drive a consensus of understanding around the required NFV architecture and infrastructure. Within ETSI NFV, the “SEC” (Security) working group, of which I am the chair, is focusing on various NFV security architectural challenges. So far, the working group has published two documents:

 

 

The various issues that they address are worth discussing in more depth, and I plan to write some separate pieces in the future, but they can all be categorized as follows:

 

  • Host security
  • Infrastructure security
  • Virtual network function (VNF)/tenant security
  • Trust management
  • Regulatory concerns

 

Let’s cover those briefly, one by one.

 

Host security

For NFV, the host is the hypervisor host (an older term is the VMM, or Virtual Machine Manager), which runs the virtual machines. In the future, hypervisor solutions are unlikely to be the only way of providing virtualization, but the types of security issues they raise are likely to be replicated with other technologies. There are two sub-categories: multi-tenant isolation and host compromise.

 

Infrastructure

The host is not the only component of the NFV infrastructure. There may be routers, switches, storage, and other elements that need to be considered, and sometimes the infrastructure requirements, including the host and any virtual switches, local storage, and so on, should be included as part of the overall picture.

 

VNF/tenant security

This category covers the VNFs and how they are protected from external threats, including from the operator and from each other in a multi-tenant environment.

 

Trust management

Whenever a new component is incorporated into an NFV deployment, issues of how much – or whether – it should trust the other components arise.  Sometimes trust is simple, and does not need complex processes and technologies, but a number of the use cases which operators are interested in may require significant and complex trust relationships to be created, managed, and destroyed.

 

Regulatory concerns

In most markets, governments place regulatory constraints on telecom operators before they are allowed to offer services. These concerns may have security implications.

 

Although the ETSI NFV workgroup didn’t set out to specifically focus on problem areas, these categories have turned out to be useful for generating possible concerns which need to be considered.  In future blogs, I will consider these questions and a number of the possible answers.

By Tim Allen and Prabha Ganapathy

 

Strata + Hadoop World is coming right up, but for hackers dedicated to humanitarian causes, there’s another big data-related event that’s even more imminent.

 

AtrocityWatch Hackathon for Humanity

 

In support of AtrocityWatch, an organization that works to provide early warning of crimes against humanity through crowdsourcing and big data, Intel, Cloudera, Viral Heat, Amazon AWS, and O’Reilly Media are sponsoring a hackathon to apply data science to the cause of preventing atrocities. The event, AtrocityWatch for Humanity, will focus on using big data and mobile technologies to build a geo-fencing app based on a “sentiment” API that monitors social media channels to identify and track potential human rights abuses. The app would help the world spot atrocities before they occur by analyzing social media, and raise awareness when atrocities do happen. The AtrocityWatch hackathon will be held 5pm to midnight on Feb. 12, 2015, at Cloudera’s offices, 1001 Page Mill Road, Palo Alto, CA. Registration is free; just bring your laptop. Click here for more information, and be sure to follow and tweet with the #AtrocityWatch hashtag to join the hackathon conversation online.

 

Strata + Hadoop World – Make Data Work

 

Just down the road in San Jose, Strata + Hadoop World, the world’s largest big data and data science conference, will take place Feb 17-20. It’s where leading developers, strategists, analysts and business decision-makers gather to discuss emerging big data techniques and technologies, and where cutting-edge science and new business fundamentals intersect. As an innovator in big data platforms, Intel has queued up a full program of keynote speakers, sponsored sessions, booth demonstrations and other events to showcase its latest advances in big data technologies.

 

As the focus of big data computing broadens beyond the data center to storage, networking, the network’s edge, and the cloud, there’s more and more strain placed on a business’s underlying computing infrastructure. Delivering high-performance big data solutions requires a flexible, distributed, and open computing environment, and Intel is leading innovation in key platform technologies, such as advances in software-defined infrastructure that can optimize your platform for data-intensive workloads.

 

Here’s your chance to learn more about Intel’s approach to big data platform innovation. Attend the keynote address Intel and the Role of Open Source in Delivering on the Promise of Big Data by Michael Greene (@DadGreene), VP and general manager for Intel Software and Services, System Technologies and Optimizations Group. This presentation, held on Friday, Feb. 20 from 9am-10am in the Keynote Hall, discusses Intel’s vision for a horizontal, reusable and extensible architectural framework for big data.

 

For more insights into big data architectures, plan to attend From Domain-Specific Solutions to an Open Platform Architecture for Big Data Analytics Based on Hadoop and Spark, a tech session presented by Vin Sharma (@ciphr), Big Data Analytics Strategist at Intel, and Jason Dai, Intel’s Chief Architect of Big Data Technologies, on Thursday, Feb. 19, 11:30am-12:10pm in room 230 B.

 

Intel experts are also taking part in the following presentations:

 

 

Stop by the Intel booth #415 to say hello and to attend one of the ongoing presentations from Intel and its partners at our in-booth theater. We’re looking forward to seeing you, or tweet us at #Intel #StrataHadoop!

 

Follow Intel’s big data team at @PrabhaGana and keep up with Intel advances in analytics at @TimIntel and #TechTim.  

 
