
The Data Stack


By James Hsu, Director of Technical Marketing at Citrix

 

One of the great experiences in our industry is to see products from different vendors—hardware and software—come together to solve real customer problems. That's what's been happening with Citrix and Intel for the last two years as we worked together to apply Intel Graphics Virtualization Technology (Intel GVT) to the Citrix XenServer virtualization platform. The result of that effort is Citrix XenServer 7.0, which we are announcing at Citrix Synergy 2016 in Las Vegas. It's the first commercial hypervisor product to leverage Intel GVT-g, Intel's virtual graphics processing unit technology that can power multiple VMs with one physical GPU. Alongside XenServer 7.0, Citrix is also announcing XenDesktop 7.9, which offers industry-leading remote graphics delivery supported by Intel. Let me tell you what that does for users running graphics-intensive virtualized desktop applications, and then I'll tell you how we used Intel GVT-g to do it.

 


 

Citrix XenApp and XenDesktop let you deliver virtualized desktops and applications hosted on a server to remote workstations. Many desktop applications—like computer-aided design and manufacturing apps and even accelerated Microsoft Office—require the high-performance graphics capabilities of a graphics processing unit (GPU). In XenDesktop 7.9, Citrix also added support for Intel Iris Pro graphics in the HDX 3D Pro remote display protocol.

 

Earlier versions of XenServer enabled Intel GPU capabilities on virtualized desktops in a pass-through mode that allocated the GPU to a single workstation. Now, XenServer 7.0 expands our customers' options by using Intel GVT-g to virtualize access to the Intel Iris Pro Graphics GPU integrated onto select Intel Xeon processor E3 family products, allowing it to be shared by as many as seven virtual workstations.

 

With Intel GVT-g, each virtual desktop machine has its own copy  of Intel’s native graphics driver, and the hypervisor directly assigns the full  GPU resource to each virtual machine on a time-sliced basis. During its time  slice, each virtual machine gets a dedicated GPU, but the overall effect is that  a number of virtual machines share a single GPU. It’s an ideal solution in  applications where high-end graphics are required but shared access is  sufficient to meet needs. Using the Intel Xeon processor E3 family, small single-socket servers can pack a big  graphics punch. It’s an efficient, compact design that enables a new scale-out  approach to virtual application delivery. And it’s a cost-effective  alternative to high-end workstations and servers with add-on GPU cards.
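For a sense of how this shows up operationally, here is a minimal sketch of attaching a virtual GPU to a VM through XenServer's xe command line, wrapped in Java only to keep the examples in this post consistent. The UUIDs are placeholders and the exact vGPU type exposed for Intel GVT-g is an assumption; on a real XenServer 7.0 host you would discover the values with xe vm-list, xe gpu-group-list, and xe vgpu-type-list.

```java
import java.io.IOException;

public class AttachIntelVgpu {
    // Run an xe command on the XenServer host and stream its output to the console.
    static void xe(String... args) throws IOException, InterruptedException {
        String[] cmd = new String[args.length + 1];
        cmd[0] = "xe";
        System.arraycopy(args, 0, cmd, 1, args.length);
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder UUIDs -- on a real host, look these up first with:
        //   xe vm-list, xe gpu-group-list, xe vgpu-type-list
        String vmUuid       = "<vm-uuid>";
        String gpuGroupUuid = "<gpu-group-uuid>";
        String vgpuTypeUuid = "<intel-gvt-g-vgpu-type-uuid>";

        // Create a virtual GPU of the chosen type and attach it to the VM.
        xe("vgpu-create",
           "vm-uuid=" + vmUuid,
           "gpu-group-uuid=" + gpuGroupUuid,
           "vgpu-type-uuid=" + vgpuTypeUuid);
    }
}
```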

 

The advantages go beyond just cost efficiency. Providing shared access by remote users to server-based data and applications enhances worker productivity and improves collaboration. It also tightens security and enables compliance, because critical intellectual property, financial data, and customer information stay in the data center rather than drifting out to individual workstations and mobile devices. And security is further enhanced because Intel Xeon processors include Intel Trusted Execution Technology (Intel TXT) to let you create trusted computing pools. Intel TXT attests to the integrity and trust of the platform, assures nothing has been tampered with, and verifies that the platform is running the authorized versions of firmware and software when booting up.

 

At Citrix, our goal is to provide our customers with the  computing experience they need to innovate and be productive—on a range of platforms  and usage models and in a way that enhances the security of their business. And  we want to give them the flexibility to access the computing resources they  need anywhere, any time, and from any device. Our collaboration with Intel has let  us deliver on that promise, and it lets us provide even more options for  platform choice and deployment configurations. It’s been a great experience for  us, and now it will enable a great experience for our mutual customers.

Developers –  your HPC Ninja Platform is here! HPC developers  worldwide have begun to participate in the Developer Access Program (DAP) - a  bootstrap effort for early access to code development and optimization on the  next generation Intel Xeon Phi processor. A key part of the program is the  Ninja Developer Platform.


 

Several supercomputing-class systems are currently powered by the Intel Xeon Phi processor (code name Knights Landing, or KNL)—a powerful many-core, highly parallel processor. KNL delivers massive thread parallelism, data parallelism, and memory bandwidth with improved single-thread performance and Intel Xeon processor binary compatibility in a standard CPU form factor.

 

In anticipation of KNL's general availability, we, along with our partners, are bringing to market a developer access program, which provides an ideal platform for code developers. Colfax, a valued Intel partner, is handling the program, which is already underway.

 

The Ninja Platform

 

Think of the  Ninja Developer Platform as a stand-alone box that has a single bootable next-generation  Intel Xeon Phi processor. Developers can start kicking the tires and getting a  feel for the processor’s capabilities. They can begin developing the highly  parallel codes needed to optimize existing and new applications.

 

As part of Intel's Developer Access Program, the Ninja platform has everything you need in the way of hardware, software, tools, education, and support. It comes fully configured with memory, local storage, and CentOS 7.2, and includes a one-year license for Intel Parallel Studio XE tools and libraries. You can get to work immediately whether you are experienced with previous generations of Intel Xeon Phi coprocessors or new to the Intel Xeon Phi processor family.
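To give a flavor of the kind of work the platform is aimed at, here is a small, purely illustrative example of a thread- and data-parallel computation expressed with Java parallel streams. Production KNL codes are typically written in C, C++, or Fortran with OpenMP and the Intel Parallel Studio XE compilers, but the pattern of spreading independent element-wise work across every available core is the same idea; the array size here is arbitrary.

```java
import java.util.stream.IntStream;

public class ParallelDotProduct {
    public static void main(String[] args) {
        int n = 50_000_000;                      // arbitrary problem size
        double[] a = new double[n];
        double[] b = new double[n];

        // Data-parallel initialization: each index is independent work.
        IntStream.range(0, n).parallel().forEach(i -> {
            a[i] = 0.5 * i;
            b[i] = 2.0 * i;
        });

        // Parallel reduction across all available hardware threads.
        double dot = IntStream.range(0, n).parallel()
                              .mapToDouble(i -> a[i] * b[i])
                              .sum();

        System.out.printf("dot = %.3e using %d cores%n",
                          dot, Runtime.getRuntime().availableProcessors());
    }
}
```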

 

Colfax has pulled out all the stops in designing the education and support resources, including white papers, webinars, and how-to and optimization guides. A series of KNL webinars and hands-on workshops is currently underway; see details at http://dap.xeonphi.com/#trg

 

Here is a  quick look at the two platform options that are being offered by the Developer Access  Program – both are customizable to meet your application needs.

 

             

Pedestal Platform

  • Developer Edition of Intel Xeon Phi Processor: 16GB MCDRAM, 6 Channels of DDR4, AVX-512
  • MEMORY: 6x DIMM slots
  • EXPANSION: 2x PCIe 3.0 x16 (unavailable with KNL-F), 1x PCIe 3.0 x4 (in a x8 mechanical slot)
  • LAN: 2x Intel i350 Gigabit Ethernet
  • STORAGE: 8x SATA ports, 2x SATADOM support
  • POWER SUPPLY: 1x 750W 80 Plus Gold
  • CentOS 7.2
  • Intel Parallel Studio XE Professional Edition Named User 1-year license

Rack Platform

  • 2U 4x Hot-Swap Nodes
  • Developer Edition of Intel Xeon Phi Processor: 16GB MCDRAM, 6 Channels of DDR4, AVX-512
  • MEMORY: 6x DIMM slots / Node
  • EXPANSION: Riser 1: 1x PCIe 3.0 x16, Riser 2: 1x PCIe Gen3 x20 (x16 or x4) / Node
  • LAN: 2x Intel i210 Gigabit Ethernet / Node
  • STORAGE: 12x 3.5" Hot-Swap Drives
  • POWER SUPPLY: 2x 2130W Common Redundant 80 Plus Platinum
  • CentOS 7.2
  • Intel Parallel Studio XE Cluster Edition Named User 1-year license

 

Given the richness of the technology and the  tools being offered along with the training and support resources, developers should  find the process of transitioning to the latest Intel Xeon Phi processor  greatly accelerated.

 

The Ninja Development Platform is particularly  well suited to meet the needs of code developers in such disciplines as  academia, engineering, physics, big data analytics, modeling and simulation,  visualization and a wide variety of scientific applications.

 

The platform  will cost ~$5,000 USD for the single node pedestal server with additional costs  for customization.  On the horizon is our  effort to take this program global with Colfax and partners. Stay tuned for  details in my next blog.

 

You can pre-order  the Ninja Developer Platform now at http://www.xeonphideveloper.com.

Graphics virtualization and design collaboration took a step  forward this week with the announcement of support for Intel Graphics  Virtualization Technology-g (Intel® GVT-g) on the Citrix XenServer* platform.

 

Intel GVT-g running on the current generation graphics-enabled  Intel Xeon processor E3 family, and future generations of Intel Xeon®  processors with integrated graphics capabilities, will enable up to seven Citrix  users to share a single GPU without significant performance penalties. This new  support for Intel GVT-g in the Citrix virtualization environment was unveiled  this week at the Citrix Synergy conference in Las Vegas.

 

A little bit of background on the technology: With Intel  GVT-g, a virtual GPU instance is maintained for each virtual machine, with a  share of performance-critical resources directly assigned to each VM. Running a  native graphics driver inside a VM, without hypervisor intervention in  performance-critical paths, optimizes the end-user experience in terms of features,  performance and sharing capabilities.

 

All of this means that multiple users who need to work with  and share design files can now collaborate more easily on the XenServer  integrated virtualization platform, while gaining the economies that come with  sharing a single system and benefiting from the security of working from a  trusted compute pool enabled by Intel  Trusted Execution Technology (Intel® TXT).

 

Intel GVT-g is an ideal solution for users who need access  to GPU resources to work with graphically oriented applications but don’t  require a dedicated GPU system. These users might be anyone from sales reps and  product managers to engineers and component designers. With Intel GVT-g on the  Citrix virtualization platform, each user has access to separate OSs and apps  while sharing a single processor – a cost-effective solution that increases  platform flexibility.

 

Behind this story is a close collaboration among Intel, Citrix, and the Xen open source community to develop and refine a software-based approach to virtualization in an Intel GPU and XenServer environment. It took a lot of people working together to get us to this point.

 

And now we've arrived at our destination. With the combination of Intel GVT-g, Intel Xeon processor-based servers with Intel Iris Pro Graphics, and Citrix XenServer, anywhere, anytime design collaboration just got a lot easier.

For a closer look at Intel GVT-g, including a technical  demo, visit our Intel Graphics Virtualization  Technology site or visit our booth #870 at Citrix  Synergy 2016.

One of the most rewarding aspects of my work at Intel is seeing the new capabilities built in to Intel silicon that are then brought to life on an ISV partner’s product. It is this synergy between Intel and partner technologies where I see the industry and customers really benefit.

 

Two of the newer examples of this kind of synergy are made possible with Citrix XenServer 7.0—Supervisor Mode Access Prevention (SMAP) and Page Modification Logging (PML). Both capabilities are built in to the Intel Xeon processor E5 v4 family, but can only benefit customers when a server-virtualization platform is engineered to use them. Citrix XenServer 7.0 is one of  the first server-virtualization platforms to do that with SMAP and PML.

 

Enhancing Security with Supervisor Mode Access Prevention (SMAP)

 

SMAP is not new in and of itself; Intel introduced SMAP for Linux on 3rd-generation Xeon processors. What is new is SMAP in virtualization. Intel added SMAP code to the Xen hypervisor in the Xen Project, Citrix then worked with that code, and XenServer 7.0 makes SMAP a reality for server virtualization.

 


Figure 1:  SMAP prevents the hypervisor from accessing the guests’ memory space other than when needed for a specific function

 

SMAP helps prevent malware from diverting operating-system access to malware-controlled user data, which helps enhance security in virtualized server environments. SMAP aligns with the Intel and Citrix partnership where Intel and Citrix regularly collaborate to help make a seamless, secure mobile-workspace experience a reality.

 

Improving Performance with Page Modification Logging (PML)

 

PML improves performance during live migrations between virtual server hosts. As with SMAP, PML capabilities are built in to the Intel Xeon processor E5 v4 family, and XenServer 7.0 is one of the first server-virtualization platforms to actually enable PML in a virtualized server environment.

 


Figure 2:  With PML, CPU cycles previously used to track guest memory-page writes during live migration are available for guest use instead

 

Read More

 

I haven’t gone into detail on SMAP or PML or how they work. Instead, I invite you to read about them and how they add to the already strong XenServer virtualization platform and Intel Xeon processor E5 family in the Intel and Citrix solution brief, “New Capabilities with Citrix XenServer and the Intel Xeon Processor E5 v4 Family.” I also invite you to follow me and my growing #TechTim community on Twitter: @TimIntel.

By Steve Sieron, Senior Alliance Marketing Manager at Citrix

 

 

Intel will be highly visible next week at Synergy as a Platinum Sponsor. They'll be featuring a number of new solutions that showcase the broad technical, product, and marketing partnership with Citrix across networking, cloud, security, and graphics virtualization. And there'll be an array of innovative Intel-based endpoint devices running XenApp and XenDesktop across Win10, Linux, and Chrome OS.

 

You won't want to miss SYN121 on Wednesday, May 25 from 4:30-5:15pm PDT in Murano 3204 for "Mobilize your Design Workforce: Delivering Graphical Applications on Both Private and Public Clouds." This informative panel, hosted by Jim Blakley, Intel GM of Visual Cloud Computing, will feature graphics industry experts, including Thomas Poppelgaard, Jason Dacanay from Gensler, Adam Jull from IMSCAD, and Citrix's own "Mr. HDX," Derek Thorslund.

 

Be sure to take advantage of Intel’s Ask the Experts Bar and daily tech talks, where you can network with a variety of industry experts. The tech talks will feature customers and industry experts along with Intel and Citrix product owners. Intel health care implementations will also be featured in customer presentations at the Citrix Booth Theatre from both LifeSpan and Allegro Pediatrics.

 

Visit these Interactive Demos and More in Intel Booth #870

 

Enhancing NetScaler Security and Performance with Intel Inside. Showcasing performance scaling and new security enhancements on the Intel® Xeon® processor-based NetScaler MPX and SDX product families.

 

Intel® Solid State Drives (SSD) Enable a Secure Client. New endpoint security, storage technologies and capabilities with Citrix core product solutions.

 

Scaling XenDesktop with Atlantis USX and Intel SSD.  Featuring Atlantis USX as a storage layer with Intel SSDs for XenDesktop. Offering a robust performance architecture and high density with lower implementation costs and ongoing maintenance OPEX compared to traditional VDI Solutions.

 

Intel® Graphics Virtualization on Citrix (Intel® GVT). Learn about the new Intel Xeon Processor E3 family with Intel® Iris™ Pro Graphics in the cloud and new graphics virtualization technologies and solutions powered by Citrix from leading OEM partners. Interact with ISV-certified rich and brilliant 3D apps on the Intel remote cloud and learn how integrated graphics offer a compelling alternative to add-in graphics cards. The technologies highlighted will include Intel GVT-d – direct deployment of Intel processor graphics running 3D apps and media as well as Intel GVT-g – shared deployment in a cloud-based environment, hosted remotely in a data center running Citrix on latest-gen Intel Xeon processor servers.

 

Intel Ecosystem Enables Citrix Across Synergy16

 

Of course, the broader Intel ecosystem will be on full display at Synergy, including the latest HP Moonshot m710 Series and Cisco M-Series offerings. These tools bring unmatched levels of price, performance, and density in delivering graphics and rich apps to a wide range of professional users requiring access to apps with ever-increasing graphics capabilities. There will also be a broad array of Intel Xeon-based NetScalers running in the IBM SoftLayer cloud and across booths and learning labs throughout the event. Explore exciting Intel-based storage solutions on Citrix with new offerings from partners such as Nutanix, Pure Storage, and Atlantis. As always, Intel endpoints will be ubiquitous throughout Synergy and featured in many sponsor pavilions, including HPE, Google, Dell, and Samsung.

 

Beyond being a technology leader and strategic partner, Intel will be supplying Intel Arduino boards for the Simply Serve program at Synergy, promoting STEM programs for Title 1 middle school students. A big thanks to Intel on behalf of both Citrix and the Southern Nevada United Way!

 

Citrix is pleased to welcome Intel to Synergy 2016. We encourage all attendees to stop by Booth #870 to meet the Intel team, watch customer presentations at the Intel Theatre and interact with innovative technology demos. Don’t forget to pull up your Synergy Mobile App to mark your calendar for SYN121, the Industry Expert Graphics Panel on Wed May 25 at 4:30pm in Murano 3204.

Intel was founded on a deep commitment to innovation, especially open-standards-driven innovation, that results in the kind of acceleration only seen when whole ecosystems come together to deliver solutions. Today's investment in CoreOS reflects this commitment, as data centers face an inflection point with the delivery of software defined infrastructure (SDI). As we have many times in our industry's history, we are all piecing together many technology alternatives to form an open, standard path for SDI stack delivery. At Intel, we understand the value that OpenStack has brought to delivery of IaaS, but we also see the additive value of containerized architectures found in many of the largest cloud providers today. We view these two approaches as complementary, and the integration and adoption of both are critical to broad proliferation of SDI.


This is why we announced a technology collaboration with CoreOS and Mirantis earlier this year to integrate OpenStack and Kubernetes, enabling OpenStack to run as a containerized pod within a Kubernetes environment. Inherent in this collaboration is a strong commitment across all parties to contribute the results directly upstream so that both communities may benefit. The collaboration brings the broad workload support and vendor capabilities of OpenStack together with the application lifecycle management and automation of Kubernetes in a single solution that provides an efficient path to solving many of the issues gating OpenStack proliferation today: stack complexity and convoluted upgrade paths. Best of all, this work is being driven in a fully open source environment, reducing any risk of vendor lock-in.

 

Because software development and innovation like this is a critical part of Intel’s Cloud for All initiative, we tasked our best SDI engineers to work together with CoreOS to deliver the first ever live demonstration of OpenStack running as a service within Kubernetes at the OpenStack Summit.  To put this into perspective, our joint engineers were able to deliver a unified “Stackanetes” configuration in approximately three weeks’ time after our initial collaboration was announced. Three weeks is a short timeframe to deliver such a major demo, but highlights the power of using the right tools together. To say that this captured the attention of the OpenStack community would be an understatement, and we expect to integrate this workflow into the Foundation’s priorities moving forward.

 

The next natural step in our advancement of the Kubernetes ecosystem was our investment in CoreOS that we announced today. CoreOS was founded on a principle of delivering GIFEE, or "Google Infrastructure for Everyone Else," and their Tectonic solution integrates Kubernetes with the CoreOS Linux platform, making Tectonic an easy-to-consume hyperscale SDI stack. We've been working with CoreOS for more than a year on software efforts focused on optimizing Tectonic for underlying Intel architecture features. Our collaboration on Kubernetes reflects a common viewpoint on the evolution of SDI software to support a wide range of cloud workloads that are efficient, open, and highly scalable. We're pleased with this latest chapter in our collaboration and look forward to delivering more of our vision in the months ahead.

According to the Natural Resources Defense Council (NRDC), data center electricity consumption is projected to increase to approximately 140 billion kilowatt-hours annually by 2020, the equivalent annual output of 50 power plants. The cost to American businesses? A tidy $13 billion annually.

 

Make no mistake, many enterprises and data center providers are  striving to reduce their carbon footprint. Switch recently announced that, as  of the first of this year, all of its SUPERNAP data centers are powered by 100%  renewable energy through its new solar facilities operating in Nevada.   Across the pond, Apple is developing two new 100% renewable energy data centers  in Ireland and Denmark.  And Facebook just launched a massive new data  center in Lulea, a town located in a remote corner of northern Sweden, that  requires 70% less mechanical cooling capacity than the average data center  because of the cool climate.

 

But what if your data center is located in Houston or Rio de Janeiro? Fortunately, there exists a viable solution to achieve improved Power Usage Effectiveness (PUE) and reduce costs associated with cooling and power while mitigating a facility's carbon footprint. Data Center Infrastructure Management (DCIM) refers to software and technology products that converge IT and building facilities functions to provide engineers and administrators with a holistic view of a data center's performance, ensuring that energy, equipment, and floor space are used as efficiently as possible.


In large data centers, where electrical energy billing comprises a large portion of the cost of operation, the insight these software platforms provide into power and thermal management accrues directly to an organization's bottom line.


In order to take appropriate actions, data center managers need  accurate intel concerning power consumption, thermals, airflow and utilization.  One wouldn’t think this is the realm of MS Excel spreadsheets and Stanley tape  measures. However, a recent study by Intel DCM and Redshift Research found that  four in 10 data center managers in 200 facilities surveyed in the U.S. and the  UK still rely on these Dark Age tools to initiate expansion or layout changes.


The good news is that DCIM provides increased levels of automated control, empowering data center managers to receive timely information to manage capacity planning and allocations, as well as cooling efficiency. By deploying thermal-management middleware, for example, improvements in airflow management can reduce energy consumption by 40%. Data center managers can also drive a stake through the problem of zombie servers by consolidating servers, reducing energy consumption by 10% to 40%.


Modern data centers maintain a stable operating environment for  servers by implementing stringent temperature controls, which, paradoxically,  also makes it possible to apply various energy-saving and eco-friendly measures  in a centralized manner. A DCIM system that offers simulations integrating  real-time monitoring information to allow for continuous improvements and  validation of cooling strategy and air handling choices can have a direct  impact on the bottom line.


Somewhat counter-intuitively, raising internal temperatures in data centers can save upwards of $100K annually per degree of temperature without degrading service levels or reducing hardware lifespan. And by deploying various other innovative cooling technologies, facilities can expend up to 95% less energy.


Utilizing DCIM real-time data analysis tools, along with maintaining an active server refresh schedule, can effectively combat runaway energy consumption. The combination of processor improvements with feature-rich, intuitive dashboards that recognize imbalances in cooling and identify underutilized servers can sometimes reveal a profligate energy consumer right under an administrator's nose.


Replacing an older server with today's advanced technology and using DCIM to identify underutilized systems can reduce energy need by 30%. Considering the four-year life expectancy of a server, this will save up to $480 per server. While that figure might not seem significant on its own, the numbers add up quickly if you have thousands of servers.
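To put those per-server numbers in fleet terms, here is a quick back-of-the-envelope calculation using the figures above; the fleet size is hypothetical.

```java
public class RefreshSavings {
    public static void main(String[] args) {
        int servers = 5_000;              // hypothetical fleet size
        double perServerSavings = 480.0;  // USD saved over a server's four-year life
        double energyReduction = 0.30;    // 30% lower energy need per refreshed server

        System.out.printf("Four-year savings across the fleet: $%,.0f%n",
                          servers * perServerSavings);
        System.out.printf("Energy reduction per refreshed server: %.0f%%%n",
                          energyReduction * 100);
    }
}
```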

SAP SAPPHIRE NOW and ASUG (America’s SAP Users’ Group)  Annual Conference is coming to Orlando on May 17–19—and Intel will be there  too, adding to the festivities with keynote addresses, tech talks, demos and  plenty of presentations from our OEM and technology partners.


SAPPHIRE is SAP’s premier annual event, with an anticipated 20,000  people in attendance and an additional 80,000 tuning in online. SAPPHIRE attracts  CIOs and line-of-business managers who want to meet with SAP experts and  industry partners to learn the latest in Internet of Things (IoT) technologies,  in-memory computing, and data center and cloud strategies.

 

The conference starts off with a bang on Tuesday morning,  May 17 when Intel CEO Brian Krzanich joins SAP chief executive Bill McDermott  on stage for a discussion of the latest innovations across the industry. Be  sure to be in your seat as BK shares information about advances to the joint Intel-SAP  IoT platform, and our next-generation Intel processors. In addition, you won’t  want to miss news of innovations in memory technologies that promise both to boost  performance and cut cost of memory for cloud and data center platforms. BK will  also share highlights of Intel IT’s successful conversion to SAP HANA* to run  Intel’s internal financial and enterprise resource planning (ERP) & Supply  Chain Management (SCM) systems (for more information on this proof-of-concept  deployment, view  the solution brief).

 

Intel is also showcasing two demos of the joint SAP and Intel IoT platform in Intel Booth #625. Find out more:


  • The Connected Worker: Industrial Wearables for Worker Safety (Demo in Intel Booth #625; also highlighted in mini-session PS602, 10:30am-10:50am, Tues. May 17, presented by Jeff Jackson). Learn about the Intel and SAP reference platform for industrial safety and compliance, and experience how wearables can help detect unsafe conditions and create automated alerts in real time, for both workers and supervisors.


  • Real-Time Inventory Management (Demo in Intel Booth #625). Learn how to delight customers and minimize out-of-stock issues in this retail jeans store scenario that features SAP Merchandising* applications and the Intel® Retail Sensor Platform to send real-time alerts for inventory management and cycle count automation.

 

A Rich History of  Collaboration


Intel and SAP have worked together closely for over two decades, with SAP software specifically engineered to take advantage of the performance, reliability, and security built into Intel processors. Today, the rich co-engineering relationship is stronger than ever, with new joint IoT solutions that extend analytical processing and security from the data center to the network edge, and breakthrough business solutions that draw on the power of SAP HANA*, the revolutionary in-memory database that's optimized to run on Intel® Xeon® processors. SAP HANA and Intel processors stand behind new solutions such as SAP Business ByDesign*, a cloud-based ERP service that brings powerful business management tools to the device of your choice, and the SAP Digital Boardroom*, which provides C-suite executives with real-time visualization of business performance and reporting across the entire enterprise.

 

Intel and SAP’s collaboration doesn’t end there: We also  share a rich ecosystem of OEM partners who offer over 600 computing appliances that  feature SAP software pre-integrated onto Intel-based platforms for simple, out-of-the-box  functionality. A dozen of our OEM partners, including VMware, HP, Dell, Cisco,  and SGI, will be at SAP SAPPHIRE. Stop by their booths to check out their  latest innovations, and join us at Intel booth #625 where we will host over 30  tech talks by Intel partner experts.

 

Of particular interest are a series of in-booth  presentations by Dr. Matthieu-P. Schapranow, program manager in E-Health and  Life Sciences at SAP’s Hasso Plattner Institute. Schapranow’s presentations  (12:30pm on Tues. May 17 and Wed. May 18, and 11:30am on Thurs. May 19) address  the topic of analyzing genomes using  in-memory databases and the advent of real-time analysis of medical big data.

 

Intel is once again a proud sponsor of the SAP HANA® Innovation Awards, which recognize customers and enterprises who have found innovative ways to use SAP HANA to drive business value. Kudos to each of the over 150 entrants who competed this year, and special congratulations in advance to the five finalists to be named in a special ceremony on Monday evening.

 

Stop by the Intel booth #625 to say hello, and watch me shoot man-on-the-street videos for viewing on Periscope.

 

Follow me @TimIntel  and #TechTim for the latest news on Intel and SAP.


 

DRAM, and in particular DRAM managed by Java (the Java heap), can easily be exhausted by large HBase database instances, especially in the Big Data realm where large volumes of data are sourced via Hadoop's HDFS. Apache* HBase, the NoSQL database of the Hadoop ecosystem, has a few ways to extend DRAM. One of them is the L1 cache, a DRAM-based cache implemented as a Java LRU map. Your next step should be to learn about the BlockCache API and how to implement the BucketCache. Start with the blog post Block Cache 101 to get a feel for the BlockCache API; from there, Nick's follow-on blog, The BlockCache Showdown, provides an actual testing report comparing the various solutions. You should also read up on the BucketCache, which can run on heap in Java (DRAM), off heap (also in DRAM), or in a file-based mode (on a block device such as a PCIe NVMe-based SSD, SATA SSD, or HDD), in the HBase Reference Guide: https://hbase.apache.org/book.html.
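To make the BucketCache modes concrete, the sketch below shows the relevant settings in code form using the standard HBase configuration API. These properties are normally set in hbase-site.xml on the region servers rather than in client code, and the file path and cache sizes here are hypothetical; check the HBase Reference Guide linked above for the exact semantics in your HBase version.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BucketCacheSettings {
    public static void main(String[] args) {
        Configuration conf = HBaseConfiguration.create();

        // L1 cache: the on-heap LRU block cache, sized as a fraction of the Java heap.
        conf.setFloat("hfile.block.cache.size", 0.3f);

        // L2 cache: BucketCache in file mode, backed by an SSD (hypothetical path).
        conf.set("hbase.bucketcache.ioengine", "file:/mnt/nvme/bucketcache.data");

        // BucketCache size in megabytes -- 256 GB of SSD-backed cache in this sketch.
        conf.setInt("hbase.bucketcache.size", 262144);

        System.out.println("ioengine = " + conf.get("hbase.bucketcache.ioengine"));
    }
}
```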

 

So now that you have some orientation and an HBase developer's view on the different modes of implementing the BlockCache, and you have learned that larger-data implementations benefit from the BucketCache, let's look at Intel's storage comparison data for BucketCache in file mode. Intel's Software Labs did a study that used three types of storage media as the BucketCache storage. File mode allows terabytes of storage to be HBase-accessible, depending on your architectural goals, and the size limits of low-latency SSDs are far better than, say, 256GB of DRAM on a standard server for data serving. HBase is flexible enough to let you layer these caches as your needs require. Intel tested a hard disk drive (HDD), an Intel SATA SSD, and the Intel SSD Data Center Family for PCIe P3700 drive in each of three threading load scenarios: 10, 25, and 50 threads of activity. What they found was the following overall performance for the BucketCache in file mode (using a standard filesystem, not a DRAM-based tmpfs):

 

The PCIe SSD outperformed the HDD by 81x to 189x.

 

The PCIe SSD even beat the SATA SSD, with performance gains ranging from 4x to 11x, which is pretty impressive: an SSD with a new interface type (namely NVMe) beating another SSD by this margin under database software.

 

We encourage you to read the entire report to see what larger cache footprints on better latency SSDs can do to let you extend your database over the BlockCache API in HBase.

 

http://www.intel.com/content/dam/www/public/us/en/documents/solution-briefs/apache-hbase-block-cache-testing-brief.pdf

Are you ready? Today Microsoft released Technical Preview 5 (TP5) of Windows Server 2016 with Storage Spaces Direct.  We at Intel have been working with Microsoft on configurations to help OEMs, ODMs, and systems integrators bring solutions to market quickly.

 

The hyper-converged architecture of Storage Spaces Direct (S2D) is a giant leap forward for many key IT business applications. Several enhancements in TP5, discussed in Claus Joergensen's blog Storage Spaces Direct Technical Preview 5, along with the use of solid state drives (SSDs) for S2D, have caught the attention of the enterprise IT community.

 

Many IT professionals see the promise of hyper-converged and are rethinking how it can assist them from a compute as well as storage perspective. We are excited to share our joint work with Microsoft in helping prepare for TP5 evaluation. Learn from our combined knowledge working on S2D how to take advantage of Microsoft and Intel technologies for your target workloads.  These configurations include Intel® Xeon® processor- based servers and Intel® Solid State Drives (SSDs), providing a range of options for performance, reliability, and agile enterprise solutions.

 

We collaborated with Microsoft to develop three configurations that span a range of needs from the most latency/IOP sensitive business processing applications to capacity hungry data warehousing. We have been testing some of these configurations already and will be testing all three configurations with TP5 using hardware from different OEMs.  We plan to share the results in upcoming blogs as soon as the data is available.

 

1. IOPs Optimized


All-flash NVMe SSD configuration for IOPS- and latency-sensitive business processing applications that demand the best quality of service (QoS).

 

  • Server : 1U 1Node or 2U 1Node
  • CPU: High core count processor, such as Intel® Xeon® processor E5-2699 v4 with 22 cores
  • DRAM: DDR4 - 16GBx24=384 GB (Min); 32GBx24=768GB (Max)
  • Cache Storage: Low-latency, high-endurance SSD, such as 2x Intel® SSD DC P3700: 800GBx2=1.6TB
  • Capacity Storage: 6-8x Intel® SSD DC P3520/DC P3500: 2TBx6-8=12-16TB
  • NIC: 2x40GbE RDMA NIC (iWARP preferred)
  • Switch:  40GbE switch


2. Throughput/Capacity Optimized


All-flash configuration with an NVMe cache tier and high-capacity SATA SSDs, for a blend of high performance and capacity for decision support and general virtualization.

 

  • Server : 2U 1Node
  • CPU: Relatively high core count processor, such as Intel® Xeon® processor E5-2695 v4 with 18 cores
  • DRAM: DDR4 -16GBx24=384 GB (Min); 32GBx24=768GB (Max)
  • Cache Storage: Low-latency, high-endurance SSD, such as 4x Intel® SSD DC P3700: 800GB X4=3.2TB
  • Capacity Storage: 20x SATA Intel® SSD  DC S3610: 1.6TB X20=32TB
  • NIC: 2x 40Gb RDMA NIC (iWARP Preferred)
  • Switch:  40 GbE switch


 

3. Capacity Optimized


Hybrid configuration that optimizes $/GB using an NVMe SSD cache plus HDDs for high-capacity data storage, suitable for data warehousing, Exchange, or SharePoint.

 

  • Server : 2U 1Node
  • CPU: medium core count processor, such as Intel® Xeon® processor E5-2650 v4 with 12 cores
  • DRAM: DDR4 - 16GBx16=256 GB
  • Cache Storage: Low-latency, high-endurance SSD, such as 2x Intel® SSD DC P3700: 1.6TBx2=3.2TB
  • Capacity Storage: 8x HDD 3.5”: 6TBx8=48TB
  • NIC: 2x10Gb RDMA NIC (iWARP Preferred)
  • Switch:  10 GbE switch


 

To maintain a reliable storage system, we selected SSD technology with the best blend of read and write performance, drive reliability, and endurance levels. NVMe provides the lowest latency and high performance consistency, with the 10 drive-writes-per-day endurance that is necessary for the performance-critical cache tier. NVMe devices are also more CPU efficient than their SATA counterparts. We selected the Intel® SSD DC P3700 NVMe for the cache tier of all configurations.
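As a quick illustration of what a 10 drive-writes-per-day rating means over a drive's life, here is the standard endurance arithmetic; the capacity matches the 800GB P3700 used in the IOPS-optimized configuration, and the five-year period is an assumed warranty length.

```java
public class EnduranceMath {
    public static void main(String[] args) {
        double capacityTB = 0.8;   // 800 GB cache-tier drive
        double dwpd = 10.0;        // drive writes per day, from the rating above
        int years = 5;             // assumed warranty period

        double totalTBWritten = capacityTB * dwpd * 365 * years;
        System.out.printf("Rated endurance: %.0f TB written (~%.1f PB)%n",
                          totalTBWritten, totalTBWritten / 1000.0);
    }
}
```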

 

Standard to mid endurance SSDs can be used in the capacity tier behind the high endurance cache drives. The choice between NVMe and SATA for capacity storage will depend on the performance and latency sensitivity of the applications and the platform capacity needed. Consistent performance is an important attribute for supporting all enterprise applications and larger numbers of users and virtual machines in Hyper-V virtualized environments.  We selected the Intel SSD DC P3520/DC P3500 NVMe and DC S3610 SATA SSD for capacity storage in the all flash configurations.

 

Not all "off-the-shelf" SSDs should be used in an S2D configuration. The Intel SSD Data Center Family is recommended because it provides a data integrity mechanism to protect against undetectable errors while maintaining superior levels of measured annual failure rate, which contributes to the high reliability of the S2D configurations.

 

Whether you are a DBA, developer, or storage architect, you can get up and testing quickly with one of these recommended Windows Server 2016 TP5 configurations. Watch for our follow-on blogs sharing the testing data as it becomes available.

A Brief History of Software-Defined Storage


Software-Defined Storage (SDS) has become the new “It Girl” of IT, as storage technology increasingly takes center stage in the modern datacenter. That’s not difficult to understand, as SDS brings tremendous advantages in terms of flexibility, performance, reliability and cost-savings.

What might not be as easy for the new storage buyer to understand is “What IS SDS exactly?” Typically the answer is some reference to a particular software or appliance vendor, as though the term SDS is synonymous with a specific product or device. That’s savvy marketing, as companies would very much like you to think of their brand as the “Kleenex” or “Band-Aid” of the SDS world. What often gets missed in the process is any genuine explanation or understanding of SDS itself.

So, let’s correct that. I thought it would be useful to jump in a time machine back to the days of the first personal computers. Storage in those days was certainly not “Software-Defined”.  It was typically either a cassette tape recorder (with cassettes), or (if you were one of the cool kids) a floppy drive of some kind with the associated disks. Storage was “defined” by the hardware and physical media.

 

  




While the invention of the hard drive actually predates floppy discs by more than a decade, the first commercially viable consumer drives did not become popular until the adoption of the SCSI standard in the mid-1980s. (I purchased a SCSI drive for my own personal computer - a whopping 20MB - around that time for $795.00 … my how times have changed!)

That's where it started to get interesting. Someone realized along the way that – when you have "huge" amounts of storage – you can divide that up into separate partitions. Operating systems gained the ability to create these partitions. So, my 20MB hard drive became three "drives": OS, PROGRAMS & DATA. It's here where we see the first glimmerings of what would become Software-Defined Storage. All of a sudden a C:, D: & E: drive did not literally have to refer to separate physical drives or media. Those three "drives" could be "defined" by the OS as residing on one, two, or three physical devices.

 

So, we could at that point divide (or partition) a single media device into multiple drives. The next step was to make it possible to take multiple devices and make them appear as one resource. This was driven by the observation that hard drive capacity was increasing, but performance was not. The idea of using a “Redundant Array of Inexpensive Discs” (RAID) solved that performance problem (for the time being), but it was quickly realized that this came at the cost of lower reliability. Mirroring (RAID-1) and parity (RAID-5) approaches solved that issue, and now RAID is a ubiquitous part of almost all current data center storage designs.
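The parity trick is worth a one-line illustration: in a RAID-5-style layout, the parity block is simply the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A toy example, with single bytes standing in for whole blocks:

```java
public class ParityDemo {
    public static void main(String[] args) {
        // Three data "drives" and one parity "drive", one byte each for brevity.
        int d0 = 0x3A, d1 = 0x5C, d2 = 0x77;
        int parity = d0 ^ d1 ^ d2;            // parity block = XOR of the data blocks

        // If the drive holding d1 fails, XOR the survivors with parity to rebuild it.
        int rebuilt = d0 ^ d2 ^ parity;
        System.out.printf("lost=0x%02X rebuilt=0x%02X%n", d1, rebuilt);
    }
}
```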

For our purposes however, the important bit is how that changed the way storage was defined. With RAID, one could now take 2 or more drives and make them appear to the OS as one large drive, or some number of smaller drives. Storage was (and is) software-defined - at the level of the individual server.



  

 

While that might be technically correct, we still have a way to go before we get to what is currently considered SDS. It gets interesting when we take the general concept of RAID – using multiple resources as a single entity – and apply that to servers. This creates various kinds of "clusters" designed to improve performance, reliability, or both. This is typical of something like Microsoft's "Distributed File System".

One problem encountered at this level is that shared storage resources cannot always truly act like a physical drive. It's often the case that you cannot use these shared file stores with certain applications, as they require a full implementation of a command protocol like SCSI or SATA. That's where technology like iSCSI comes into play. It allows a complete storage command set (SCSI, as you might guess) to communicate over a network link. Now it becomes possible to have truly virtualized drives, not simply shared file storage.

And that’s the level at which we get something that can truly be called “Software-Defined Storage”. All of these various technologies form a set of building blocks which allow a flexible pool of storage, spanning several servers. That storage can be divided-up (defined) as needed, expanded or contracted to meet business needs, and it works just like a local drive on the client systems which access that storage. That is the essence of “Software-Defined Storage”.

Of course that’s still a fairly primitive and basic implementation. Modern SDS configurations offer so much more. That will be the subject of the next post in this series.

By David Brown, Intel and Kenny Johnston, Rackspace

 

OpenStack is the world’s leading open source cloud operating  system. It’s been adopted by many of the world’s most prominent cloud service  providers and a growing list of global enterprises. Now the task at hand for  the OpenStack community is to address barriers to the widespread adoption of  OpenStack in the broad realm of enterprise environments and ensure the platform  is ready for the workloads of tomorrow.

 

In a word, that is the mission of the OpenStack Innovation  Center (OSIC). Launched in 2015 by Intel and Rackspace, the center is bringing  together teams of engineers to accelerate the evolution of the OpenStack  platform. Key areas of focus include improving manageability, reliability and  resilience, scalability, high availability, security and compliance, and  simplicity. The objective is to make OpenStack easy to install and deploy, with  all of the features of an enterprise-class computing platform.

 

To drive toward those goals, the center has launched an  OpenStack developer training program, assembled one of the world’s largest  joint engineering teams focused on upstream contributions to OpenStack, and  deployed the world’s largest OpenStack developer cloud.

 

While the training program is helping grow the OpenStack community,  the joint engineering team is following an open roadmap that is guiding their  development of new features in the OpenStack platform. This work is focused on  key platform challenges. To date, the team’s accomplishments include a long  list of enhancements to the building blocks for enterprise-ready OpenStack  environments, including Keystone, Tempest, Neutron, Swift, Ceilometer, Cinder,  Horizon, Nova, and Rally. This work includes rolling upgrades through support  for versioned objects and online schema migration; improvements in live migration  to counter service failures; scalability improvements through work on network  topology and IP capacity awareness; and early work to support multi-factor  authentication through one-time password support in Keystone. In addition, the  team is focused on testing each service within OpenStack to determine its  breaking point, including telemetry, instance provisioning of Nova APIs,  Autoscale in Heat, and Software Defined Networking in relation to third-party  plug-ins.

 

Meanwhile, Intel and Rackspace have launched a developer  cloud hosted by OSIC to empower the OpenStack community, ultimately comprised  of 2,000 nodes. To date, the first 1,000-node cluster has been brought online  and is being fully utilized to power work by OSIC participants, including such prominent  organizations as Cambridge University, IBM, Red Hat, Mirantis, and PLUMgrid.  Most of the current test cases focus on networking, storage, and provisioning  methods. The second cluster will be brought online and available to the  community in June of this year.

 

Since its launch, the OSIC is already delivering on the  things it set out to do. It is increasing the number of developers contributing  to upstream OpenStack code, enabling the broader ecosystem, and advancing the  scalability, manageability, and reliability of OpenStack by adding new features  and functionality and eliminating bugs.

 

All of this work makes OpenStack a more viable platform for  deployment in enterprise environments across a wide range of industries. In  delivering these gains, the work done by the OSIC is helping to bring the Intel®  Cloud for All vision to life — specifically, to unleash tens of thousands  of new clouds.

 

If you have an OpenStack test case that could benefit from  the resources of a world-class developer cloud, visit OSIC.org to request access.

In a recent blog post, I talked about the excitement that is growing around OpenStack and why you should be thinking about it. OpenStack is supported by tens of thousands of community members and more than 500 companies, and provides the foundation of a flexible private cloud that is gaining support across the business landscape. Cloud-centric companies, such as Netflix and Uber, have shown that fast innovation of digital services is key to surviving in today’s increasingly competitive business environment. But many cloud initiatives fail to deliver value simply because they’re implemented as cost-saving IT projects. Companies should instead view a private-cloud initiative as part of a larger organizational transformation because it can increase revenue and innovation while decreasing operational costs.

 

An OpenStack private cloud can provide an agile, API-accessible infrastructure foundation that lets developers integrate infrastructure directly into their application development, and provides the means to enable automated deployment. Companies can use an OpenStack foundation to transform their entire development and deployment process, which can lead to faster innovation of new digital services.

 

Mirantis, an Intel partner and a pure-play OpenStack company, suggests that companies follow an executive-driven, multi-phase approach to develop and implement a cloud strategy. These phases start with a small implementation that provides continuous integration and self-service capabilities, and then grows the OpenStack private cloud to a large implementation that additionally provides greater scalability and high availability. Each phase should contain measurable success metrics that validate your project’s return on investment (ROI), such as increased revenue from accelerated software development cycles, or increased developer productivity through deployment automation. Mirantis provides tools that are built on OpenStack itself to help you determine, in real-time, the value that your OpenStack private cloud is generating for your company, such as the revenue generated from accelerated deployments.

 

Remember that the real value in the cloud comes from how it enables fast innovation, not just its cost savings. Read our white paper, “The Business Case for Private Cloud,” to learn how to more effectively structure your private-cloud implementation. And for the latest information on private cloud technologies, be sure to follow me, @TimIntel, and my growing #techtim community on Twitter.

By Mauri Whalen, Vice President in the Software and Services Group and Director of Core System Software in the Open Source Technology Center at Intel

 

 

The message at Intel’s recent Cloud  Day event was clear:  we’re serious about making cloud deployments easier and faster through new products,  programs, and collaborations. OpenStack is critical to our cloud strategy, and we’re  excited about the OpenStack Mitaka release. I want to share some of the work  Intel is driving in the OpenStack community and through contributions to Mitaka.

 

The OpenStack Innovation Center (OSIC), our collaboration with Rackspace, continues to show strong momentum. The joint Intel/Rackspace engineering team working at OSIC has submitted 174 patches and reviewed almost 1,000 more. Additionally, the OSIC environment itself is uniquely equipped to allow community testing of the upstream code base at true enterprise scale. After opening the first of two 1,000-node clusters to the community in October, we've seen great response, with reservations for bare-metal allocations already at full capacity. Buildout of the second cluster is nearing completion.

 

Turning  our attention to Mitaka, I look forward to what this release is delivering.  Intel has been very active in Mitaka, contributing tens of thousands of lines  of code targeting high availability for tenants and services, network and  storage support, and ease of deployment among other areas. Our team also  focused on improving the upgrade process, enabling the upgrade of many core  OpenStack components without downtime, and have made significant improvements  to live migration. These enhancements help enterprises deliver stable services,  supporting long-running enterprise workloads capable of withstanding  maintenance to the underlying infrastructure.

 

We  believe containers are critical to cloud computing, and we continue to push the  boundaries of what’s possible with this technology through our Clear Linux for  Intel Architecture project. From Intel Clear Container support in Magnum  to compiler techniques enabling architecture-specific optimizations at runtime  and leveraging the security features of Intel® Architecture, Intel is committed  to improving performance and security of containers in the cloud.

 

Finally, building on two successful hackathons Intel conducted with Huawei last year in  China to address OpenStack bugs, Intel upped the ante this year, joining with  six more corporate sponsors in bringing the worldwide community together for a  Global OpenStack Bug Smash March 7-9 that included new and experienced  developers, mentors, and official code reviewers. The results are impressive:  in 12 cities across 9 countries, 302 contributors authored patches to smash 293  bugs. Thanks to everyone who participated in the first global bug smash. We  look forward to many more!

 

These  efforts underscore Intel’s commitment to accelerating OpenStack adoption. I  look forward to continuing the discussion at OpenStack Summit Austin this week.  Be sure to join Intel in Austin to hear more  about how we’re improving the OpenStack experience for operators, community  members and developers alike.

OpenStack is undeniably one of the largest, most successful open source projects ever, powered by a dynamic community of individual developers, corporate members, customers, and ecosystem partners. The full strength of that community was on display last month for the first Global OpenStack Bug Smash, organized to improve the Mitaka release by identifying and addressing open issues across component projects.

 

The Bug Smash event built on two successful hackathons conducted last year between Intel and Huawei in China. While preparing for the next hackathon, the planning team recognized it would be a great benefit to include more companies and organizations involved in OpenStack development. Shane Wang, an Intel engineer and individual board member of the OpenStack Foundation, took the lead in reaching out to the community and proposing a global, multi-site event.

 

In the true collaborative nature of OpenStack, the response was incredible with Rackspace, IBM, Mirantis, SUSE, Red Hat, and the China Electronics Standardization Institute (CESI) joining Intel and Huawei in hosting events around the world. These sponsors took the lead on securing space and equipment as well as inviting local community participants including new and experienced developers, mentors, and core reviewers to help get bugs fixed, approved and merged onsite.

 

The results of this global effort are impressive. From March 7-9 in 12 cities across 9 countries, 302 contributors authored patches to smash 293 bugs including significant numbers associated with Cinder, Nova, Neutron, Ceilometer, and Magnum components among many others.

 

In addition to the host organizations, participants represented a wide range of companies including: HP Enterprise, ZTE, 99Cloud, AWcloud, UnitedStack, EasyStack, Aptira, Chunghwa Telecom, KKTOWN, National Chiao Tung University, inwinSTACK, Quanta Cloud Technology, SaaSaMe, Wiwynn & YuantaFunds.

 


 

Helping further unite these global efforts, marketing representatives from sponsor companies worked together to give the event a name and a common look and feel, which was displayed on signage, stickers, and t-shirts across the sites.

 


 

Thank you to the host companies and to everyone who participated in the first Global OpenStack Bug Smash, helping make the Mitaka release as rock-solid as possible. We look forward to many more!




