
Data center managers are increasingly virtualizing servers, and now networks, to better utilize data center assets.  At the same time, they are adapting to the flood of information coming from Big Data, social media, the Internet of Things and other trends.

 

Data center storage has been challenged by these trends.  Virtualized servers and networks mean a storage system must process many more concurrent requests, and it is increasingly the bottleneck for the entire system.

 

Big Data, on the other hand, is flooding storage systems with more unstructured data than ever before.  IDC predicts that there will be 40 zettabytes of data (or 5,200 GB per person) stored in the next seven years!  How does a data center manager keep up with these demands without blowing the IT budget?
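As a quick back-of-the-envelope check of that per-person figure (the population assumption here is mine, not part of the IDC forecast), 40 zettabytes divided by 5,200 GB per person implies roughly 7.7 billion people, in line with world-population projections for the same period:

    # Sanity check of the IDC projection; the ~7.7 billion population is an assumption.
    total_bytes = 40e21              # 40 zettabytes
    per_person_bytes = 5200e9        # 5,200 GB per person
    print(f"Implied population: {total_bytes / per_person_bytes:.1e}")  # ~7.7e9 people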

 

Storage systems are evolving to respond to these challenges.  First, today’s storage requires more intelligence.  Rather than simply saving all of the growing data to disk, storage systems can apply optimization technologies to better manage storage resources and reduce the capacity required.

 

These optimization technologies include:


  • On-the-fly de-duplication, which uses intelligent pattern matching to reduce up to 95% of the data before it is saved to disk (source: IBM storage simulcast, November 9, 2011); a simplified sketch follows this list
  • Real-time compression algorithms that exploit statistical redundancy to represent data without losing information
  • Intelligent tiering, which can dynamically allocate data across cache, SSDs and HDDs based on frequency of access and other policies
  • Thin provisioning, which allocates available storage in real time across virtual and real capacity
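To make the de-duplication item concrete, here is a minimal, illustrative sketch, not any vendor's implementation: incoming data is split into fixed-size blocks, each block is fingerprinted with a hash, and only blocks whose fingerprints have not been seen before are written to disk. Real systems typically use variable-size chunking, handle hash collisions and do this inline in the data path.

    import hashlib

    BLOCK_SIZE = 4096  # fixed-size blocks; real systems often use variable-size chunking

    class DedupStore:
        """Toy content-addressed store: each unique block is written only once."""

        def __init__(self):
            self.blocks = {}   # fingerprint -> block data (stands in for "disk")
            self.files = {}    # filename -> list of fingerprints

        def write(self, name, data):
            fingerprints = []
            for i in range(0, len(data), BLOCK_SIZE):
                block = data[i:i + BLOCK_SIZE]
                fp = hashlib.sha256(block).hexdigest()
                # Store the block only if this fingerprint has not been seen before.
                self.blocks.setdefault(fp, block)
                fingerprints.append(fp)
            self.files[name] = fingerprints

        def read(self, name):
            return b"".join(self.blocks[fp] for fp in self.files[name])

    # Example: two VM images sharing most of their content are stored almost once.
    store = DedupStore()
    payload = b"A" * 1_000_000
    store.write("vm1.img", payload)
    store.write("vm2.img", payload + b"B" * 4096)
    stored = sum(len(b) for b in store.blocks.values())
    print(f"Logical: {2 * len(payload) + 4096} bytes, physically stored: {stored} bytes")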

 

Secondly, the architecture of storage systems is evolving from dedicated standalone appliances to distributed systems.  In fact, many of the newer storage solutions are pure software that runs on any standard Intel-based server.  This transformation of the storage system architecture, combined with the ability to dynamically scale capacity and performance, has led to the term Software-Defined Storage (SDS).

 

Everyone has a slightly different definition of SDS, but the main elements are: decoupling the storage software from the underlying hardware; automating storage management tasks such as provisioning, placement/tiering and scaling; and pooling heterogeneous resources.
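As a rough illustration of those three elements, here is a toy sketch of a software-only controller that pools heterogeneous devices and places new volumes automatically by policy. The class and device names are invented for this example; it is not modeled on any particular SDS product.

    # Sketch of the three SDS elements: software-only control, pooling of
    # heterogeneous devices, and automated placement driven by policy.

    class Device:
        def __init__(self, name, kind, free_gb):
            self.name, self.kind, self.free_gb = name, kind, free_gb

    class StoragePool:
        """Pools heterogeneous devices behind one provisioning interface."""

        def __init__(self, devices):
            self.devices = devices

        def provision(self, size_gb, policy="capacity"):
            # Automated placement: prefer SSD for performance, otherwise HDD,
            # always requiring enough free space on the target device.
            preferred = "ssd" if policy == "performance" else "hdd"
            candidates = [d for d in self.devices
                          if d.kind == preferred and d.free_gb >= size_gb]
            if not candidates:  # fall back to any device with room
                candidates = [d for d in self.devices if d.free_gb >= size_gb]
            if not candidates:
                raise RuntimeError("pool exhausted")
            target = max(candidates, key=lambda d: d.free_gb)
            target.free_gb -= size_gb
            return f"{size_gb} GB volume placed on {target.name}"

    pool = StoragePool([Device("ssd0", "ssd", 400),
                        Device("hdd0", "hdd", 4000),
                        Device("hdd1", "hdd", 4000)])
    print(pool.provision(100, policy="performance"))  # lands on ssd0
    print(pool.provision(500))                        # lands on an HDD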

 

At the Intel Developer Forum in San Francisco this week, Software-Defined Infrastructure (SDI) is a big topic.  In fact, SDI was part of the IDF press briefing of Senior Vice President Diane Bryant, general manager of the Datacenter and Connected Systems Group, and was mentioned in the keynote address of Intel CEO Brian Krzanich.  The promise is a data center that dynamically adapts to workload needs.

 

Software-Defined Storage, along with Software-Defined Networking (SDN) and Network Functions Virtualization (NFV), is a key element of that vision.  It’s a new way to think about infrastructure in the face of the challenges confronting today’s data centers.

 

All through IDF, Intel and our partners will be demonstrating SDS solutions and the Intel technologies involved, which include the Intel® Xeon® processor E5, the Intel® Solid-State Drive Data Center Family and 10 Gigabit Intel® Ethernet Converged Network Adapters.  These SDS partners include EMC’s ViPR, Inktank, Nexenta, Red Hat Storage, Scality and VMware’s Virtual SAN.  Come by the SDI Community in the Technology Showcase and see the solutions being demonstrated.

 

SDS is ushering in an exciting new world of storage that will help remove bottlenecks, improve capacity utilization, reduce CapEx and OpEx, and keep storage performance in line with the other elements of the software-defined infrastructure.  It also offers new flexibility to keep up with the data tsunami that firms are experiencing today.
