
The Data Stack

22 Posts authored by: pauline.nist

Intel has launched the latest member of the Itanium family, the Itanium processor 9500. It was first previewed last year at the International Solid-State Circuits Conference as an 8-core, 3.1 billion-transistor device built on the 32nm process, compatible with the previous-generation Itanium 9300 (Tukwila) processor. Now, with the formal launch and official performance numbers, it’s clear that the combination of new core architecture and more cores delivers up to 2.4X the performance of the previous generation, with 33% faster I/O speed.

 

[Image: leap_in_performance.png]

 

There are additional neat features like 80% less idle power, additional parallelism, and new RAS features like Intel Instruction Replay Technology and a complete Machine Check Architecture with firmware-first error handling. Intel also released more information about the next-generation Itanium chip that will follow the Itanium 9500, and it’s here that you find one of the most interesting nuggets of new information. Intel will continue to move forward with the common platform strategy, launched with the Itanium 9300, in which chipsets, interconnects and memory are shared with the Xeon platform. The Modular Development Model will take a major leap forward from shared silicon design elements to full socket compatibility: the next-generation Kittson Itanium chip will be socket-interchangeable with the future “Haswell” Xeon chip.

 

[Image: MC_Roadmap.png]

 

Aside from allowing OEMs to design a single motherboard for both Itanium and Xeon, socket compatibility provides a cost-effective, sustainable path for bringing future Itanium processors to market, where the major design investment is the unique instruction-set logic of the core. Itanium can benefit from Xeon economics by sharing not only memory, I/O and RAS, but now packaging and socket as well.

 

 

The other business news is a reminder that Inspur and Huawei continue to be Itanium OEMs, developing a range of systems including large 32-socket NUMA designs, and there is serious potential in the PRC market. Lastly, with the election behind us, for entertainment you can still look forward to round two of the HP-Oracle Itanium lawsuit. HP won round one, having convinced the judge that there was indeed an agreement for Oracle to provide its products on Itanium; as a result, it appears Oracle will continue porting to Itanium. It will be interesting to see the fight over damages in round two, scheduled to begin in February in front of a jury here in Santa Clara County. Too bad I can’t volunteer… ;-)

I appeared on a Big Data panel last week at IBM’s Information on Demand (IoD) 2012 conference along with two IBM executives: Anjul Bhambhri, vice president of Big Data, and John Borkenhagen, CTO of System x and BladeCenter.

 

It was an excellent opportunity to hear firsthand what people are saying about Big Data today. The questions we received from the audience are a good indicator that some of the apprehension around embracing Big Data we saw a year or two ago is giving way to excitement about the possibilities. We had questions from telecom companies about sentiment and geographic analysis of phone data, questions on using MapReduce for analysis of social media content, and queries about what systems we recommended to get the most powerful intelligence out of the most challenging Big Data.

 

I came away with the sense that companies are ready to engage positively with Big Data. After all, there’s no reason to fear Big Data – it offers major benefits to those organizations that embrace it. With the correct application of analytics, it can offer a deep understanding of customer sentiment, provide insights into fraud detection, and deliver data analysis for such diverse areas as scientific research, healthcare, traffic flow, and the financial markets. Analysis of machine-to-machine data has enormous potential for supply chain optimization for retail and manufacturing.

 

What makes Big Data analytics compelling is the view it offers into how we live our lives right now: our likes and dislikes, the patterns that structure our activities. Today’s Big Data is a lot more than just the cumulative buildup of transactional information: it reflects changes in our lifestyles and technologies, pulling content from social media and data from smartphones. It includes sensor and RFID data and information from medical and scientific records. Organizations that take advantage of this real-time analysis can gain a decisive edge in today’s competitive business environment.

 

Big Data is here already, so you don’t want to play catch up with your Big Data solution. IBM has pioneered many of the technologies that drive today’s Big Data solutions, including BigInsights, BigSheets and InfoSphere Streams, and is out ahead of the pack with highly integrated, scalable products and technologies that capture intelligence from the most massive data sets. Intel and IBM have worked together for over 15 years to optimize software performance on the Intel infrastructure, and their Big Data solutions and integrated infrastructures are ready to go. Just add Big Data.

 

Start reaping the benefits of Big Data now at Intel.com/bigdata and ibm.com/bigdata.

I have been blogging about mission critical technology for the last year or so, but more recently you may have noticed a subtle change in my focus. At the end of last year I officially moved from managing the Mission Critical Server Business to a new position driving Enterprise Software Strategy for the Datacenter and Connected Systems Group at Intel.

 

This involves working with our key independent software vendor (ISV) partners at an exciting time in the evolution of software and solutions. Since I’ve spent most of my career in mission critical and software, this isn’t a big change from my point of view. What surprises a lot of people is that the job is at Intel, which isn’t generally known as a software company.

 

People who stop to think about it for a minute will know that Intel has provided compilers, debuggers and parallel programming tools in support of its chips. They might also know that Intel is a solid contributor to open source Linux, again in support of its chips. Last year, Intel made news with its purchase of McAfee, making Intel one of the world’s 10 largest software companies.

 

My reason for changing jobs is twofold. First, in my former mission critical role, I was spending close to half of my time with the large ISV and database software partners, who are key to providing business solutions. Over time I realized that we were knee-deep in what I characterize as the next major evolution of the software and server business. That’s the second reason for the change. The confluence of Moore’s Law (enabling the amazing price/performance of computing solutions delivered by companies like Intel) with this new software is driving a renaissance in computing solutions, led by cloud, open source, big data, and everything-everywhere mobile applications. This will no doubt give rise to an entire new generation of software companies, fueled by venture capital investments and highly valued due to anticipation (which usually exceeds reality) of future success.

 

Of course, nowhere in the above do solutions from established ISVs jump out at you. But I’ve been around long enough to know you don’t throw the baby out with the bathwater, and one only has to look at how a stalwart like IBM has morphed into primarily a services and software company, and how Oracle moved from debunking NoSQL to announcing a product. Thus, in the interim, I believe we will see efforts to create mashups of the old and the new, enabling the best of both worlds for customers eager to deploy cutting-edge big data solutions in the real world. What survives in the longer term is anybody’s guess, and that’s what makes it so exciting. I started my career as a software engineer, so maybe I’m going back to my roots!

Please note: This blog originally appeared on InformationWeek.com in the "Cloud" section as a sponsored blog.

 

 

With the rise of social media sites such as Facebook and YouTube, companies are getting hit with a blizzard of unstructured data. This data onslaught comes on top of rapid growth in the amount of customer data housed in enterprise systems. Companies now must find cost-effective ways to integrate and analyze the collective pool of big data to generate granular business insights.

 

A recent Intel white paper may have said it best: "We are in the midst of a revolution in the way companies access, manage, and use data. Simply keeping up with the explosive growth in data volumes will be an ongoing challenge, yet the true winners will be those that master the flow of information and the use of analytics throughout their value chain." ¹

 

 

To stay on top of it all, many companies are deploying Hadoop, an open-source parallel processing framework, to process and analyze social media data on distributed server clusters. They then integrate Hadoop with systems housing other customer data to gain rich insights. To support these efforts, solution providers are building Hadoop interfaces into database products to help with the performance of big data delivery, management, and usage.
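
To make the model concrete, here is a minimal sketch of the kind of MapReduce job such a cluster runs: counting hashtags in a feed of social media posts using Hadoop streaming, which lets you write the map and reduce steps as plain scripts that read stdin and write stdout. The file names and the one-post-per-line input format are illustrative assumptions, not details from any particular product.

```python
#!/usr/bin/env python
# mapper.py -- illustrative map step: emit (hashtag, 1) for each hashtag seen.
import sys

for line in sys.stdin:              # assume one social media post per line
    for word in line.split():
        if word.startswith("#"):
            # Hadoop streaming expects tab-separated key/value pairs.
            print(f"{word.lower()}\t1")
```

```python
#!/usr/bin/env python
# reducer.py -- illustrative reduce step: sum the counts for each hashtag.
# Hadoop sorts the mapper output by key, so identical hashtags arrive adjacent.
import sys

current_tag, count = None, 0
for line in sys.stdin:
    tag, value = line.rstrip("\n").split("\t")
    if tag != current_tag:
        if current_tag is not None:
            print(f"{current_tag}\t{count}")
        current_tag, count = tag, 0
    count += int(value)
if current_tag is not None:
    print(f"{current_tag}\t{count}")
```

The streaming jar then fans these scripts out across the cluster with something along the lines of hadoop jar hadoop-streaming.jar -input posts -output tags -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py; the framework handles the splitting, shuffling and sorting in between.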

 

What we have here is the intersection of the traditional methods of delivering, managing, and viewing information and a new approach that allows data of all types and formats to be quickly sorted for transactional and operational opportunity. This new era of data exchange requires next-generation compute, storage, and I/O technologies, like those found in the Intel® Xeon® processor E5 family.

 

 

This next-generation processor family is a great platform for running analytics on big data in private cloud solutions and enterprise data centers, as well as in cloud deployments. The raw compute power of the processors enables efficient, intelligent, and secure archiving, analysis, discovery, retrieval, and processing of critical data. And along with fast processing, the Intel Xeon processor E5 family accelerates throughput with PCIe 3.0 technology and Intel® Integrated I/O, which is designed to dramatically reduce I/O latency and eliminate data bottlenecks across the data center infrastructure.

 

On the storage side, the Intel Xeon processor E5 family incorporates accelerated RAID and Intel® AES New Instructions (Intel® AES-NI) to speed data encryption. This latter technology is particularly important in private cloud environments, where pervasive encryption is used to protect data from hackers and other threats.
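
AES-NI requires no special coding to use: mainstream crypto libraries detect the instructions at runtime and dispatch to them automatically. As a minimal sketch (assuming the third-party Python cryptography package, whose OpenSSL backend uses AES-NI when the CPU supports it), authenticated encryption of a record looks like this:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce; never reuse per key

# AES-GCM provides confidentiality plus an integrity tag that is
# verified on decrypt, so any tampering raises an exception.
ciphertext = aesgcm.encrypt(nonce, b"customer record", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"customer record"
```

On Linux you can confirm the hardware support with grep aes /proc/cpuinfo; the speedup over software-only AES is what makes pervasive encryption practical.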

 

 

The new Intel architecture also incorporates innovative technologies designed to reduce power and cooling costs and enable dense configurations with thousands of processors. This is a perfect chip for blade environments.

 

If your organization is moving ahead to the new architecture from the Intel® Xeon® processor 5600 series, Intel has a good ROI story to tell, on top of the performance story. And if your organization uses systems based on earlier-generation processors, the new Intel architecture can deliver spectacular ROI while moving you further along in your journey to the cloud.

 

While it's a great chip for cloud environments, the Intel Xeon processor E5 family is also ideal for private clouds and enterprise data centers looking to accelerate the processing of large datasets while driving down the cost of computing. It's up to the challenges of big data.

It’s clear that the two biggest buzzwords of 2012 are “Hadoop” and “Big Data.” (Maybe even buzzier than “cloud.”) I keep hearing that they are taking over the world, and that relational databases are so “yesterday.”

 

I have no disagreement with the explosion of unstructured data. I also think that open source capability like Hadoop is allowing for unprecedented levels of innovation. I just think it’s quite a leap to decide that all of the various existing analytic tools are dead, and that no one is doing predictive or real-time analytics today. Likewise, the death of scale-up configurations is greatly exaggerated.

 

First, I’d offer that real-time analytics frequently relies on in-memory processing to meet response-time requirements. Today, and for the near term, that often implies larger SMP-type configurations such as those employed by SAP HANA. Many think that the evolution to next-generation NVRAM server memory technology will redefine server configurations as we know them today, particularly if it succeeds in competing with DRAM and enables much larger memory configurations at much lower price points. This could revolutionize how servers that handle data are configured, both in the amount of memory and in its non-volatility.

 

Second, precisely because Hadoop is open source, it makes sense that existing analytics suppliers are moving to incorporate many of its key features into existing products, as Greenplum has done with MapReduce. Further, key players like Oracle are now offering Big Data appliances embracing both Hadoop and NoSQL, separately or in conjunction with other offerings. Even IBM, with arguably the best analytics portfolio in existence today, offers InfoSphere BigInsights Enterprise Edition, which delivers Hadoop (with HDFS and MapReduce) integrated with popular offerings such as Netezza and DB2. Predictive analytics also exist from companies like SAS, in addition to Bayesian modeling from specialty providers.

 

Now, on one level this is capitalism at its best, allowing for the integration of open source with existing (for a price) products, while taking full advantage of open innovation. On another level, it is an acknowledgement that unless you are a pure internet company (a la Facebook, Google, YouTube), you have a variety of data that might also originate in the physical world, and occasionally you would like to connect to actual transactional data. It also acknowledges that predictive analytics exist today, and that the capability can be adapted and applied to unstructured data, while adding installation, management, security and support, and connecting to other warehouses and databases.

 

I think it’s extremely premature to categorize the Hadoop world as separate from everything that has gone before it. It is naïve to believe that previously developed expertise has no value and that existing suppliers won’t evolve to integrate and combine best-of-breed solutions incorporating all data types. Likewise, configurations will continue to evolve with new capabilities.

 

I am personally thrilled to see the excitement and energy that is driving new data types, real time and predictive analytics, and automation.  I can only hope that credit card fraud detection becomes much more sophisticated!

 

 

Have something to say or share?  Feel free to find me on Twitter @panist or comment below.

Recently, HP issued a press release detailing its plans for “Odyssey,” a project to redefine mission critical computing with a roadmap that will unify UNIX and Xeon server architectures to bring industry-leading availability, increased performance, and client choice to a single platform.

 

HP has finally responded to Oracle’s salvo of last March regarding Itanium. Although extended support for Oracle 11g continues to be available on Itanium through January 2018, customers have been nervous about the long term.

 

The plan includes delivering blades with Intel Xeon processors for the HP Superdome 2 enclosure (code name “Dragonhawk”) and the scalable c-Class blade enclosures, while fortifying Windows and Linux environments with innovations from HP-UX within the next two years. This means that customers with “Dragonhawk” will be able to run mission-critical workloads on HP-UX on Intel Itanium-based blades while SIMULTANEOUSLY running workloads on Windows or Red Hat Enterprise Linux on Intel Xeon-based blades in the same Superdome 2 enclosure.

 

This announcement essentially provides the path forward customers need: a long-term roadmap for currently shipping Superdome 2 Itanium purchases, with longer-term Intel Xeon support for workloads no longer supported on Itanium. It should make it vastly easier for customers to justify purchase orders to their senior management by offering the capability to evolve either to continued Itanium usage or to mission critical Xeon capability.

 

It also signals that future (within 2 years) Superdome 2 Xeon systems will provide all the availability and resiliency features that UNIX customers recognize.  Intel’s continued Itanium and Xeon innovation will allow HP and Intel to provide customers with greater flexibility and choice to do mission critical computing on their terms. Intel remains equally committed to the Itanium and Xeon platforms, both of which represent our portfolio approach to bringing open standards based computing to the mission critical market segment.

 

One may wonder why HP took a while to respond with Odyssey; then again, it is highly unusual for HP to preannounce system plans two years in advance with this level of detail.

 

By expanding mission-critical HP Converged Infrastructure and bringing innovations to Intel Xeon systems, HP will enable clients running Windows or Linux to:

 

  • Increase scalability with 32-socket SMP Xeon systems, enabling clients to deploy the smallest to the largest workloads in a dynamic, highly scalable pool of IT resources.

 

  • Increase availability of critical Linux applications with the HP Serviceguard Solution, which automatically moves application workloads between servers in the event of a failure or an on demand request.

 

  • Boost flexibility and availability of Xeon systems with HP nPartitions technology (nPars), which provides precise partitioning of system resources across variable workloads. nPars are electrically isolated to eliminate failure points, which allows clients to “scale out” within a single robust system.

 

  • Enhance business continuity with HP Analysis Engine for x86 to ensure efficient diagnosis and automatic repair of complex system errors while restoring system stability in seconds.

 

  • Boost reliability and resiliency with fault-tolerant HP crossbar Fabric that routes data within the system for redundancy and high availability.

 

  • Achieve higher levels of availability with HP Mission Critical services, which identify and resolve sources of downtime.

 

More details on this advancement in Mission Critical services can be found at HP’s website.

Conference season isn’t over yet, and I’m getting ready to leave for Las Vegas and IBM’s annual software conference (IOD). I expect at least 10,000 people, and it will be the first coming-out party for Netezza since its acquisition by IBM about a year ago. Given all the hoopla around analytics and big data (and all of the competitive announcements at that conference at Moscone earlier this month), I’m sure we will see a lot of energy around IBM’s new offerings across both systems (and appliances?) and software.

 

Intel does not have a booth on the show floor (after all, this is a software conference), but we’ll be in the IBM System x booth showing off systems with the latest Intel Xeon series processors. A featured demo will be the IBM Smart Analytics System 5710 (with two Xeon 5600 series processors), along with theater sessions on our Machine Check Architecture. (BTW, did you like those great Intel Q3 results?)

 

We will participate in technical sessions (see below) ranging from IBM pureScale to in-memory performance, and highlight products from Informix and Netezza, along with an intriguing customer session from France Telecom/Orange. If you stop by and pick up one of our “passports” and collect two stamps before dropping it off at the booth, you’ll be entered to win a 160GB Intel SSD in the daily booth drawing and in the drawings at each participating session (see details on the passport)!

 

Just for amusement, in case you don’t see me at IOD, I’m sharing my IBM 100th Anniversary (or is it Birthday?) video clip.


I also expect to tweet from the show when interesting things happen, so follow me on Twitter: @panist.

 

Intel Sessions @ IOD:

 

Monday, Oct 24

 

8:15am – 9:45am:

Opening General Session with Intel video:  Congratulations on IBM Centennial

 

10:15am-11:15am:

Intel Diamond Session – The Mission Critical Offerings of IBM & Intel: The Innovation Spiral (Berni Schiefer, David Baker) - Mariner B

 

1:00pm- 1:20pm:

Vendor Sponsored Presentation on MCA-R – Want uptime?  Choose Intel Xeon processors with IBM Software (Jantz Tran) – Business Partner theater Expo floor

 

Tuesday, Oct 25

 

11:15am – 12:15pm:

Joint System x/Software Group/Intel Session – Under the covers of DB2 pureScale with Intel Xeon processors (B. Schiefer, J. Borkenhagen, M Shults) - S. Pacific G

 

1:30pm- 1:50pm:

Vendor Sponsored Presentation on MCA-R – Want uptime?  Choose Intel Xeon processors with IBM Software (Jantz Tran) – Business Partner Theater Expo floor

 

1:45pm – 2:45pm:

Keynote – Brave New World: Appliances, Optimized Systems and Big Data (Intel DB2 pureScale on System x video will be in this keynote)

 

3:00pm-4:00pm:

Intel Diamond Session – France Telecom Orange Open Grid with IBM and Intel Xeon (Soumik Sinharoy, France Telecom) – South Pacific I

 

Wednesday, Oct 26

 

2:00pm – 3:00pm:

Technical Session – Maximizing Performance for Data Warehouse and BI with Netezza, Information Server, and Intel Xeon (Sriram Padmananbhan, Mi Wan Shum, Garrett Drysdale) – South Pacific J

 

Thursday, Oct 27

 

8:15am – 9:30am:

Technical Session – IBM and Intel Collaborate to Improve In-memory Performance (Dan Behman, K. Doshi, Jantz Tran) – Islander I

 

11:30am – 12:30pm:

Technical Session – Performance and Scalability of Informix Ultimate Warehouse Edition on Intel Xeon E7 processors (M. Gupta, J. Tran) – Tradewinds C

 

3:30pm – 4:30pm:

Technical/Business Session – Build a Cloud Ready BI Solution (Sunil Kamath, Larry Weber, K. Doshi) – Islander F

Places to go, people to see, things to do!

 

 

 

If Oracle Open World didn’t provide at least 3-4 enticing options for each and every waking hour, then it either failed to deliver or you shouldn’t have attended. For the first time ever, Oracle streamed all of the keynotes live on YouTube. If you are intrigued by any of my comments, feel free to look for Oracle Open World 2011 to experience the messages and entertainment firsthand. Most, if not all, partners were also posting items.

 

Larry Ellison opened Sunday night with a very hardware-centric pitch. He reviewed the Exadata and Exalogic systems and announced the new Exalytics box. Based on four Intel Xeon E7-4800s (10 cores each, for a total of 40 cores) with 1TB of memory, the Oracle Exalytics BI machine features optimized versions of the Oracle BI Foundation Suite and the Oracle TimesTen In-Memory Database for Exalytics.

 

It constitutes the third leg of the stool, connecting to Exadata and/or Exalogic via InfiniBand. The query responsiveness (real-time analytics) and scalability provided the latest example of “hardware and software engineered to work together.” After extolling the virtues of open commodity hardware married with optimized software and techniques like data compression (whilst teaching us that 10x10=100), Larry then suavely transitioned to the SPARC T4 SuperCluster and announced his intention to go head to head with IBM Power in database performance. The SuperCluster also uses 48 Intel Xeon “Westmeres” for storage processing, so some portion of its performance comes from Xeon! The next morning saw Thomas Kurian announce the Oracle Big Data Appliance: Oracle’s embrace of Hadoop and a new NoSQL database to process your unstructured data before loading the results into one of the Exa boxes.

 

Tuesday provided an opportunity for Kirk Skaugen to deliver Intel’s keynote on Cloud Vision 2015: the Road to 15 Billion Connected Devices. Kirk covered Intel’s role as a trusted advisor to the Open Data Center Alliance, and our vision for open clouds that are federated, automated and client-aware. These clouds would support intelligent connected devices ranging from sensors to cell phones, Ultrabooks, smart signs and automobiles. Kirk also reviewed Intel’s refresh of the Xeon server product line, including the Xeon E3 and E7 and the already shipping, soon-to-be-announced Xeon E5 (Sandy Bridge) products. Kirk’s slides are posted on Slideshare and his full presentation is available on the Oracle Open World page.

 

Larry’s Wednesday keynote was totally software-focused, and introduced the new Oracle Public Cloud, which offers Fusion Middleware and Applications in either a platform-as-a-service or an appliance-as-a-service configuration. He also announced and demoed the Oracle Social Network (can a movie be far behind?). The keynote allowed him a forum for his continuing feud with Marc Benioff and Salesforce.com, whom he might actually dislike as much as some of his other competitors. Catch the video if you’d like to hear the “roach motel” comments! It’s never dull when Larry is on stage!

It’s that time of year again, when Oracle Open World takes over (overwhelms) the city of San Francisco. The big opening event is always Larry’s keynote on Sunday evening, when he announces new products, benchmarks and his personal perspective on the world of technology. I also expect we’ll see a much higher profile from Mark Hurd, now that he’s been onboard for about a year.

 

Kirk Skaugen will deliver the Intel keynote on Datacenter 2015 and the Impact of Intelligent Devices on Tuesday at 1:30PM in the Novellus Theater at Yerba Buena Center for the Arts. (Remember: you need to exit Moscone and walk to the theater!)


While there is the usual lineup of additional CEO keynotes, Intel will follow up with a robust roster of more detailed enterprise presentations, along with a number of customer events focused on Exalogic and Exadata. I’ve included some of my favorites below:

 

 

[Image: OW11_Intel_Schedule.png, the Intel session schedule]

 

 

We’ll also have a full booth on the show floor with a large number of partner demos and regular presentations:

 

  • Security for DB, End-to-End Oracle Security: two demos, one on Oracle Firewall and the other on AES-NI with Oracle 11gR2 on Oracle Sun Fire

  • Oracle Solutions with Intel Ethernet and Intel Storage: unified networking and storage for the virtual data center

  • Cisco Unified Computing System: Oracle E-Business Suite deployment on UCS, showing the basic running of transactions with dynamic resource allocation

  • Exploiting System x Memory for Oracle’s Database: take full advantage of Oracle solutions with IBM System x features!

  • Fujitsu Compute Continuum Story: showcase of the Q550 client with a new Xeon server driving the backend for a complete compute continuum

  • Oracle Exadata X2-8: the latest Oracle solution

  • Solid State Oracle RAC: Oracle RAC using the latest Intel processors and SSDs to demonstrate high performance and flexibility with unprecedented density and ROI

  • SAP ERP with Oracle DB in the Cloud Environment: showcasing SAP ERP with Oracle Database on VMware cloud infrastructure on Intel Xeon E7

  • HP & Intel Partner Deliver Enterprise Solutions: showcase of the HP DL980 server with Intel E7 ten-core processors running Oracle Linux

  • Dell, E-Business Suite as a private cloud: the latest Intel hardware with dynamic resource allocation and integration into the cloud

 

All of this and more will be at the Intel booth.

 

Remember to listen to Chip Chat Live Monday & Tuesday, and stop by and see us if you’re at the show!

It’s that time of year again! Only one day until the Intel Developer Forum (IDF) at Moscone Center in San Francisco, September 13-15. While there is always a lot of focus on new Intel client products and technologies, we always fight to include server content. I am happy to report that we have again succeeded!

[Image: Moscone Center exterior]

Source: Moscone Center - SMG

 

 

 

If you are an Intel server customer, we love to focus on solution stories with our many software partners. Given the success of our Xeon E7 (Westmere) launch earlier this year, we have had the opportunity to see most of our partners demonstrate data center solutions and utilize key features. This is your opportunity to come and learn firsthand from many of the experts. I’ve listed key server sessions below, highlighting mission critical features and data center solutions.

 

Additionally, there will be numerous demonstrations showcasing Intel 10GbE network products, Intel Node Manager, software and security, and new chip technologies such as our Many Integrated Core (MIC) chip for scalable technical computing. New computing solutions such as microservers, in addition to cloud and virtualization, will also be in the Technology Showcase, along with partner solutions from VMware, Microsoft, Oracle, Cisco, Dell, Red Hat, Citrix, EMC and others.


A number of key IDF tech sessions feature Intel Xeon mission critical technologies.

These sessions will be delivered by many of our key architects, strategists, software engineers and performance experts. Please check out the IDF Program for detailed abstracts, speaker information, times and locations. They provide an opportunity to see how Intel technology delivers real-world solutions today, along with future product plans and emerging technology trends.

 

You will also want to attend the Executive Keynote sessions to hear our executives discuss Intel’s future plans. Intel will also host bloggers and broadcast from IDF. I’ll be sure to share my impressions and reactions to the conference after IDF, so look out for another blog post soon! See you at IDF!

Intel is using this week’s Hot Chips conference to disclose additional new information about its next-generation Itanium chip, codenamed Poulson.

 

The initial Poulson details (8 cores, 3.1 billion transistors, 32nm process) were disclosed at the International Solid-State Circuits Conference earlier this year. While Itanium customers are always interested in coming attractions, it’s also worthwhile for Intel Xeon server customers to keep an eye on the evolution of Itanium, as many features originally introduced on Itanium often waterfall down to subsequent generations of Xeon CPUs. Remember that Poulson, like the current Intel Itanium 9300 processor, shares many common platform ingredients with Xeon, including the Intel QuickPath and Scalable Memory Interconnects, the Intel 7500 Scalable Memory Buffer with DDR3, and the Intel 7500 Chipset.

 

So, what’s new? There are three key feature areas. The first is Intel Instruction Replay Technology, which is a major RAS enhancement. This is the first Intel processor with instruction replay RAS capability, and it utilizes a new pipeline architecture to expand error detection in order to capture transient errors in execution. Upon error detection, instructions can be re-executed from the instruction buffer queue to automatically recover from severe errors and improve resiliency.

 

The same instruction buffer capability also enables the second new feature, improved Hyper-Threading Technology. It supports performance enhancement with dual-domain multithreading, which enables independent front-end and back-end pipeline execution to improve multi-thread efficiency. As the EPIC architecture is already known for its highly parallel nature, this enhancement will help take Poulson’s overall parallelism to the next level.

 

Lastly, Poulson is adding new instructions in four key areas. First, there are new integer operations (mpy4, mpyshl4, clz). In support of the higher parallelism and multithreading capabilities, there are expanded data access hints (mov dahr), expanded software prefetch (ifetch.count) and thread control (hint@priority). These new instructions lay the foundation for the Itanium architecture to grow with future needs.

 

As you can see, most of these features are designed to take full advantage of the 8-core, 12-wide-issue architecture by enabling the maximum amount of parallel execution. Poulson is on track for 2012 delivery (if you attended HP Discover you may have had a chance to actually see an active Poulson system!), and the follow-on Kittson processor is under development.

 

If you’d like to learn more details, check out the full Hot Chips presentation.

Customers looking to deploy mission critical applications on Intel Xeon servers always consider IBM one of the key hardware partners. IBM offers a variety of system configurations, and also invests both in benchmarks and in software partners. With these investments, IBM can differentiate its products and demonstrate their full capability to handle enterprise server workloads.

 

Last week, IBM published the latest in a series of benchmarks of Intel Xeon E7 processor performance. This one is a very impressive 3 million transactions-per-minute TPC-C result, the highest performance ever published on an x86-64 system. It also ranks fifth in the TPC-C Top Ten performance results for non-clustered systems, and appears in the TPC-C Top Ten price/performance results for non-clustered systems as well. Housed in a 43U rack, this entire system configuration is perfect for enterprise database applications.

 

The IBM x3850 X5 achieved this result by using IBM’s innovative MAX5 technology: a scalable, 1U memory expansion drawer. The expansion drawer provides an additional 32 DIMM slots with a memory controller for added performance, and boosts scalability with a node controller for the x3850.

 

The TPC-C configuration above had a total of 3TB of memory (2TB in the server and 1TB in the IBM MAX5 for System x). IBM has also published papers that indicate the effect of additional memory capacity on database performance. While that work focuses on in-memory database performance, memory expansion can also increase the performance of other application workloads like web, file, virtualization and cloud computing.

 

It is worth noting that in addition to the TPC-C benchmark, IBM also published a result that sets new records for 4-socket performance and overall price/performance on the TPC-E benchmark, using Microsoft SQL Server 2008 R2 Enterprise Edition configured with SSD storage. I mention this because there is always lively discussion regarding the merits of the TPC-C versus the TPC-E benchmark and their relationship to actual production workloads.

 

I believe these are all great examples of the workload and performance capability of Intel’s Xeon E7 chips. IBM has demonstrated that when partners work collaboratively, it is possible to implement unique features that deliver additional capability to the customer. The tradeoff between CPU and memory has always provided the ability to tune configurations for database workloads. With these new benchmarks, IBM has validated those options on its x3850 X5 server.

I had a chance to spend a day in Chandler, Arizona (not a  boondoggle if you go in July!) with David Baker and his Enterprise Server  Engineering team, which is part of Intel’s Developer Relations Division.


In layman’s terms, these are the guys who do all the work with our software partners to optimize performance on Xeon servers. We drive this team crazy every time we launch a new Xeon server chip, because all of the OEM and ISV partners look for benchmarks to show off their respective hardware and software performance. Internally, even the Intel server group is equally guilty, because they want to feature multi-core and scaling performance, along with neat new features like our AES-NI encryption instructions. But, as we all know, benchmarks are benchmarks. Every partner looks for the one that will highlight some unique feature of their implementation, and that’s all well and good. However, while customers may view benchmarks as necessary, rarely are they sufficient to demonstrate the actual deployed real-world workloads (I know, you are shocked!).

 

As a result, the months in between Intel Xeon chip launches are actually just as busy for our team in Chandler. That’s when they essentially work on customer workloads and/or interesting emerging technologies like the Franz semantic database. Do you know about triples? While there is still pressure, it’s a much more creative environment, as the team gets the variety of challenges introduced by new technology like Franz’s AllegroGraph.
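
For those who don’t: a triple is the subject-predicate-object unit that semantic databases like AllegroGraph store, and queries walk those relationships rather than table rows. Here is a toy sketch using the open-source rdflib package (an illustrative stand-in; AllegroGraph ships its own client APIs, and the names below are made up for the example):

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")   # hypothetical vocabulary for the example
g = Graph()

# Each fact is one (subject, predicate, object) triple.
g.add((EX.pauline, RDF.type, EX.Blogger))
g.add((EX.pauline, EX.worksFor, EX.Intel))
g.add((EX.pauline, EX.writesAbout, Literal("mission critical servers")))

# SPARQL query: who works for Intel?
query = """
    SELECT ?who
    WHERE { ?who <http://example.org/worksFor> <http://example.org/Intel> }
"""
for row in g.query(query):
    print(row.who)   # -> http://example.org/pauline
```

Scale that idea up to billions of triples and you have the kind of workload the Chandler team gets to tune on Xeon.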

 

Lately, the team has had the chance to work with a lot of healthcare partners. For me, this is ultimately the “most real” application area: you get to show end users visible technology improvements, such as faster diagnostic scan results. Whether it is delivered from dedicated systems or as Software as a Service, applications don’t get more mission critical than healthcare.

 

“In critical patient care situations like a stroke,  time is essential. Significant technology  advancements like the Intel® Xeon® processor 5500 series processor combined  with our Vitrea fX brain perfusion application enable the fast processing of  large amounts of image data to provide doctors with quantitative results  related to patients’ regional cerebral blood volume (rCBV), mean transit time  (MTT) and regional cerebral blood flow (rCBF).” Vikram Simha, Chief Technical Officer, Vital  Images

 

Hopefully, you also caught my  recent blog about how Intel Xeons help deliver digital mammogram results even faster and more  efficiently.

 

There are a lot of yet-to-be-announced efforts underway in additional healthcare workloads, BI, drug discovery and other areas. If you’re a partner who has worked with this talented team, or you work with them now, feel free to send along a thank-you! Keep watching for future results, and keep sending us challenges!

Now that I’ve recovered from HP Discover 2011, it’s time to  talk about the good, the bad and the ugly.

 

First, HP (along with key sponsors Intel, SAP and Microsoft) did an excellent job of putting on a great show that pulled together all of HP in one conference for the first time ever. It was a class act, and gave them the opportunity to show off new products, from the high-end 32-socket Itanium Integrity to the entry-level webOS tablet launching in July. HP set up a great Blogger Lounge on the main show floor, providing a convenient central place for connecting (along with all the comforts of home). I had a chance to do a quick video interview with Thomas Jones @niketown588.

 

Kirk Skaugen delivered a great Intel keynote on Monday evening. He strongly confirmed Intel’s commitment to Itanium, along with highlighting Intel’s Ultrabook effort to drive a next generation of lightweight, ultra-portable laptops with the attributes of a tablet and the performance of a PC.

 

As part of his keynote, Martin Fink further reinforced the Itanium commitment from HP when he demonstrated a running Poulson prototype system (on target for delivery in 2012), along with announcing the availability of Virtual Partitions across the entire Integrity portfolio. Of course, the big news in Martin’s keynote was the official HP letter to Oracle threatening legal action over Oracle’s decision to end Itanium support, followed by the more recent decision to file the lawsuit.

 

Finally, in order to avoid any post show letdown, HP waited  until the Monday after Discover to announce the latest round of corporate  reshuffling, which implemented many of the changes that were first rumored  around the time of the annual HP shareholder  meeting in March.

 

Last but not least, the break snacks at HP Discover were awesome, with  more healthy choices than I’ve ever seen in Las Vegas or anywhere!

I just returned from my European trip and wanted to  summarize some of the key customer feedback that I received from a breakout  session on BI and Analytics.

 

I need to preserve the identity of the customers so I’ll  leave out names, and I’m sure that the messages are ones you have heard before,  but perhaps with some amplification.

 

Most of the attendees agreed that BI is an area of investment for their businesses, in line with analyst trends.

There is a lot of interest in in-memory databases, primarily either for real-time BI with response times in seconds to drive production, or for critical decision workloads where time = money. There’s also interest in utilizing in-memory for predictive analysis, because the speed-up would let you consider more scenarios. All of those interested were experimenting now! They want it ASAP, even if they have to roll up their sleeves and work with the vendors.

 

There was a lot of discussion about cloud as a BI resource. Obviously, if you are analyzing data that comes from the web (social media, online sales, etc.), it has some appeal, assuming you can deal with the data security issues. If your data doesn’t originate in the cloud, there is concern about the cost of moving it back and forth. We actually had one customer say they had shipped hard drives because that was a lot cheaper than shipping the data! (Maybe there is a long-term market for HDDs as interchangeable media?!?)

 

Also, there was a lot of excitement about cloud resources as a way to quickly set up business in emerging countries without conventional infrastructure. These are places where cloud works for traditional BI and DB until the datasets get too large, and it lets you ramp up a new business or expand into a new geography.

 

Lastly, there is some concern about consolidation in the BI sector, because customers are enjoying the pace of innovation and some are working with smaller companies. Because no one stack is perfect, they like to pick and choose.

 

It was a great session and always good to hear directly from  end users!
