There are always plenty of new terms looking to join the data center vernacular. You’re familiar with many that have taken root over the past several years: virtualization, cloud, consolidation, mission critical... the list goes on and on. A hot term you’ve probably heard recently is Big Data. There’s a lot of buzz around this one, as evidenced by the many companies introducing products designed to meet the challenges of Big Data. In this post, I’m going to take a look at Big Data and see what its implications could be for data center networks.

 


 

First things first: what is Big Data? The specifics may vary depending on whom you ask, but the basic idea is consistent: Big Data means large sets of data that are difficult to manage (analyze, search, etc.) using traditional methods. Of course, there's more to it than that, but that's a decent enough answer to make you sound semi-intelligent at a cocktail party. Unless it's a Big Data cocktail party.

 

A logical follow-up question is “why is Big Data so...big?” The main cause for these massive data sets is the explosive growth in unstructured data over the past several years. Digital images, audio and video files, e-mail messages, Word or PowerPoint files – they’re all examples of unstructured data, and they’re all increasing at a dizzying rate. Need a first-hand example? Think about your home PC. How many more digital photos, MP3s, and video files are on your hard drive compared to a few years ago? Tons, right? Now imagine that growth on an Enterprise scale, where thousands of employees are each saving gigabytes worth of presentations, spreadsheets, e-mails, images, and other files. That’s a lot of data, and it’s easy to see how searching, visualizing, and otherwise analyzing it can be difficult.

 

[Structured data, for those of you wondering, is data organized in an identifiable structure. Examples include databases or data within a spreadsheet, where information is grouped into columns and rows.]

 

So what to do? There’s no shortage of solutions billed as the answer to the Big Data problem. Let’s take a look at one that’s getting a lot of attention these days: Hadoop.

 

Hadoop is an open source software platform used for distributed processing of vast amounts of data. A Hadoop deployment divides files and applications into smaller pieces and distributes them across compute nodes in the Hadoop cluster. This distribution of files and applications makes it easier and faster to process the data, because multiple processors are working in parallel on common tasks.

 

Let’s take a quick look at how it works.

 

Hadoop is built from two major software components: the Hadoop Distributed File System (HDFS) and the MapReduce engine.

  • HDFS runs across the cluster and facilitates the storage of portions of larger files on various nodes in the cluster. It also provides redundancy and enables faster transactions by placing a duplicate of each piece of a file elsewhere in the cluster.
  • The MapReduce engine divides applications into small fragments, which are then run on nodes in the cluster. The MapReduce engine attempts to place each application fragment on the node that contains the data it needs, or at least as close to that node as possible, reducing network traffic.
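
To make the map and reduce phases a bit more concrete, here is a deliberately tiny, single-process sketch of the programming model in Python. This is not Hadoop's Java API and there is no cluster involved; it only illustrates how work is split into a map step that emits key/value pairs and a reduce step that aggregates them.

# A minimal, single-process sketch of the MapReduce programming model.
# Hadoop's real MapReduce engine runs map and reduce tasks on many nodes;
# this version only illustrates the two phases.
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each word (Hadoop groups keys between phases)."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

if __name__ == "__main__":
    docs = ["Big Data keeps getting bigger",
            "Hadoop splits big data across the cluster"]
    print(reduce_phase(map_phase(docs)))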

 

So who’s using Hadoop today and why? You’ve heard of the big ones – Yahoo!, Facebook, Amazon, Netflix, eBay. The common thread? Massive amounts of data that need to be searched, grouped, presented, or otherwise analyzed. Hadoop allows organizations to handle these tasks at lower costs and on easily scalable clusters. Many of these companies have built custom applications that run on top of HDFS to meet their specific needs, and there’s a growing ecosystem of vendors selling applications, utilities, and modified file systems for Hadoop. If you’re a Hadoop fan, the future looks bright.

 

What are the network implications of Hadoop and other distributed systems looking to tackle Big Data? Ethernet is used in many server clusters today, and we think it will continue to grow in these types of deployments, as Ethernet's ubiquity makes it easy to connect these environments without using specialized cluster fabric devices. The same network adapters, switches, and cabling that are being used for data center servers can be used for distributed system clusters, simplifying equipment needs and management. And while Hadoop is designed to run on commodity servers, hardware components, including Ethernet adapters, can make a difference in performance. Dr. Dhabaleswar Panda and his colleagues at Ohio State University have published a research paper in which they demonstrate that 10GbE makes a big difference when combined with an SSD in an unmodified Hadoop environment. Results such as these have infrastructure equipment vendors taking notice. In its recent Data Center Fabric announcement, for example, Cisco introduced a new switch fabric extender aimed squarely at Big Data environments. You can expect to see more of this as distributed system deployments continue to grow.

 

Big Data isn’t going away. It’s going to keep getting bigger. We’ll see more products, both hardware and software, that will be designed to make your big data experience easier, more manageable, and more efficient. Many will use a distributed model like Hadoop, so network infrastructure will be a critical consideration.

 

So, here are some questions for you, dear reader: Have you deployed a Hadoop cluster or are you planning to do so soon? What network considerations did you take into account as you planned your cluster?

 

We’d love to hear your thoughts.

 

Follow us on Twitter: @IntelEthernet

Since my days in junior high school (many years ago in a galaxy far, far away), I’ve recognized myself as somewhat challenged in proper use of commas, pronouns, adjectives, and the general mechanics of the English written word.  While I’ve improved over the years (thanks to my editorial helpers), a 1971 edition of the Practical English Handbook remains my trusted and constant companion.

 

Having made this confession, it was with some irony that I wrote my latest industry perspective for Data Center Knowledge discussing my fifth fundamental truth of cloud computing strategy: Cloud is a verb, not a noun.

 

In general, a noun is a word denoting a person, place, thing, event, or idea. A verb is typically a word denoting action, existence, or occurrence.  If you’ve followed my Industry Perspectives columns (and thanks if you have), you’ve likely recognized that I tend to characterize the cloud as a complex process that requires alignment of many moving parts. In other words (at least in Bob’s opinion) it’s more a verb than a noun.

 

At recent industry conferences, I’ve met attendees who seem intent on defining the cloud as a discussion about the data center (with sub-topics of virtualization and consolidation) and thin client.   While these two elements are certainly components of the cloud (with recognition that usage models and  bandwidth must be considered first for the client side discussion), they do not really represent the whole.  While it’s not clear to me why this is happening, it seems to be an attempt to slice and dice the cloud topic into bits that are less intimidating.  Who knows?

 

Is this something familiar in your world?  If so, I encourage you to read my industry perspective and join in the discussion.   For more information or answers to your questions, please feel free to contact me on LinkedIn.

Conference season isn’t over yet, and I’m getting ready to leave for Las Vegas and IBM’s annual software conference (IOD). I expect at least 10,000 people; it will be the first coming-out party for Netezza since its acquisition by IBM about a year ago. Given all the hoopla around analytics and big data (and all of the competitive announcements at that conference at Moscone earlier this month), I’m sure we will see a lot of energy around IBM’s new offerings across both systems (and appliances?) and software.

 

Intel does not have a booth on the show floor (after all, this is a software conference), but we’ll be in the IBM System X Series booth showing off systems with the latest Intel Xeon Series processors.  A featured demo will be the IBM Smart Analytics System 5710 (with two Xeon 5600 series processors), along with theater sessions on our Machine Check Architecture. (BTW did you like those great Intel Q3 results?)

 

We will participate in technical sessions (see below) ranging from IBM pureScale to in-memory performance, and highlight products from Informix and Netezza, along with an intriguing customer session from France Telecom/Orange.  If you stop by and pick up one of our “passports” and collect two stamps prior to dropping it off at the booth, you’ll be entered to win a 160GB Intel SSD at the daily booth drawing and at the drawings at each participating session (see details on the passport)!

 

Just for amusement, in case you don’t see me at IOD, I’m sharing my IBM 100th Anniversary (or is it Birthday?) video clip.

 

 

 

I also expect to tweet from the show when interesting things happen, so follow me on Twitter: @panist.

 

Intel Sessions @ IOD:

 

Monday, Oct 24

 

8:15am – 9:45am:

Opening General Session with Intel video:  Congratulations on IBM Centennial

 

10:15am-11:15am:

Intel Diamond Session – The Mission Critical Offerings of IBM & Intel: The Innovation Spiral (Berni Schiefer, David Baker) - Mariner B

 

1:00pm- 1:20pm:

Vendor Sponsored Presentation on MCA-R – Want uptime?  Choose Intel Xeon processors with IBM Software (Jantz Tran) – Business Partner theater Expo floor

 

Tuesday, Oct 25

 

11:15am – 12:15pm:

Joint System x/Software Group/Intel Session – Under the covers of DB2 pureScale with Intel Xeon processors (B. Schiefer, J. Borkenhagen, M Shults) - S. Pacific G

 

1:30pm- 1:50pm:

Vendor Sponsored Presentation on MCA-R – Want uptime?  Choose Intel Xeon processors with IBM Software (Jantz Tran) – Business Partner Theater Expo floor

 

1:45pm – 2:45pm:

Keynote – Brave New World: Appliances, Optimized Systems and Big Data (Intel DB2 pureScale on System x video will be in this keynote)

 

3:00pm-4:00pm:

Intel Diamond Session – France Telecom Orange Open Grid with IBM and Intel Xeon (Soumik Sinharoy, France Telecom) – South Pacific I

 

Wednesday, Oct 26

 

2:00pm – 3:00pm:

Technical Session – Maximizing Performance for Data Warehouse and BI with Netezza, Information Server, and Intel Xeon (Sriram Padmananbhan, Mi Wan Shum, Garrett Drysdale) – South Pacific J

 

Thursday, Oct 27

 

8:15am – 9:30am:

Technical Session – IBM and Intel Collaborate to Improve In-memory Performance (Dan Behman, K. Doshi, Jantz Tran) – Islander I

 

11:30am – 12:30pm:

Technical Session – Performance and Scalability of Informix Ultimate Warehouse Edition on Intel Xeon E7 processors (M. Gupta, J. Tran) – Tradewinds C

 

3:30pm – 4:30pm:

Technical/Business Session – Build a Cloud Ready BI Solution (Sunil Kamath, Larry Weber, K. Doshi) – Islander F

I just got my latest update on the agenda for the Open Compute Summit from the Open Compute Project (OCP), and I really look forward to the event! It will be held in New York City next week. The agenda looks great, with technical workshops on storage, open rack, virtual I/O, systems management, and data center design.

 

 

 

The last Open Compute Project Summit was a huge success. Specifically, I recall the presentations on the platform design. I was enlightened by the next-level detail on the “Vanity Free” engineering (I think I have quoted the weight-reduction statistic, saving six pounds on their servers, multiple times – why shouldn’t designs be minimalist?).

 

I am so excited for the event that I am struggling to choose among the Storage (Big Data is one of the big challenges), Systems Management, and Data Center Design workshops. I guess I'll have to flip that proverbial three-sided coin.

 

I plan to write one or two blogs on the most interesting take-aways from the meeting, so stay tuned!

The biggest challenge facing high performance technical computing is to deliver an Exaflop per second. What makes the problem challenging is not just achieving that scale of computing performance, but doing it within a “reasonable” power budget of 20MW, as Kirk Skaugen recently announced.

 

It’s a performance goal that cannot be achieved without an efficiency breakthrough. But the problem is more than just efficiency. As one of the smart guys I work with at Intel is fond of pointing out to me (whenever I get too crazy about efficiency), his wrist-watch is extremely energy efficient - but also not useful for computing.

 

The story is about performance and efficiency. Neither is sufficient. Both are necessary.

 

So I asked myself, which systems are closest to achieving Exascale goals and how can a rank order be established?  Is there an easy way to look at how close we are to that goal?

 

A good place to start is the Top500, which ranks on performance, and the newer Green500, which ranks solely on efficiency.

 

The problem with those metrics is that they are mutually independent. For instance, in the top 20 of the Green500 you have systems that are near the bottom of the performance heap. And in the upper echelons of the Top500 you have many inefficient systems.

 

For our purposes, the separation of the Top500 and Green500 does not provide insight into the Exascale goal.

 

So I started messing around with several ways to look at the data over a weekend. I thought I would share here the one way of looking at things that seemed fruitful.

 

In the graph below I’ve taken the efficiency and performance data from the most recent Green500 list and plotted performance against efficiency on a log scale.

 

[Figure: Exascalar graph - Green500 performance plotted against efficiency on a log scale]


 

To the graph I have added four things: 1. the publicly stated Exascale performance and efficiency goal, 2. an arrow indicating a scalar quantity gauging how “far” each point is from the Exascale goal logarithmically, 3. iso-power lines at 2MW and 20MW, and 4. the boundary of the “Top 20” based on the scalar value.

 

One thing I like about this representation is that systems with either low performance or low efficiency are naturally excluded. To rank highly, you need good performance and efficiency.

 

Does this approach tell us anything new? One way to tell is to look at the Top 10 based on this ranking. NOTE: This is not intended as a formal re-analysis of the data - this is a blog exploring a concept only. The table below shows how the systems stack up.

 

[Figure: Exascalar list - the Top 10 systems ranked by the combined metric]

 

 

It is clear that while the top “exascalar” ranking closely aligns with performance, efficiency does have a disruptive effect on the ranking. Systems with more balanced scores tend to move up the “exascalar” ranking (for instance the GSIC HP ProLiant system, which ranks 4th in efficiency and 5th in performance, moves up to 3rd in this scheme), whereas relatively inefficient systems, even with high performance, tend to move down.

 

Looking back at the data in the graph, it is certainly intriguing that systems with relatively low performance are, in fact, nearly within the Top 20 range for this “exascalar.” So efficiency leadership may count for something if it can ultimately scale in performance.

 

What I like about this approach is the easy interpretation of the scalar value - the “number of orders of magnitude” remaining to Exascale. It's an efficiency and a performance problem. A value of three means a factor of one thousand away from the goal of delivering 1 Exaflop in 20MW.
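
For the curious, here is one way such a scalar could be computed. The exact formula behind the graph isn't spelled out in this post, so this sketch simply assumes the scalar is the Euclidean distance, in orders of magnitude, from a system's (performance, efficiency) point to the goal point of 1 exaflop/s at 50 Gflops/W (which is what 1 Exaflop in 20MW works out to). The example numbers are illustrative, not taken from the lists.

# Hypothetical sketch of an "exascalar"-style metric: the distance, in orders
# of magnitude, from a system to the goal of 1 exaflop/s within 20 MW.
import math

GOAL_PERF = 1.0e18            # flop/s (1 exaflop/s)
GOAL_EFF = GOAL_PERF / 20e6   # flop/s per watt at a 20 MW budget (50 Gflops/W)

def exascalar(perf_flops, eff_flops_per_watt):
    # Distance to the goal in log10(performance) x log10(efficiency) space.
    d_perf = math.log10(GOAL_PERF / perf_flops)
    d_eff = math.log10(GOAL_EFF / eff_flops_per_watt)
    return math.hypot(d_perf, d_eff)

# Example: a 10 petaflop/s system running at 2 Gflops/W (illustrative numbers)
print(round(exascalar(10e15, 2e9), 2))   # about 2.44 orders of magnitude from the goal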

 

It remains to be seen how the Exascale challenge will be won, of course. New announcements of plans to improve systems are coming out regularly.  Perhaps this approach, or one like it, which looks at both efficiency and performance in the nose-bleed range of supercomputing, will get us beyond looking at performance or efficiency separately and help us to understand which architectures, systems, and approaches are best closing the gap to the solution. And that ultimately will translate to a win for everyone.

 

As always, I’m interested in your thoughts and insights.

 

Is it helpful to understand how close systems are to exascale levels of capability? Does the “exascalar” approach provide insight into that? Does it address the question, “which system is closest to achieving Exascale goals of performance and efficiency?” better than looking at performance and efficiency separately? How would you improve things? (For instance, one could plot power instead of efficiency, but when I looked at it, it seemed to provide less insight.) What alternative schemes might be proposed as ways to look at performance and energy efficiency in supercomputing, and what insights do they offer?

 

 

Comments are welcome.

I was at a conference recently, and while I demonstrated the Xeon E7 processor RAS (Reliability, Availability, Serviceability) features, I discussed RISC to IA migration with the attendees. Occasionally I got blank stares. "Oh, we’re doing server migration," was a common response, along with "we’re migrating our UNIX servers to Linux."

 

Perhaps we here at Intel get ourselves tripped up over our usage of terms. Many of us live in the world of processor architectures.  This leads us to use the terms RISC to IA, which refer to the different processor architectures.  Moving up the stack to the hardware layer, some refer to Server Migration or Platform Migration.  Let’s take another step up the stack to the operating systems. Here, what we discuss is referred to as UNIX to Linux migration.  This also includes moving applications from proprietary servers to the cloud and even data center consolidation where large proprietary servers are replaced by Virtual Servers running on Intel based hardware.

 

If you’re doing any of the above, then you need to read these blogs for hints and suggestions.  You should also visit the server migration website for helpful tips.  The website is public and is filled with white papers, case studies and How-Tos. We add to the site all the time, so keep visiting us!

 

The term proprietary hardware usually refers to servers that are marketed uniquely by one vendor. For example, POWER7, sold only by IBM, and SPARC, sold by Oracle, both fall into this definition. While these processors have many good features, they have a serious limitation: once you buy into the server line, you are usually limited to servers based on the same processor family for future upgrades. These upgrades can involve significant physical changes in the data center. Often, the term ‘forklift upgrade’ is used, as this type of upgrade entails the replacement of an entire rack of hardware.

 

The term ‘Open Servers’ is used for hardware that utilizes the x86 architecture.  This hardware is characterized by two processor vendors and numerous choices for server platform manufacturers, as well as form factors.

 

Another term used here is commodity server. This usually implies something cheap, but low-cost Xeon processors are anything but ‘cheap’ in the pejorative sense. Here, commodity refers to availability from a wide range of vendors in a variety of form factors.

 

Another term used at a very technical level is Endianness. This refers to the byte order of multi-byte data. In a crude example, say the number 1758 is represented by two bytes. On a Big Endian machine, like SPARC or POWER, the first byte has 17 and the second byte has 58. On a Little Endian machine, like the Intel Xeon, the first byte has 58 and the second byte has 17. There are a lot of reasons for each architecture, but suffice it to say that multi-byte data has to be converted before it can be used by a machine with a different byte order. While some programs are written to be ‘endian neutral,’ most databases are not, and custom-built programs aren’t either.
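
The 17/58 split above is deliberately crude; a decimal number isn't actually stored that way. For readers who want to see the real byte layout, here is a small Python sketch showing how the 16-bit value 1758 (0x06DE) is ordered on big-endian and little-endian machines, and what happens if you read one format as the other.

# Byte order of the 16-bit integer 1758 (0x06DE) on each architecture.
import struct

value = 1758
big = struct.pack(">H", value)     # big-endian (SPARC, POWER):  b'\x06\xde'
little = struct.pack("<H", value)  # little-endian (Intel Xeon): b'\xde\x06'

print(big.hex(), little.hex())     # 06de de06
# Interpreting little-endian bytes as big-endian gives the wrong number:
print(struct.unpack(">H", little)[0])   # 56838, not 1758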

 

This endian difference is the reason we have to go through this migration protocol. If you search for endian difference on Intel.com, you can find white papers and even a video addressing how to code for this.

 

I want to end with where the term ‘endian’ came from (I worked with Danny Cohen 10 years after he wrote this paper). Almost 300 years ago, Jonathan Swift wrote Gulliver’s Travels, a satire on the politics of the day. He had the characters of the world he created go to war over which end of an egg to break, the big end or the little end, when taking the shell off of a hard-boiled egg. To me, the bottom line is that we have to be careful how seriously we take ourselves. While endian differences created considerable work for us, there should never be an argument about who is right or wrong.


Capacity Planning for IaaS

Posted by brunodom Oct 17, 2011

This is the last post from a series of articles about Capacity Planning for Cloud. I started with capacity planning for SaaS and Capacity Planning for SaaS Part 2, and in the next one I discussed PaaS. These topics are not co-dependent, i.e. you don’t need to have an IaaS in place to have PaaS, or a PaaS to have SaaS. However, the concept of layers makes it easier to understand the possible issues that can be avoided in the lower stack of a cloud infrastructure.

 

Usually, the biggest concern for an IaaS solution is how to architect the infrastructure in order to provide the capacity and performance flexibility required for a multi-tenant environment. IaaS is basically storage, network, processor and memory wrapped in a service offering. Thus, what I want to discuss are some underlying trends on each of these components.

 

Storage

 

The base of the IaaS stack is virtualization. Most of the challenges of a purely physical approach, such as underutilization of server resources, difficulty in protecting server availability, and dealing with disaster recovery, can be alleviated with virtualization. However, the biggest remaining challenge is storage management, due to the complexities associated with hypervisor resource management and the shared storage model.

 

In an IaaS solution, usually there are two approaches to the design of the storage solution: scale-up and scale-out. The decision about which to adopt will affect the overall cost, performance, availability and scalability of the entire solution.

 

While the topology decision involves a combination of functionality, price, TCO, and skills, the biggest differences between the scale-out and scale-up topologies are summarized in the following table:

 

 

                                 Scale-out                        Scale-up
Hardware scaling                 Add commodity devices            Add faster, larger devices
Hardware limits                  Scale beyond device limits       Scale up to device limit
Availability, resiliency         Usually more                     Usually less
Storage management complexity    More resources to manage;        Fewer resources to manage
                                 additional software required

 

 

Usually, scaling up an existing system results in simpler storage management than a scale-out approach, because the complexity of the underlying environment is reduced, or at least known. However, as you scale up a system, performance may suffer due to the increasing density of shared resources in this topology. In contrast, with a scale-out topology, performance may increase with the number of nodes, since more CPU, memory, spindles, and network interfaces are added with each node.

 

If you are planning a local private cloud in a greenfield environment, the scale-up approach can serve you very well because of its management simplicity. If you are designing a very large public cloud and want the ability to grow in small increments with theoretically no upper limit, you should consider the scale-out approach.

 

Storage is a key component in cloud computing. Now, there are various options based on workload, as shown below:

StorageTypes.png

There isn’t a “one solution fits all” in a cloud environment. The architecture should be built to decouple virtual machines from the physical layer, with a virtual storage topology that allows any virtual machine to connect to any storage in the network. This is a requirement for a well-designed IaaS.

 

Network


In a virtual environment, the consolidation factor reaches the point where a single physical host runs 15-25+ virtual machines, and the amount of network traffic on that host is equivalent to what a top-of-rack switch used to carry. Usually, at least 8x GbE interfaces are required to handle VM network traffic plus hypervisor management traffic. Besides the Ethernet interfaces, it is not uncommon to use 2x HBA interfaces for storage connectivity. Considering data center optimization best practices for increasing rack density and managing 10 server units per rack, that is 120 cables just for the servers (i.e. 2x power cables + 8x GbE cables + 2x FC cables per server).

 

Managing a high-density server environment adds a lot of complexity to connectivity. Unified fabric is a key technology for IaaS. A unified networking approach with 10GbE can reduce the number of cables from 10 to 2 per server while providing 25% more throughput. At the same time, it allows the flexibility to dynamically allocate bandwidth to VMs and to balance between storage and Ethernet traffic with SLA policies.
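
The cable math above is easy to sanity-check. Here is a back-of-the-envelope sketch; the per-server cable counts are the ones assumed in this post, not a universal rule.

# Back-of-the-envelope cable count for a 10-server rack, comparing the
# 1GbE + FC design described above with a unified 10GbE fabric.
servers_per_rack = 10

def cables_per_rack(power=2, ethernet=8, fibre_channel=2):
    return servers_per_rack * (power + ethernet + fibre_channel)

print(cables_per_rack())                              # 120 cables (2 power + 8 GbE + 2 FC each)
print(cables_per_rack(ethernet=2, fibre_channel=0))   # 40 cables with 2x 10GbE carrying LAN + storage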

 

In order to deal with availability and improve flexibility, the best practice is to configure both interfaces for use by each VM and connect each interface to a different 10GbE switch. The following picture illustrates this configuration:

 

vm-traffic.png

 

Personally, I don’t see much reason to keep the 1GbE LOM just for manageability. 10GbE has enough bandwidth and reliability that you do not need to consume a 10GbE switch port or place a second top-of-rack switch for 1GbE.

 

Unified Networking definitely makes the capacity planning much easier!

 

Server


The physical server choice should be the result of a collection of factors: hypervisor licensing model, expected VM templates, capabilities, network and storage architecture, data center facilities, and budget constraints.

 

To illustrate this, I made some assumptions about a fictional environment of 1,000 servers, where I expect that 80% of the VMs have 1 vCPU with 3GB of memory, 15% have 2 vCPUs with 8GB, and only 5% have 4 vCPUs with 16GB, for an average of 4.4GB per VM.

 

Assuming that in this scenario I used a rack server configuration from the Dell website and adopted the VMware vSphere 5.0 Enterprise Plus SKU with the new license model, we get the following spreadsheet, based on the amount of memory installed. For this exercise, I assumed that the total amount of physical memory equals the virtual memory, using 100% memory allocation (no overcommit).
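
Before looking at the spreadsheets below, it may help to see the sizing arithmetic itself. This sketch reproduces the weighted-average memory calculation from the stated VM mix and then estimates host counts for a few illustrative memory configurations. It assumes the 1,000 figure refers to VMs, ignores CPU, failover headroom, and licensing, and the host memory sizes are hypothetical rather than the Dell configurations used in the original tables.

# Rough sketch of the memory sizing math behind the spreadsheets below.
import math

vm_mix = [(0.80, 3), (0.15, 8), (0.05, 16)]   # (share of VMs, GB of vRAM)
total_vms = 1000                               # assumption: 1,000 VMs

avg_gb_per_vm = sum(share * gb for share, gb in vm_mix)
total_vram_gb = total_vms * avg_gb_per_vm      # 100% allocation, no overcommit

print(avg_gb_per_vm)                           # 4.4 GB per VM on average
for host_ram_gb in (128, 192, 256, 512):       # hypothetical host memory sizes
    hosts = math.ceil(total_vram_gb / host_ram_gb)
    print(host_ram_gb, "GB hosts ->", hosts, "servers")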

For 2-socket servers:

2STable.png

And for 4-socket servers:

4STable.png

Now, plotting these two tables together, we can see that a 2-socket server is the best choice for this particular environment:

2Sand4Splot.png

 


There isn’t a “one size fits all” for IaaS. Virtualization gives us the flexibility to allocate computational resources, but that doesn’t mean we always make good decisions; in fact, it’s often the opposite. With virtualization you can remediate a bad decision -- but our decisions still have a profound impact on the TCO of the solution.

As a person that has spent the bulk of my career in planning, strategy and marketing roles in technology company headquarters locations, I’ve always enjoyed the opportunities I’ve had to get out and speak to customers.  I always find it immensely useful to get the opportunity to really engage folks to understand their real challenges, discuss potential solutions and get real feedback on our technologies and messages in order to make our solutions better and as a result make the customers’ work lives better.  I recently had several such opportunities at VMworld in Las Vegas.  There was one opportunity that I found particularly enriching—as it offered multiple benefits.  I was fortunate to moderate a panel discussion where we brought together a number of security experts from several Intel business units and McAfee to discuss cloud security.

 

The format was a brief introduction to set context and scope, after which each of our panel experts (Steve Orrin from Intel, Kim Singletary from McAfee, Aric Keck from Intel, and Ned Smith from Intel) took about five minutes to discuss their view of the market and the key technologies and use models that most excite them. As a security enthusiast myself, it was great to just absorb some of that knowledge and perspective. But then came the next benefit, and really the icing on the cake: we had 30 minutes to open the floor to the audience so they could customize the discussion to address their own issues and concerns. For me, this is where the rubber hits the road. Apparently, I was not alone, as we had lines of folks coming to the microphones to ask questions and really engage our experts until we finally had to close the session; we ran out of time before the audience ran out of questions!

 

I know our experts enjoyed the dialog, as did the audience members who stopped us in the halls or lingered to talk with the speakers even as we were being semi-forcibly cleared from the room to make way for the next session. Even when we weren’t discussing Intel technologies specifically, the context such discussions provide for everyone is really enriching.

 

For this reason, I was very pleased when Kathy Browning and the team organizing the Intel presence at VMworld Europe in Copenhagen offered to organize a similar panel at that event. As great as it is to get customer feedback and perspective, it is equally vital to get as broad a perspective as we can, so gaining this opportunity to engage with European participants is a boon.  Hopefully they get the same benefit that their US counterparts experienced.  Iddo Kadim from Intel will moderate the session in Copenhagen and we were again fortunate to secure the participation of experts with broad experiences from across Intel and McAfee.  Marco Righini from Enterprise Solutions, Thomas Maxeiner from McAfee, Andreas Carlson from Nordic Edge and Rob Kypriotakis from Datacenter Solutions Group will join Iddo in the same format discussion.

 

I hope Iddo and the team get the rich interaction we experienced in Las Vegas.  I hope that the audience takes this opportunity to get their questions answered and really customize the discussion to their interests and concerns. I’d encourage anyone that has the ability to visit VMworld in Copenhagen to take the opportunity to participate, engage and enrich us all with their shared perspective.  The session is titled “Client to Cloud Security Panel: Enhancing Protection at All Layers” and is session number SPO3979 in the event guide. It is on Wednesday October 19, 2011 in Hall B5-M2.

Last month, I introduced you to the first winner of our IT Tuneup Contest for small business: Josh Shannonhouse from Button Dodge in Kokomo, IN. Now it’s time to meet winner #2: Dr. Eric Oristian, a pediatric surgeon from Silver Spring, MD.

 

Dr. Oristian’s situation was typical of many small businesses. As a busy physician with a successful practice, he was focused entirely on delivering quality medical care without the time or inclination to manage a small-business network server – or even to enter the Intel IT Tuneup Contest. Luckily, his IT provider Joe Cox of EastNet Communications spotted the contest in a channel publication and his daughter Christy, on a break from college, submitted a video entry for her father.

 

 

 

 

In her video, Christy Oristian described a dire situation: “Daddy’s server—the one that keeps track of all our computers and patient information—is from the year 2000. It uses tapes for backup and the operating system is something called Windows NT, which I am starting to think may have been the operating system on the world’s very first computer.”

 

Many small businesses discover that it’s time for a hardware upgrade when they need to update their software. Dr. Oristian’s old Windows NT server could no longer support the recommended version of his practice management software. Suddenly, he was looking at steep expenses to upgrade both.

 

Fortunately, Dr. Oristian was selected as a winner of the IT Tuneup Contest, and could rely on his current IT service provider, Joe Cox, to implement the prize package, which included a well-equipped Intel® Xeon® Processor 400 Series-based pedestal server. One of the major benefits of the new server was its remote management capabilities. As Dr. Oristian put it, “Now I can stop trying to be the onsite IT guy and concentrate on my real job: taking care of patients, and not the computers.”

 

Read the full case study to learn more about the details of Dr. Oristian’s small business IT makeover.

Almost a year ago I posted an article about why the time is right for cloud computing.

 

In that post, I spoke a lot about the changes that made the cloud an interesting option. I will stop here to define my terms (note: I did not say define the terms, but define my terms as I am using them, at least for today). For the next minute or so, cloud is an environment where I can host some of my business compute functionality while retaining management and control of the “applications” and “servers.” Cloud means just about everything to somebody today...

 

 

jonimitchell.png

Here is where I say "I've looked at the cloud from both sides now," but then I get that song by Joni Mitchell stuck in my head for the rest of the day, so I am not going to say that – no way.  This also is a pretty good indicator of my age demographic when a Joni Mitchell song can get stuck in my head.

 

 

Moving on, the key idea in my earlier post was that virtualization has changed the game. Virtualization provided a container that made the future of cloud technology possible. Intel has done a lot to make virtualization better. With the myriad of technologies (VT-x, VT-d, VT-c, ...) layered into the processor, chipset, network adapters, and more, Intel made it possible to virtualize everything. With overhead as low as 4-6%, why not virtualize every server?

 

Finally, I want to talk about some of the other “barriers” to cloud adoption.  Virtualization made it possible, but there are reasons not to play there today; namely safety/privacy/security.

 

The first Intel technology I want to mention is AES-NI (an oh-so-clever, engineering-driven name). AES-NI is a set of new instructions supported across all current Intel Xeon processors. These instructions are called by encryption/decryption algorithms to improve encrypt/decrypt performance by as much as 400%. What this enables for the folks counting coins and running servers and applications is an end to the encryption trade-off. If encrypting databases uses an extra 10-15% of my server, I might sweat the cost/benefit before I click the encrypt checkbox. With the overhead pushed down to 2 or 3%, it is a no-brainer. Safer is better, and I can afford to encrypt everything. Even if someone/thing gets access to my data on disk, it will look like this #$%^&*()_ :). Well, not exactly, but it will not be valuable. AES-NI delivers the encryption performance to eliminate the encryption cost/benefit gamble.
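
If you are wondering whether a given Linux host even exposes the instructions, a quick (and admittedly crude) check is to look for the 'aes' flag in /proc/cpuinfo; whether your encryption library actually takes advantage of it depends on how that library was built. A minimal sketch:

# Linux-only check for the AES-NI instruction set: the 'aes' CPU flag in
# /proc/cpuinfo indicates hardware support.
def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return "aes" in line.split()
    return False

print("AES-NI available:", has_aes_ni())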


The second technology that will make clouds “safer” is Intel TXT (aka Trusted Execution Technology). Here is Ken’s explanation of TXT and its benefit: In a non-virtualized world, you load a series of applications onto your server. The operating system has various rules about what code can see what, and what code can touch certain bits of memory. This is good enough for most businesses, and as long as they have control of the operating system and take appropriate steps to prevent OS corruption, they feel ‘reasonably’ safe with their software jewels on the server. In reality the hardware has access to “everything,” but hacking the processor and chipset has to date been sufficiently difficult to make this situation “good enough.”

 

Then, along comes virtualization. In a virtualized environment I can still have that sense of blissful safety in my management and control of my operating system in My VM. The issue comes in what is under my VM. Instead of raw iron (silicon and microcode) there is a hypervisor. This hypervisor is a chunk of software that has God-like access to anything in any of the VMs it controls. Actually, it is a lot like the hardware in the non-virtualized example. The issue is the “soft” part of software. A hypervisor could be corrupted. It's not trivial and not common, but quite possible. This is what TXT was built to address. TXT “measures” the boot of the hypervisor and can assure that this critical chunk of software has not been tampered with. TXT enables a VM owner to trust that the hypervisor has not been corrupted, and therefore trust the cloud platform.

 

With VT, AES-NI, and TXT, Intel has made the cloud explosion possible.

Places to go, people to see, things to do!

 

 

 

If Oracle Open World didn’t provide at least 3-4 enticing options for each and every waking hour, then either it failed to deliver or you shouldn’t have attended. For the first time ever, Oracle streamed all of the keynotes live on YouTube. If you are intrigued by any of my comments, feel free to look for Oracle Open World 2011 to experience the messages and entertainment first hand. Most, if not all, partners were also posting items.

 

Larry Ellison opened Sunday night with a very hardware centric pitch. He reviewed the Exadata and Exalogic systems and announced the new Exalytics box.  Based on four Intel Xeon E7-4800s (10 cores each for a total of 40 cores) with 1TB of memory, the Oracle Exalytics BI machine features optimized versions of Oracle BI Foundation Suite (Oracle BI  Foundation) and the Oracle TimesTen In-Memory Database for Exalytics.

 

It constitutes the third leg in the stool, connecting to Exadata and/or Exalogic via InfiniBand. The query responsiveness (real-time analytics) and scalability provided the latest example of “hardware and software engineered to work together.” After extolling the virtues of open commodity hardware married with optimized software and techniques like data compression (whilst teaching us that 10x10=100), Larry then suavely transitioned to the SPARC T4 SuperCluster and announced his intention to go head to head with IBM Power in database performance. The SuperCluster also uses 48 Intel Xeon “Westmere” processors for storage processing, so some portion of its performance comes from Xeon! The next morning saw Thomas Kurian announce the Oracle Big Data Appliance, Oracle’s embrace of Hadoop, and a new NoSQL database to process your unstructured data prior to loading the results into one of the Exa boxes.

 

Tuesday provided an opportunity for Kirk Skaugen to deliver Intel’s keynote on Cloud Vision 2015: the Road to 15 Billion Connected Devices. Kirk covered Intel’s role as a trusted advisor to the Open Data Center Alliance, and our vision for open clouds that are federated, automated, and client aware. These clouds would support intelligent connected devices ranging from sensors to cell phones, Ultrabooks, smart signs, and automobiles. Kirk also reviewed Intel’s refresh of the Xeon server product line, including the Xeon E3 and E7 and the already shipping, soon-to-be-announced Xeon E5 (Sandy Bridge) products. Kirk's slides are posted on Slideshare and his full presentation is available on the Oracle Open World page.

 

Larry’s Wednesday keynote was totally software focused, and introduced the new Oracle Public Cloud. Oracle’s new Public Cloud offers Fusion Middleware and Applications both in a platform as a service or an appliance as a service  configuration. He also announced and demoed the Oracle Social Network (can a movie be far behind?). The keynote allowed him a forum for his continuing  feud with Marc Benioff and Salesforce.com, who he might actually dislike as much as some of his other competitors. Catch the video if you’d like to hear  the “roach motel” comments!  It's never dull when Larry is on stage!

Some relationships seem to reflect the natural order of things. Examples that come to mind include night giving way to day, rain and thunder, winter turning to spring and (at least in the Deutsche household when our boys were younger), peanut butter and jelly.

 

It seems to me this same type of relationship exists between a cloud architecture and a services-oriented enterprise (an SOE comprises a services-based infrastructure, services-oriented architecture, and services-oriented enterprise management). In my latest industry perspective on Data Center Knowledge, I discuss my fourth fundamental truth of cloud computing strategy: services-oriented taxonomy is not optional.

 

As in earlier posts, I use an example from the automotive vertical to demonstrate why SOE is a necessity if you’re looking to build a robust cloud architecture. To this end, I use Henry Ford’s Model T to draw a parallel between assembly-line standardization and interchangeable parts and the need for a cloud-based application to integrate back into a company’s application portfolio. Paralleling the experience of the Ford Model T production lines, whether and how this is done drives both the assembly and lifecycle costs of the entire cloud effort.

 

At the moment, I simply don’t see a clear path for the cloud to fully deliver its potential without some kind of services-based architecture running on the back end of the enterprise. As always, I hope you can share your company’s real-world experience of moving to the cloud to either support or contradict my conclusion.

 

Read the post and join in the discussion. For more information or answers to your questions, please feel free to contact me on LinkedIn.

Download Now

 

To support growth, global transportation company Aramex needed to increase capacity at its data centers worldwide, enhancing enterprise efficiency and delivering better services to end users. Aramex virtualized its Dell PowerEdge* blade servers, based on the Intel® Xeon® processor 5550, with VMware and Dell EqualLogic* storage. It chose Dell ProSupport* to help maximize performance.


Aramex was able to consolidate its server environment by around 50 percent and reduce energy use by approximately 40 percent. Employees can access files approximately 30 percent quicker. And the IT team saves up to 13 hours a month on storage management.


“We lowered energy consumption by approximately 40 percent with Dell,” explained Samer Awajan, chief technology officer for Aramex. “The servers’ Intel® technology has played a key role in minimizing power consumption.”


For the whole story, download our new Aramex business success story.

 

 

*Other names and brands may be claimed as the property of others.

Never do something in IT that ends up in the newspaper and embarrasses your boss.  That’s the second rule of business.  The first rule is ‘Never Surprise your Boss.’

 

It appears that the Bank of America team that manages the on-line banking website system forgot this.  They did a feature update AND a platform migration at the same time.  This works only when it is done with careful planning and a lot of testing.

 

I do not know if the server platform migration was a RISC to RISC migration or a RISC to IA migration. Either way, it appears that basic steps were overlooked.

 

The first basic step is planning. Plan for how the application upgrades intersect with the platform migration. Plan to ensure that the application upgrades will not burden the overall service to the end customer. Plan the day of the release. Conduct a Proof of Concept (PoC) and test the whole configuration there before Release To Market (RTM).

 

Enterprises know the traffic volumes for their web sites. In the case of a bank, they know that traffic increases at the beginning of the month, when paychecks and Social Security payments come in and a lot of bills are paid. In the case of a retail firm, the web site and back end need to be locked down by Halloween to shake out problems before ‘Black Friday.’ I did a retail application (RETEK) RISC to IA migration starting in August, and we were hard pressed to get everything done by early November.

 

Following my blog theme of RISC to IA migration, testing is crucial. Test after you complete the PoC to determine whether the new hardware really handles the load. Frequently, you may discover that you need new hardware for your production system. This is the hardware you target for the dress rehearsal of the production migration. Test it to ensure that the application meets the firm’s Service Level Agreements (SLAs) and that the eventual production migration won’t end up in the paper.

 

A quick perusal of the web shows that there are a wide range of testing tools for web sites and application performance. I don't recommend any particular tool; testing tool selection must take into account your budget and your environment.

 

Plan the testing to have the application hit by more users than expected in production.  With some testing tools, this can get expensive. But what is the cost of an embarrassment where the application doesn’t perform and it causes you to lose customers?
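
As a rough illustration of the idea (not a substitute for a real load-testing tool), here is a minimal concurrency smoke test using only the Python standard library; the URL and user count are placeholders you would replace with your own.

# Minimal concurrency smoke test: hit an endpoint with more simultaneous
# users than expected in production and record latencies and errors.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/login"   # hypothetical endpoint under test
CONCURRENT_USERS = 200              # set above your expected production peak

def one_request(_):
    start = time.time()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            ok = resp.status == 200
    except Exception:
        ok = False
    return ok, time.time() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_request, range(CONCURRENT_USERS)))

latencies = sorted(t for ok, t in results if ok)
errors = sum(1 for ok, _ in results if not ok)
print("errors:", errors, "p95 latency:",
      latencies[int(len(latencies) * 0.95) - 1] if latencies else "n/a")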

 

Testing can reveal problems before the application is released.  One firm had a large Bill of Materials for their product all in one child table in an Oracle database.  The BOM explosion was perfectly fine in the original environment but testing revealed that performance was terrible on the new, more powerful platform.  Since this was done prior to the release of the application to the users, root cause analysis could be done and it did not affect the business.   (The problem was an instruction in one processor that wasn’t in the other processor.)

 

Once you complete testing, the team can ensure that the production migration will result in adequate performance to meet the firm’s requirements. Once completed, the team can plan for the production migration. Plan carefully: avoid critical dates like the end of the month for a bank, the weeks after Halloween for a retail firm, or the period around a launch date for firms working with NASA. Most enterprises know when the application environment should remain frozen. Give yourself a week or so on either side of those dates for slip-ups and other unforeseen occurrences, and give yourself time to test the final migration release if you can.

 

 

The world lost a leader this week.  Many of the comments about Steve Jobs point back to this commercial as a key to understanding him.  I agree with them.

Think Differently – RIP Steve Jobs

In today’s data centers, many organizations maintain an Ethernet network for core networking and a Fibre Channel network for storage traffic. As just about everyone knows, this well-established approach to networking comes with its challenges—in the form of different protocols, different hardware, different management tools, and different skills sets for administration.

 

In an exploration of the solution to these data center and network challenges, the animated narrator of a new video from Intel expresses the problem in simple terms: “The results are often high complexity, high costs, and high blood pressure.”

 

So what do you do to lower these dynamics in your data center? This how-to video offers a simple prescription: unified networking based on Intel® Ethernet 10 Gigabit (10GbE) products.

 

In particular, this video demonstrates how to configure a Fibre Channel over Ethernet (FCoE) storage solution using products from Intel, NetApp, and Cisco. The video uses animated graphics and many screen captures to walk you through the process of creating a unified network based on a tested reference architecture. Along the way, the narrator offers helpful tips to smooth out deployment wrinkles.

 

If you’re considering moving to a unified network, this video is a great place to begin your planning efforts. In just 17 minutes, you’ll gain an up-close view of how it’s done—and just how easy it can be.

 

Truthfully, though, we can’t promise that unified networking will lower your blood pressure. But it can sure make life simpler for your network administrators.

 

For the deeper dive, check out Unified Networking with Intel® Ethernet 10 Gigabit Server Adapters and NetApp Storage.

It’s now playing on a YouTube screen near you.


The new "Outsourced CIO"

Posted by Billy Cox Oct 5, 2011

This post originally appeared as an Industry Perspective on Cloud Computing on Data Center Knowledge.

 

 

I had a chance this week to speak with the CEOs of a number of small companies. One of the things that really jumped out at me is how hard it is for these small companies to get a “CIO.” Of course, they could hire a person, but most are not large enough to justify a full-time CIO. What really struck me is that, thanks to cloud, the role of this “outsourced CIO” has a different meaning for these companies than it did even just a few years back.

 

1. The “outsourced CIO,” being the face of an IT organization, is tasked with creating value through IT. We (Intel) built a CIO white paper that makes this point: IT is part of the value creation machine (not a cost center). For the small or medium business, this means that whoever is acting as the CIO has to really help drive their partner’s business, not just run their IT. For the myriad of resellers and SIs that act as ‘outsourced’ IT for small and medium businesses, this is a fundamentally different view of their role – assuming they aspire to be their partners’ “outsourced CIO”.

 

I would argue that cloud has not only made this kind of role practical, it has made the need for this role essential. It is practical because the “outsourced CIO” is far more likely to understand the options and nuances of selecting services for the business than the business itself would be. It is essential because the small or medium business cannot afford to spend time or money learning IT (after all, they do have a business to run).

 

2. Outsourcing has been around for a long time. But until salesforce.com made the SaaS model practical and popular for all businesses, outsourcing a specialized function was a rare business model. Now, with cloud, we have a multitude of specialized functions to select from, all delivered as SaaS, meaning that no hardware is purchased and in some cases a contract may not even be required.

 

For the “outsourced CIO”, this means a LOT more partners and a lot more interpretation of the business requirements in that ocean of options.

 

For example, it means that the security requirements of the small or medium business need to be very well understood. In a traditional enterprise model where everything was hosted ‘behind the corporate firewall’, it was easy (“just buy more servers”). However, in the cloud or SaaS model, we have to actually evaluate the security requirements of an offering and make a judgment as to the suitability. Alas, the days of “just buy more servers” are long gone.

 

If you are the CEO of a small or medium business: Who is your CIO?

If you find yourself in the role of “outsourced CIO”: Are you acting like a CIO, or just the manager of IT?

So the theme here at Oracle OpenWorld 2011 is "engineered for innovation," and it's great to see Intel Xeon platforms dominate the line-up of engineered solutions. It's also great to know that by taking an open approach you can engineer your own solution of choice, and my session on performance and scalability with Eric Wan focused on some tips on how to go about achieving this. For example, I have had a number of follow-up questions around Turbo Boost, so here is the link from Julian Dyke (my co-author on Pro Oracle 10g RAC on Linux) and here is Julian's PL/SQL.

 

SET SERVEROUTPUT ON
SET TIMING ON
DECLARE
  n NUMBER := 0;
BEGIN
  FOR f IN 1..10000000
  LOOP
    n := MOD (n,999999) + SQRT (f);
  END LOOP;
  DBMS_OUTPUT.PUT_LINE ('Res = '||TO_CHAR (n,'999999.99'));
END;
/

 

If you were in the session, you will know the difference between performance and scalability. This routine is single threaded, so the focus is most definitely on the single-threaded performance side, but it gives great insight into the Intel Turbo Boost feature. Of course, calculating mathematical routines to measure performance is not new, but I really like Julian's idea of a test for Oracle DBAs that can be run on any Oracle instance on any system, including an ASM instance.

So what is Turbo Boost? Turbo Boost allows an increase in the frequency of your cores when operating conditions allow. From an Oracle perspective, the benefits are mostly seen when running single-threaded queries or PL/SQL.

Your first port of call should be ark.intel.com to see if your CPU supports Turbo Boost. For example, checking the site I can see that the E7-8870 has a clock speed of 2.4GHz and a max turbo frequency of 2.8GHz. If your CPU does support Turbo Boost, you should then check your BIOS settings to see if it is enabled on your system. (You have checked your BIOS settings for maximum Oracle performance, haven't you?)

If you have Turbo Boost and it's enabled, and you are running Oracle on Linux, you might think you can look in /proc/cpuinfo to view the turbo frequency; after all, it does change dynamically. But you won't actually see it there: you need the turbostat utility, included in the pmtools package, to view the turbo frequency.

Using the PL/SQL test in the session, we showed an example on the Intel Xeon Processor X5680, with a clock speed of 3.33GHz and a max turbo frequency of 3.6GHz. That is a frequency boost of 1.08X, and the PL/SQL test ran in 8.89 seconds with Turbo Boost off (at the BIOS level) and 8.21 seconds with it enabled (again at the BIOS level): an Oracle performance gain of 1.08X that directly corresponds to the CPU feature.
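
If you want to sanity-check a result like this yourself, the arithmetic is simple enough to script; the numbers below are the ones quoted above.

# Does the measured PL/SQL speedup line up with the rated base-to-turbo ratio?
base_ghz, turbo_ghz = 3.33, 3.60        # Xeon X5680 rated frequencies
t_off, t_on = 8.89, 8.21                # PL/SQL test times (seconds)

print("frequency ratio:", round(turbo_ghz / base_ghz, 2))   # 1.08
print("measured speedup:", round(t_off / t_on, 2))          # 1.08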

Remember, this gain is on one core, so when you are running Oracle workloads these incremental gains add up to process your workload faster. And now you know how to test whether you have turbo boosted your Oracle workload.

When it comes to protecting the security of your assets in a cloud environment, the core questions are: What do I need to know and what do I need to do?

                                                                                        

These are questions I, together with Brian Foster from McAfee, will address in an upcoming session—“Do I need a private cloud?”—at the McAfee FOCUS Security Conference, taking place Oct. 18-20 in Las Vegas. While we can’t explore these questions in depth in this post, we can at least get started down the path.

 

Before we start though, we need to have a clear picture of the “asset” we are securing. If your company produces highly specialized, high value products, then the asset has high value and demands greater protection. If your company produces open source software, then perhaps a lesser degree of protection would suffice. With this in mind, consider the following:

 

1. Understand the services you are consuming and the associated risks.

Many organizations don’t have a clear view of the cloud services they are consuming and the risks those services pose to the organization. Let’s take a simple example: Are you using Gmail or hosted Microsoft Exchange for your company’s email? While both email services are reasonably secure, Exchange is generally considered to be more appropriate for corporate environments.

 

Once you have a clear picture of the asset, you will then need to make certain that the security of the services is appropriate.

 

2. Provide the proper security training for all employees.

Your own people are one of the keys to overall security, and one of the risks. If, for example, a single employee opens a malicious attachment on an email message, you could end up with a significant breach in security.

 

This reality points to the need for ongoing security training and awareness efforts. When it comes to the security of your systems, applications, and data, all employees are on the front lines.

 

3. Build a secure infrastructure.

Cloud security is a multi-layered problem that requires multiple layers of security at both the client and the data center level. Some of these layers overlap, such as network firewalls and intrusion prevention systems that help protect both client and server systems.

 

At the client level, you want to take all the usual steps, such as requiring all client systems to run anti-malware software that automatically updates itself on a regular basis and is optimized for the client to minimize system performance impact.

 

At the data center level, you need to put trusted compute pools in place to create a security foundation. This hardware-level security is enabled by technologies such as Intel® Trusted Execution Technology (Intel® TXT), which protects IT infrastructure against software-based attacks. It does this by checking the consistency in behaviors and launch-time configurations against a “known good” sequence.

 

Complement this launch-time security with a well-coordinated approach to security across your network, servers, data, and storage that helps you identify and stop attacks in real time. By connecting policies and controls across physical, virtual, and cloud infrastructures, your data center team can enable secure, elastic, on-demand services without compromising compliance or jeopardizing availability.

 

While they may seem obvious, these simple steps are extremely important. If you haven’t fully covered them, you’ve got holes in your cloud security strategy.

 

We’ll talk more on this at the data center track session on Oct. 20 at 2:30 p.m. at the FOCUS event. In the meantime, push forward with your security efforts.

Writing this as I sit in Steve Shaw's session on Oracle performance optimization on Intel Xeon.  Steve really understands the low-level interaction of the Oracle database and the Intel Xeon server platform.  I suggest you review his material to better understand how to take advantage of Xeon features like hyperthreading and Turbo Boost to enhance Oracle DBMS performance.  Some of his suggestions aren't obvious, and some are even counter-intuitive, but all are important if you want to get the best possible performance out of an Oracle database running on the Xeon processor platform.

 

Speaking of Oracle on Xeon, last night brought some interesting news, with Larry's announcement of the new Oracle Exalytics in-memory database appliance.

 

The appliance is based on a 4-socket Xeon E7 server, equipped with 1TB of main memory and the new Exalytics software stack. That stack is built from new versions of two long-standing Oracle software products: the TimesTen in-memory database and the Essbase MOLAP engine.

 

I haven't worked with this appliance in a production environment yet, or talked to anyone who has, so please consider these comments preliminary.

 

I've spoken elsewhere on the importance of in-memory techniques generally, and the Exalytics appliance is consistent with those comments.  Instead of spending much of its processing time waiting on storage I/O operations to complete and poking around inside of disk-oriented data management structures, Exalytics can simply treat memory as the primary data repository, which allows everything in the database to be addressed directly in software.  In essence, the entire contents of the database becomes a gigantic multi-dimensional array structure on which software can act directly.
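To make that idea concrete, here is a tiny conceptual sketch in Python with NumPy; it has nothing to do with Exalytics internals, and the dimensions and figures are invented. The point is simply that once the data lives in memory as a multi-dimensional array, slicing and aggregating along any dimension is a direct in-memory operation rather than a series of disk reads.

    import numpy as np

    # Hypothetical sales "cube": 12 months x 4 regions x 100 products,
    # held entirely in memory as a single multi-dimensional array.
    rng = np.random.default_rng(0)
    cube = rng.integers(0, 1000, size=(12, 4, 100))

    # Aggregations become direct array operations -- no storage I/O involved.
    sales_by_month = cube.sum(axis=(1, 2))       # total per month
    sales_by_region = cube.sum(axis=(0, 2))      # total per region
    q1_by_region = cube[0:3].sum(axis=(0, 2))    # Jan-Mar totals per region

    print(sales_by_month)
    print(sales_by_region)
    print(q1_by_region)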

 

The result, as Mr. Ellison so exuberantly put it last night, is "insight at the speed of thought".  (Which sounds good, but neurons are actually quite slow compared to transistors, so if I were being humorous about it, I'd be tempted to say it "provides answers before you can think of the question"!)

 

Which is actually not far from the truth, since it turns out that the new UI that Oracle has built for Exalytics attempts to predict what question you're asking as you enter the query, and displays results based on those guesses as you type.  I could imagine that being somewhat disconcerting, but I could also see it being stimulating to new ideas and insights.
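I have no visibility into how Oracle actually implements that prediction, but conceptually it sits in the same family as autocomplete: match what has been typed so far against a set of known questions and surface the best candidates immediately. Here is a toy sketch of the idea, with entirely made-up question strings and nothing Exalytics-specific:

    # Toy predict-as-you-type sketch: rank stored questions by whether they
    # contain the partial input. Purely illustrative, not Oracle's method.
    KNOWN_QUESTIONS = [
        "total revenue by region last quarter",
        "top 10 products by margin",
        "revenue trend by month for EMEA",
    ]

    def suggest(partial, limit=3):
        partial = partial.lower().strip()
        return [q for q in KNOWN_QUESTIONS if partial in q][:limit]

    print(suggest("revenue"))  # candidate questions appear as you type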

 

With all of the excitement surrounding SAP's in-memory analytics efforts and HANA, it was inevitable that Oracle would come up with something in response.  Now we know what it is, and it looks pretty intriguing, to say the least.  Since both HANA and the Exalytics Appliance are based on the E7 Xeon processor, customers can be confident that regardless of the choice they make, they're running on the best processor for in-memory analytics on the planet.

 

Customers who have deployed SAP on Oracle will now have two distinct choices for taking advantage of in-memory techniques to accelerate analytics on their SAP data.  Oracle will likely make the case for customers to migrate to Oracle on Exadata, accompanied by Exalytics, since Exalytics can take direct advantage of Exadata's hybrid columnar compression and 40Gb InfiniBand fabric to accelerate in-memory data loading.

 

SAP, on the other hand, is likely to suggest that customers are better served by adding a HANA appliance to their existing environment specifically to accelerate analytics without having to change or upgrade their Oracle environment at all.  And in the future, as HANA gains support for core OLTP functionality, SAP will likely suggest that customers ought to consolidate everything on HANA.

 

I can see advantages and disadvantages to either approach.  But I know it's good for customers to have Oracle and SAP contending for their business, and I know it's comforting to customers to know that no matter which approach they choose, it will run best on the Xeon E7 server platform.

It’s that time of year again, when Oracle Open World takes over (overwhelms) the city of San Francisco. The big opening event is always Larry’s keynote on Sunday evening, when he announces new products, benchmarks, and his personal perspective on the world of technology. I also expect we’ll see a much higher profile from Mark Hurd, now that he’s been on board for about a year.

 

Kirk Skaugen will deliver the Intel keynote, “Datacenter 2015 and the Impact of Intelligent Devices,” on Tuesday at 1:30 p.m. in the Novellus Theater at Yerba Buena Center for the Arts. (Remember: you need to exit Moscone and walk over to the theater!)


While there is the usual lineup of additional CEO keynotes, Intel will follow up with a robust roster of more detailed enterprise presentations, along with a number of customer events focused on Exalogic and Exadata. I’ve included some of my favorites below:

 

 

OW11_Intel_Schedule.png

 

 

We’ll also have a full booth on the show floor with a large number of partner demos and regular presentations.

 


Demos and descriptions:

  • Security for DB (End-to-End Oracle Security): Two demos, one on Oracle Firewall and the other on AES-NI with Oracle 11gR2 on Oracle Sun Fire
  • Oracle Solutions with Intel Ethernet and Intel Storage: Unified networking and storage for the virtual data center
  • Cisco Unified Computing System: Oracle E-Business Suite deployment on UCS, showing basic transaction processing with dynamic resource allocation
  • Exploiting System x Memory for Oracle’s Database: Take full advantage of Oracle solutions with IBM System x features
  • Fujitsu Compute Continuum Story: Showcase of the Q550 client with a new Xeon server driving the backend for a complete compute continuum
  • Oracle Exadata X2-8: The latest Oracle solution
  • Solid State Oracle RAC: Oracle RAC using the latest Intel processors and SSDs to demonstrate high performance and flexibility with unprecedented density and ROI
  • SAP ERP with Oracle DB in a Cloud Environment: SAP ERP with Oracle Database on VMware cloud infrastructure on Intel Xeon E7
  • HP & Intel Partner to Deliver Enterprise Solutions: Showcase of the HP DL980 server with Intel E7 ten-core processors running Oracle Linux
  • Dell E-Business Suite in a Private Cloud: The latest Intel hardware with dynamic resource allocation and integration into the cloud

 

All of this and more will be at the Intel booth.

 

Remember to listen to Chip Chat Live on Monday and Tuesday, and stop by to see us if you’re at the show!
