I have been blogging about mission critical technology for the last year or so, but more recently you may have noticed a subtle change in my focus. At the end of last year I officially moved from managing the Mission Critical Server Business to a new position driving Enterprise Software Strategy for the Datacenter and Connected Systems Group at Intel.

 

This involves working with our key independent software vendor (ISV) partners at an exciting time in the evolution of software and solutions.  Since I’ve spent most of my career in Mission Critical and Software, this isn’t a big change from my point of view.  What surprises a lot of people is that the job is at Intel, which isn’t generally known as a software company.

 

People who stop to think about it for a minute will know that Intel has provided compilers, debuggers and parallel programming tools in support of its chips.  They might also know that Intel is a solid contributor to open source Linux, again in support of its chips. Last year Intel made news with its purchase of McAfee, making Intel one of the world’s 10 largest software companies.

 

My reason for changing jobs is twofold. First, in my former Mission Critical role, I was spending close to half of my time with the large ISV and database software partners, who are key to providing business solutions. Over time I realized that we were knee deep in what I characterize as the next major evolution of the software and server business.  That’s the second reason for the change.  The confluence of Moore’s Law, enabling amazing price/performance computing solutions delivered by companies like Intel, with this new software is driving a renaissance in computing solutions, led by cloud, open source, big data, and everything-everywhere mobile applications.  This will no doubt give rise to an entire new generation of software companies, fueled by venture capital investments and highly valued due to anticipation (which usually exceeds reality) of future success.

 

Of course nowhere in the above do solutions from established ISVs jump out at you. But I’ve been around long enough to know you don’t throw the baby out with the bath water, and one only has to look at how stalwarts like IBM have morphed into primarily a services and software company, and how Oracle moved from debunking NoSQL to announcing a product. Thus in the interim, I believe we will see efforts to create mashups of the old and the new, enabling the best of both worlds for customers eager to deploy cutting-edge big data solutions in the real world. What survives in the longer term is anybody’s guess, and that’s what makes it so exciting.  I started my career as a software engineer, so maybe I’m going back to my roots!


There were many announcements during IDF Brazil 2012 held in São Paulo, and I was there, not only as an attendee but also as a speaker on “Rethinking Information Security” and “Addressing Cloud Challenges.”  It was the first time that Intel brought IDF to Brazil, and it brought important announcements too: over the next five years, Intel will invest in Brazil almost as much as it did over the last 25 years, about $1 billion.

 

The biggest part of this show was Ultrabooks, tablets and smartphones in many designs, form factors, OEMs and applications. More interestingly, there was also a vending machine totally redesigned for Coca-Cola, with a big interactive touch screen bringing a completely new connected experience with social networks and smart display technology for shopping, all with Intel Inside™. There were also tons of other innovative applications of this technology to improve our lives. One of these applications was an “automatic pilot” for cars in traffic…

 

It’s not unusual in big cities with high population density for the average speed during rush hour to be less than 12 mph (20 km/h), or even worse at a stop-and-go pace. So imagine if your car could communicate with other cars and the traffic system to get you where you need to be as fast as possible. This is wonderful technology that could change our lives: imagine hands-free transit without public transit.

Here are some highlights of the other great items from the keynote and the event.

 

I would also like to highlight the collaboration agreement announced during IDF with Banco do Brasil to improve the security of its online banking services using Protected Transaction Display (PTD). In my session “Rethinking Information Security,” I demonstrated a prototype of Banco do Brasil online banking prepared by the bank’s security team, who did an awesome job of getting the demo ready on time. They really are a team of talented people and passionate technologists.

On the cloud computing side, there were technical sessions on mission critical solutions for the cloud, Infrastructure as a Service, and so on. I had the honor to share my session with Gilberto Mautner, CEO of Locaweb, a talented and brilliant executive.

 

I learned a great deal from his presentation on his experience leading Locaweb and his point of view on the cloud computing market.  It was an excellent opportunity, and come IDF 2013, I’ll be there.

 

Best Regards!



Mercator Ocean wanted an R&D computing system that would let it conduct high-resolution simulations while reducing costs and enhancing performance. It deployed Dell PowerEdge* servers with Intel® Xeon® processors to deliver the highest performance for code simulating the complexities of the physical state of the world’s oceans.


Performance rose around 600 percent due to increased server density and enhanced processing power. Total cost of ownership went down because of lower power and cooling costs.


“We’ve greatly increased processing power and performance while significantly reducing power and cooling costs,” explained Bertrand Ferret, head of Mercator Ocean’s IT department.


To learn more, download the new Mercator Ocean business success story. As always, you can find many more like this one on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes.  And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

 

*Other names and brands may be claimed as the property of others.

The Xeon E5 family launch brought the next generation of Intel-based Dell servers to customers, and the Dell PowerEdge family of servers has fully adopted Intel Node Manager technology across its breadth of platforms.  So not only do you get the newest Intel Xeon E5-2600 series processor technologies, you also get the granular capability to manage your datacenter on a per-server basis.

 

Dell recognized Intel Node Manager technology and the Intel Data Center Manager SDK as a catalyst to help customers manage power in the datacenter. Dell merged the technologies into the PowerEdge platform and created Dell OpenManage Power Center to give you a simple yet powerful interface to control your Dell PowerEdge servers en masse.

 

And now that you’ve purchased your Dell PowerEdge 12G servers, you can simply download Dell OpenManage Power Center for free to start monitoring and managing power in your server, rack, row, or entire data center.  Installation is very simple, and you can start scanning for PowerEdge systems within a few minutes of logging into the console.  Customers are always looking for ways to do simple management, and when the OEM offers a console to pair up with your server platform, it makes data gathering that much easier.

 

As you can see in the screenshot below, Power Center walks you through some very simple setup steps; you're able to scan for systems with no impact to your environment and can start collecting data within a few minutes.

 

dell_blog_pic_1.png

 

An Intel Cloud Builders reference architecture was created to assist customers with Dell PowerEdge systems.  The PowerEdge servers come with the iDRAC interface, which is the main access point for Power Center.  Power Center is a free software download, and I recommend checking it out if you are using a Dell PowerEdge 12G or 11G system with iDRAC capabilities.  Here's a screen capture of a dashboard view in Power Center showing rack power usage and temperature in the data center.


  dell_blog_pic_2.png

 

Dell OMPC gives you control of multiple systems at scale, with the capability to deploy, manage and control your data center power.  It takes what was once a very manual task and turns it into an easy access point for critical data in your data center, with granular control over multiple systems.  Gathering this information is critical to knowing your power footprint and being able to predict and allocate the appropriate power budget for your data center.

 

Here is the link to the Intel Cloud Builder Reference Architecture for Dell Open Manage Power Center:

Intel® Cloud Builders Guide: Dell* OpenManage* Power Center
Built on the Intel® Xeon® Processor E5-2600 Product Family

Intel today announced four new channel-ready dual-socket motherboards to support the just-released Intel® Xeon® processor E5-2400 product family.

 

The new boards take advantage of core features of the Intel Xeon processor E5 family, such as Intel Integrated I/O, Intel Turbo Boost technology 2.0 and Intel Trusted Execution Technology. They also address a range of common customer scenarios, from small and medium businesses (SMBs) planning to roll out their first servers to larger, budget-minded businesses looking at enterprise, cloud and high performance computing deployments.

 

Today’s announcement underscores Intel’s commitment to provide channel partners with the industry’s most extensive server motherboard product portfolio and offers several powerful, flexible and efficient entry level options for channel partners looking to drive new business.

The new dual-socket motherboard offerings include:

 

  • Intel® Server Board S2400SC:  A cost-effective value solution for demanding first-server environments, this board supports Intel® Server Management and has expanded I/O with 32 total PCIe* lanes, including 28 PCIe Gen3.
  • Intel® Server Board S2400GP: Ideal for mid-sized businesses, the S2400GP includes mainstream 12 DIMM support and flexible I/O with 48 PCIe Gen3 lanes.
  • Intel® Server Board S2400BB: Designed for rack-mount environments, the S2400BB features 12 DIMM support on 6 memory channels and mainstream 48 PCIe Gen3 I/O lanes, and also provides great configuration flexibility, including multiple boot options like DOM and mSATA modules. It is very well suited for enterprise and cloud computing deployments.
  • Intel® Server Board S2400LP: A half-width form factor, the S2400LP will appeal to organizations looking for a cost-effective solution for high performance computing (HPC) and high demand cloud deployments.  The board is price and performance optimized with Infiniband* and Dual Gigabit Ethernet capabilities.

 

The new boards also come with a range of innovative Intel software and services, providing the channel with more ways to boost service offerings, grow revenue and enhance their partner position with end customers. The offerings include:

 

  • The Intel Server Continuity Suite, which delivers software for managing both Intel® Rack and Pedestal Server and Intel® Modular Server products. It offers the channel and its SMB customers a low-cost, easy-to-use server management solution that provides real-time continuous backup, point-and-click systems management, and easy-to-deploy virtualization capabilities.
  • Intel® On-site Repair for Servers, which provides next-day onsite repair, is available in all 50 U.S. states and select Canadian cities.
  • Intel® Server Component Extended Warranty, which adds two more years to the standard three-year warranty.

 

The new entry-level motherboards can be ordered immediately and are expected to ship by the summer. For more information, please go to The Intel Server Edge.

Three Taiwanese companies are building their IT—and their businesses—with the Intel® Xeon® processor 5600 series:

  • Blackmagic Design gets the power of efficient, compact, and high-performance rendering with the Intel® Xeon® processor 5600 series.
  • Panasonic Taiwan builds an energy- and cost-efficient enhanced virtualized environment through high-performance servers based on the Intel Xeon processor 5600 series.
  • Taiwan Taxi uses a cloud computing system powered by the Intel Xeon processor 5600 series to give passengers and drivers a comfortable and safe riding experience.

Designed for industry-leading performance and maximum energy efficiency, the Intel Xeon processor 5600 series delivers versatile one-way and two-way 64-bit multi-core servers and workstations that are ideal for a wide range of infrastructure, cloud, high-density, and high-performance computing (HPC) applications.


As always, you can find many more success stories like these on the Intel.com Business Success Stories for IT Managers page  or the Business Success Stories for IT Managers channel on iTunes.  And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

By Jim Pappas

 

The SSD Form Factor Working Group has announced its Enterprise SSD Form Factor 1.0 specification, defining a new standard connector which for the first time includes PCI Express as an interface to attach 2.5"/3.5" SSDs to computer systems.  Intel was one of five promoter companies, along with more than 50 contributor companies, that worked to define the technology and write this specification. What really differentiates this announcement from most other standardization efforts is the amount of cooperation across a large number of standards organizations.

CONNECTOR.jpg

 

In addition to supporting PCI Express, this new specification also supports existing interfaces already widely used in the industry. Representatives from the SATA-IO International Organization, SCSI Trade Association, ANSI T10, PCI SIG and SFF organizations all worked cooperatively to define this specification. Rarely does the industry see such widespread cooperation amongst such organizations. The benefit to end users is the enablement of a single storage interface, which accepts SSD drives with almost any interface, giving users greater choice when configuring their computer environments. It is expected that this work will become the base for the emerging SFF-8639 specification.

 

Jim Pappas is a Director of Technology Initiatives at Intel.

 

Please note: A version of this article originally appeared on The Data Center Journal.

 

 

 

Every server in a data center runs under an allotted power cap that is programmed to withstand peak-hour power consumption. When an unexpected event causes a power spike, however, data center managers can be faced with serious problems. For example, in the summer of 2011, unusually high temperatures in Texas created havoc in data centers. The increased operation of air conditioning units affected data center servers that were already running close to capacity.

 

Preparedness for unexpected power events requires the ability to rapidly identify the individual servers at risk of power overload or failure. A variety of proactive energy management best practices can not only provide insights into the power patterns leading up to problematic events, but can offer remedial controls that avoid equipment failures and service disruptions.

 

 

Best Practice: Gaining Real-Time Visibility


Dealing with power surges requires a full understanding of your nominal data center power and thermal conditions. Unfortunately, many facilities and IT teams have only minimal monitoring in place, often focusing solely on return air temperature at the air-conditioning units.

 

The first step toward efficient energy management is to take advantage of all the power and thermal data provided by today’s hardware. This includes real-time server inlet temperatures and power consumption data from rack servers, blade servers, and the power-distribution units (PDUs) and uninterruptible power supplies (UPSs) related to those servers. Data center energy monitoring solutions are available for aggregating this hardware data and for providing views of conditions at the individual server or rack level, or for user-defined groups of devices.
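To make the aggregation step concrete, here is a minimal Python sketch. The Reading fields, sample values, and grouping scheme are assumptions for illustration, not any particular vendor's API; a real deployment would pull these readings over IPMI, Redfish, or a vendor SDK.

```python
# Hypothetical sketch: roll per-device power/thermal readings up to racks.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    device: str        # server, PDU, or UPS identifier
    rack: str          # physical location used for grouping
    inlet_temp_c: float
    power_w: float

def aggregate_by_rack(readings):
    """Return per-rack power totals and inlet-temperature summaries."""
    racks = {}
    for r in readings:
        racks.setdefault(r.rack, []).append(r)
    return {
        rack: {
            "total_power_w": sum(r.power_w for r in rs),
            "avg_inlet_temp_c": mean(r.inlet_temp_c for r in rs),
            "max_inlet_temp_c": max(r.inlet_temp_c for r in rs),
        }
        for rack, rs in racks.items()
    }

# Example: two servers in rack A, one in rack B (values are illustrative).
samples = [
    Reading("srv-01", "A", 24.5, 310.0),
    Reading("srv-02", "A", 27.1, 295.0),
    Reading("srv-03", "B", 22.0, 180.0),
]
print(aggregate_by_rack(samples))
```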

 

Unlike predictive models that are based on static data sets, real-time energy monitoring solutions can uncover hot spots and computer room air handler (CRAH) failures early, when proactive actions can still be taken.

 

By aggregating server inlet temperatures, an energy monitoring solution can help data center managers create real-time thermal maps of the data center. The solutions can also feed data into logs to be used for trending analysis as well as in-depth airflow studies for improving thermal profiles and for avoiding over- or undercooling. With adequate granularity and accuracy, an energy monitoring solution makes it possible to fine-tune power and cooling systems, instead of necessitating designs to accommodate the worst-case or spike conditions.
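Continuing the same hypothetical sketch, the trend-logging side can be as simple as appending timestamped rack-level rows for later analysis; the CSV layout here is an assumption, since real monitoring products use their own data stores.

```python
# Hypothetical sketch: append rack-level views to a CSV trend log that
# later airflow studies and thermal-map analysis can read back.
import csv
import time

def log_trends(path, rack_views):
    """Write one timestamped row per rack: time, rack, temps, power."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for rack, v in rack_views.items():
            writer.writerow([
                int(time.time()), rack,
                v["avg_inlet_temp_c"], v["max_inlet_temp_c"],
                v["total_power_w"],
            ])
```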

 

 

Best Practice: Shifting From Reactive to Proactive Energy Management


Accurate, real-time power and thermal usage data also makes it possible to set thresholds and alerts, and to introduce controls that enforce policies for optimized service and efficiency. Real-time server data provides immediate feedback about power and thermal conditions that can affect server performance and, ultimately, end-user services.
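As a hedged illustration, threshold checking over the rack views from the earlier sketch might look like the following; the numeric thresholds are placeholders, to be set from your equipment specifications and cooling guidance rather than taken as recommendations.

```python
# Hypothetical sketch: raise alerts when a rack approaches its thermal or
# power limits. Threshold values below are placeholders, not guidance.
INLET_TEMP_ALERT_C = 32.0     # example hot-spot warning level
RACK_POWER_ALERT_W = 8000.0   # example rack power budget warning

def check_thresholds(rack_views):
    alerts = []
    for rack, v in rack_views.items():
        if v["max_inlet_temp_c"] >= INLET_TEMP_ALERT_C:
            alerts.append(
                f"rack {rack}: inlet temp {v['max_inlet_temp_c']:.1f} C")
        if v["total_power_w"] >= RACK_POWER_ALERT_W:
            alerts.append(
                f"rack {rack}: power {v['total_power_w']:.0f} W near budget")
    return alerts
```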

 

Proactively identifying hot spots before they reach critical levels allows data center managers to take preventative actions and also creates a foundation for the following:

 

  • Managing and billing for services based on actual energy use
  • Automating actions relating to power management in order to minimize the impact on IT or facilities teams
  • Integrating data center energy management with other data center and facilities management consoles

 

 

Best Practice: Non-Invasive Monitoring


To avoid affecting the servers and end-user services, data center managers should look for energy management solutions that support agentless operation. Advanced solutions facilitate integration, with full support for Web Services Description Language (WSDL) APIs, and they can coexist with other applications on the designated host server or virtual machine.

 

Today’s regulated data centers also require that an energy management solution offer APIs designed for secure communications with managed nodes.

 

 

Best Practice: Holistic Energy Optimization


Real-time monitoring provides a solid foundation for energy controls, and state-of-the-art energy management systems enable dynamic adjustment of the internal power states of data center servers. The control functions support the optimal balance of server performance and power—and keep power under the cap to avoid spikes that would otherwise exceed equipment limits or energy budgets.
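One way to picture such a control function is the sketch below of a group-level capping loop. Here read_power and set_power_limit are stand-ins for whatever your platform actually exposes (for example, Intel Node Manager policies surfaced through a management console), not real API calls.

```python
# Hypothetical sketch: keep a group of servers under a shared power cap by
# tightening per-server limits as total draw approaches the cap.
import time

def enforce_group_cap(servers, group_cap_w, read_power, set_power_limit,
                      headroom=0.95, interval_s=30):
    while True:
        draws = {s: read_power(s) for s in servers}
        total = sum(draws.values())
        if total > group_cap_w * headroom:
            # Scale every server's limit down in proportion to its draw.
            scale = (group_cap_w * headroom) / total
            for s, w in draws.items():
                set_power_limit(s, w * scale)
        else:
            # Comfortably under the cap: remove per-server limits.
            for s in servers:
                set_power_limit(s, None)
        time.sleep(interval_s)
```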

 

Intelligent aggregation of data center power and thermal data can be used to drive optimal power management policies across servers and storage area networks. In real-world use cases, intelligent energy management solutions are producing 20–40 percent reductions in energy waste.

 

These increases in efficiency ameliorate the conditions that may lead to power spikes, and they also enable other high-value benefits including prolonged business continuity (by up to 25 percent) when a power outage occurs. Power can also be allocated on a priority basis during an outage, giving maximum protection to business-critical services.

 

Intelligent power management for servers can also dramatically increase rack density without exceeding existing rack-level power caps. Some companies are also using intelligent energy management approaches to introduce power-based metering and energy cost charge-backs to motivate conservation and more fairly assign costs to organizational units.

 

 

Best Practice: Decreasing Data Center Power Without Affecting Performance


A crude energy management solution might mitigate power surges by simply capping the power consumption of individual servers or groups of servers. But because performance is directly tied to power, an intelligent energy management solution instead dynamically balances power and performance in accordance with the priorities set by the particular business.

 

The features required for fine-tuning power in relation to server performance include real-time monitoring of actual power consumption and the ability to maintain maximum performance by dynamically adjusting the processor operating frequencies. This requires a tightly integrated solution that can interact with the server operating system or hypervisor using threshold alerts.
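A simplified version of that feedback loop is sketched below, with discrete frequency steps standing in for whatever P-state mechanism the OS, hypervisor, or baseboard controller actually exposes; all names and values are illustrative.

```python
# Hypothetical sketch: step processor frequency down while a server is over
# its soft power limit, and back up once it has comfortable headroom.
P_STATES_MHZ = [2600, 2300, 2000, 1700]  # highest to lowest, illustrative

def adjust_frequency(power_w, soft_limit_w, current_idx, set_frequency):
    """Return the new P-state index after one control step."""
    if power_w > soft_limit_w and current_idx < len(P_STATES_MHZ) - 1:
        current_idx += 1   # shed power at some performance cost
    elif power_w < soft_limit_w * 0.85 and current_idx > 0:
        current_idx -= 1   # restore performance when safely under limit
    set_frequency(P_STATES_MHZ[current_idx])
    return current_idx
```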

 

Field tests of state-of-the-art energy management solutions have proven the efficacy of an intelligent approach for lowering server power consumption by as much as 20 percent without reducing performance. At BMW Group,[1] for example, a proof-of-concept exercise determined that an energy management solution could lower consumption by 18 percent and increase server efficiency by approximately 19 percent.

 

Similarly, by adjusting the performance levels, data center managers can more dramatically lower power to mitigate periods of power surges or to adjust server allocations on the basis of workloads and priorities.

 

 

Conclusions


Today, the motivations for avoiding power spikes include improving the reliability of data center services and curbing runaway energy costs. In the future, energy management will likely become more critical with the consumerization of IT, cloud computing and other trends that put increased service—and, correspondingly, energy—demands on the data center.

Bottom line, intelligent energy management is a critical first step to gaining control of the fastest-increasing operating cost for the data center. Plus, it puts a data center on a transition path towards more comprehensive IT asset management. Besides avoiding power spikes, energy management solutions provide in-depth knowledge for data center “right-sizing” and accurate equipment scheduling to meet workload demands.

Power data can also contribute to more-efficient cooling and air-flow designs and to space analysis for site expansion studies. Power is at the heart of optimized resource balancing in the data center; as such, the intelligent monitoring and management of power typically yields significant ROI for best-in-class energy management technology.


[1] White paper, proof of concept at BMW Group, “Preserving Performance While Saving Power Using Intel Intelligent Power Node Manager and Intel Data Center Manager,” http://software.intel.com/sites/datacentermanager/whitepaper.php

To find out more, read the entire article at http://www.datacenterjournal.com/facilities/driving-under-the-limit-data-center-practices-that-mitigate-power-spikes/

Used with permission from The Data Center Journal (www.datacenterjournal.com) – EDM2R Enterprises, Inc., Copyright 2012. All rights reserved.

Two financial services companies are using the top-of-the-line Intel® Xeon® processor E7 family to get record-breaking performance and scalability for their mission-critical challenges.


For example, Helvetia Group created a standardized and centralized data center model that was more scalable and agile than its existing distributed approach. It has cut average provisioning times in half and increased virtualization from 65 to 85 percent, reducing server racks from six to half a rack.


The Vontobel Group needed to boost the performance of the core banking platform on which all its customer and business interactions are based. It migrated from a RISC platform to x86, powered by the Intel Xeon processor E7-4800 and Intel Xeon processor 5600 series. This has boosted application performance by a factor of three while reducing costs and enhancing manageability.


To learn more, download the new Helvetia Group and Vontobel Group business case studies. As always, you can find many more like these on the Intel.com Business Success Stories for IT Managers page or the Business Success Stories for IT Managers channel on iTunes. And to keep up to date on the latest business success stories, follow ReferenceRoom on Twitter.

Two questions never fail to come up whenever I’m talking about cloud computing:  What are best practices for cloud security, and what are you Intel folks doing together with McAfee to address it?  So when we commissioned a survey on IT perspectives on cloud security, I didn’t think that I’d find too many surprises.  Seeing that 87% of companies surveyed said they had substantial concerns regarding public cloud security certainly didn’t surprise me, but the fact that 69% had similar levels of concern around private clouds did.

 

While security obviously isn’t just a challenge for public clouds, 65% of respondents believed they had experienced a higher number of security breaches in public clouds than in private ones.  I know many of the leading cloud service providers in the industry, and they do a very solid job of managing security and continuously enhancing their features.  But regardless of whether their security feature set is superior to the average enterprise’s, when it comes to purchasing decisions, perception is reality, and apparently we need to help build confidence in IT’s use of public cloud services.

 

To address this need, we’ve been working with McAfee to develop combinations of Intel hardware-enabled features that are exposed and managed by McAfee tools to enhance the security capability of both public and private clouds.  In fact, we’ve taken on the joint mission to make security in the cloud equal to or better than best-in-class enterprise security.

 

As an example of some of the capability we’re jointly enabling, we want to enable secure, trusted server pools and allow policies and access tools to recognize when those servers have been secured.  At Intel, we’ve enabled Trusted Execution Technology (TXT) in our latest Xeon E5-based platforms.  This allows virtual environments to boot with hardware-enhanced security features.  We’ve worked with Trapezoid Digital Security to demonstrate how TXT can be combined with McAfee ePolicy Orchestrator to manage permissions based on whether a server has an established hardware root of trust.  This is just one of the elements that we’re highlighting in our joint McAfee and Intel security briefing today.  You can see some of the other solutions and highlights at www.intel.com/cloudsecurity.
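To sketch the idea in code (emphatically not the actual TXT or ePolicy Orchestrator APIs), a scheduler that gates workload placement on attestation could look like this; attest() is a stand-in for a real attestation service.

```python
# Hypothetical sketch: only place workloads on hosts whose hardware root
# of trust has been attested (e.g., a verified measured launch via TXT).
def trusted_pool(hosts, attest):
    """attest(host) -> True if the platform booted into a trusted state."""
    return [h for h in hosts if attest(h)]

def place_workload(workload, hosts, attest):
    pool = trusted_pool(hosts, attest)
    if not pool:
        raise RuntimeError(f"no attested hosts available for {workload}")
    # A real scheduler would also weigh load and policy; we take the first.
    return pool[0]
```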

 

Want to hear more or see how some of your peers are addressing cloud security?  Then join me at Forecast 2012 – a unique event led by the Open Data Center Alliance (a group of over 300 datacenter and IT professionals) – where both your peers and solutions providers will share their latest thinking on cloud security and best practices.

By: Jason Blosil, Product Marketing Manager, NetApp

 

Jason Blosil has over 15 years of industry experience in finance, marketing, and product management. He is employed at NetApp as a product marketing manager in the core software group and volunteers as the chair of SNIA’s Ethernet Storage Forum (ESF).

 

 

 

Ethernet network technology originated in the ’80s, back when I was sporting a Members Only jacket and a feathered haircut. Since that time, Ethernet has evolved into the de facto standard for local area networks (LANs) and is now establishing a stronger position in the data center. Ethernet is evolving, but it has never really gone “out of style.” (My Members Only jacket, on the other hand, long ago made it to the Goodwill bin.)

 

The evolution of Ethernet now includes support for multiple traffic types, such as voice, video, file data, and block data. Ethernet-based storage networks, supporting iSCSI and NAS traffic, enjoy increased adoption in data centers, especially for use with highly virtualized server environments. In terms of market share, traditional Fibre Channel networks still represent the largest market for storage area networks (SANs). However, IP storage networks as a whole are growing at a much faster clip, at the expense of traditional Fibre Channel.  Rather than continue to maintain diverse technologies in the data center, organizations are looking for more efficient ways to manage their sprawling data center infrastructures, and new technologies are needed to make the transition.

 

 

The introduction of Fibre Channel over Ethernet (FCoE) presents an opportunity to consolidate Ethernet and Fibre Channel data center networks onto a single shared 10 Gigabit Ethernet (10GbE) infrastructure, delivering increased efficiency and performance as well as simplified management and lower overall cost. Most implementations of FCoE require dedicated host bus adapters (HBAs) or converged network adapters (CNAs) that run the FCoE protocol stack on an embedded processor. Another approach, however, is to move the FCoE stack onto the server CPU using a native software initiator integrated into the operating system.

 

Open Fibre Channel over Ethernet solution stack diagram

Overview of Open FCoE Initiator Solution with Intel 10GbE CNA

 


 

Intel, a leader in Ethernet networking devices, is pioneering the use of native initiators for FCoE at the host with Open FCoE. Open FCoE follows the same model as iSCSI software initiators, using standard data center bridging (DCB)-enabled 10GbE adapters and CNAs to transport the FCoE protocol, which is generated by a software driver integrated into the operating system or hypervisor and running on the host CPU. Intel is making the bet that the adoption of FCoE will dramatically increase with this design approach, just as it did with iSCSI. This design promises to deliver substantial reductions in cost while also simplifying the management and configuration of FCoE deployments.

 

NetApp and Intel are working closely together to drive 10GbE and FCoE adoption with solutions like Open FCoE and Ethernet storage. Our partnership with Intel benefits from years of development, research, and market leadership. NetApp has been shipping 10GbE storage systems since 2006 and was the first to offer FCoE storage in 2009. In 2010, we were the first to introduce Unified Connect, which includes support for FCoE and IP protocols (iSCSI, NFS, CIFS) over a shared 10GbE wire, making it possible to deploy a converged Ethernet network, end to end, from server to switch to storage. No other vendor can make that claim.

 

Converged data center networking is a reality with many options available in the market. For many, network convergence is still very new, and strong technology partnerships will help eliminate risks and enable successful technology transitions. So get ready. The transition is coming. And we’ll let time tell if IT trends such as the adoption of Open FCoE will be as interesting to observe as fashion trends. Yeah.

Dream_Works_Intel_Kun_Fu_Panda_Ad.png

 

 

I'm a lover of quality animation, so I was excited to have Derek Chan, head of technology for global operations at DreamWorks, on Chip Chat talking about how the technologists at DreamWorks are using cutting-edge technology to provide a world where the only thing holding back animators is the limits of their imaginations.

 

Our talk covered the history of computing in the animation industry and how new generations of server performance, such as Intel's recent launch of the Xeon E5 family of processors, directly translate to new levels of animation brilliance on the screen. He also discussed the intersection of Moore's Law and Shrek's Law...something everyone should hear about.

 

Enjoy the episode and remember to follow @IntelITS for more!
For more check out Intel Chip Chat on Intel.com, iTunes, and SoundCloud.
