
IT Peer Network




“Hello, my name is Jeff Ton and it has been one thousand, two hundred and seventy-two days since I last opened Outlook.”


February 6, 2012: an historic date in Indianapolis, Indiana. Yeah, there was some little football game that night, Super Bowl XLVI - New York Giants against the New England Patriots. But that is not the event that made the date historic (though it was great to watch a Manning beat Brady!). What made that date historic was our go-live on Google Apps, our first step in our Journey to the Cloud.


Now that I have offended everyone from the Pacific Northwest and New England, let me rewind and start at the beginning. In 2010, I arrived at Goodwill Industries of Central Indiana. We were running Microsoft Exchange 2003 coupled with Outlook 2010. Back in the day, the adage was “No one ever got fired for buying IBM”; I was firmly in the “No one ever got fired for buying Microsoft” camp. In fact, when I learned the students in our high school were using Google, I was pretty adamant that they use Office. After all, that is what they would be using when they got jobs!


At about this same time, we were switching from Blackberry to Android-based smartphones. We were having horrible sync problems between Exchange and the Androids using ActiveSync. We desperately needed to upgrade our Exchange environment!


As we were beginning to make plans for upgraded servers to support the upgraded Exchange environment, I attended my first MIT Sloan CIO Symposium in Boston. Despite the fact that I bleed Colts blue, I actually love Boston: the history, the culture, the vibe; but I digress. At the conference I learned about AC vs. CF projects (see: That Project is a Real Cluster to learn more). I could not fathom a more likely CF project than an email upgrade project. Why not look to the cloud? Since we were doing an upgrade anyway, perhaps this would be the LAST email upgrade we would ever have to do!


Enter The Google Whisperer. For months a former colleague-turned-Google-consultant had been telling me we should check out Google as an email platform. Usually my response was “Google? That’s for kids, not an enterprise!” (Ok, now I have offended everyone from Silicon Valley, too!) Every time I saw him, he would bring it up. I finally agreed to attend one of Google’s roadshow presentations. I came away from that event with an entirely different outlook (pun intended) on Google.


We decided to run an A/B pilot. We would convert 30 employees to the Google Apps platform for 60 days. We would then convert the same 30 employees to BPOS (the predecessor to Office 365) for 60 days, and may the best man, er, I mean platform, win. We handpicked the employees for the pilot. I purposely selected many who were staunchly in the Microsoft camp and several others who typically resisted change.


At the end of the pilot an amazing thing happened. Not one person on the pilot team wanted to switch off of Google onto BPOS; in fact, each and every person voted to recommend a Google migration to the Executive Team. Unanimous! When was the last time that ever happened in one of your projects?!!?


The decision made, we launched the project to migrate to the cloud! We leveraged this project to also implement our email retention policy (email is retained for five years). The vast majority of the work in the project involved locating all the .PST files in our environment and moving them to a central location from network file folders, local drives, and yes, even thumb drives and CDs. Once in that central location, they were uploaded to the Google platform. During this time, we also mirrored our email environment so every internal and external email also went to the Google platform in real time.
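If you are facing a similar sweep, much of the PST hunt can be scripted. Here is a minimal sketch in Python, assuming hypothetical share and staging paths (our actual tooling isn't described here), that walks a set of locations, finds .pst files, and copies them to a central staging share ready for upload:

```python
import shutil
from pathlib import Path

# Hypothetical locations to sweep: departmental shares, user profiles, mounted media.
SEARCH_ROOTS = [Path(r"\\fileserver\departments"), Path(r"C:\Users")]
STAGING = Path(r"\\fileserver\pst_staging")  # central location staged for upload

def collect_pst_files(roots, staging):
    staging.mkdir(parents=True, exist_ok=True)
    for root in roots:
        for pst in root.rglob("*.pst"):                         # recursively find PST files
            target = staging / f"{pst.parent.name}_{pst.name}"  # prefix to avoid name collisions
            if not target.exists():
                shutil.copy2(pst, target)                       # copy, preserving timestamps
                print(f"Copied {pst} -> {target}")

if __name__ == "__main__":
    collect_pst_files(SEARCH_ROOTS, STAGING)
```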


The process took about three months, but finally it was Super Bowl Sunday, time for go-live. Now before you think me an ogre of a boss for scheduling a major go-live for Super Bowl Sunday, I should tell you the date of February 6, 2012 was selected by the project team. Their thought? No one is going to be doing email after the game is over. We announced a blackout period of eight hours beginning at midnight to do our conversion. Boy, were we ever wrong about the length of the blackout period! Our conversion that night took about 20 minutes. Twenty minutes, and email was flowing again in and out of the Google environment.


Our implementation included email, contacts, calendar, and groups for three domains. We made the decision to keep the other Google Apps available, but not promote them. We also implemented our five-year archive and optional email encryption for sensitive communications. The other decision we made (ok, I made) was not to allow the use of Outlook to access Gmail. One of the tenets of our strategic plan was “Any time, Any place, Any device”; I felt having a piece of client software violated that tenet and created additional support issues that were not necessary.


We learned several things as a result of the project. First, search is not sort. If you have used Gmail, then you know there is no way to sort your Inbox; it relies instead on the power of Google Search. People really like their sort. It took some real handholding to get them comfortable.
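What helped most during that handholding was showing people a handful of Gmail search operators that cover what they used to rely on sorting for. A few illustrative examples (the addresses and subjects are made up):

```
from:dan@example.org has:attachment        everything Dan sent me with a file attached
newer_than:7d is:unread                    unread mail from the last seven days
subject:"board packet" before:2012/02/06   older mail, narrowed by subject and date
```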


Second, Google Groups are not Distribution Lists. We converted all of our Exchange Distribution Lists to Groups. Yes, they do function in somewhat the same way, however, there are many more settings in Groups, settings that can have unexpected consequences. Consequences like the time our CFO replied to an email that had been sent to a Group, and even though he did not use reply all, his reply went to everyone in the Group! We found that setting very quickly and turned it off! (Sorry Dan!)


The third lesson learned was “You cannot train enough”. Yes, we held many classes during the lead-up to conversion and continued them long afterwards. A lot of the feedback we had heard (“everyone has Gmail at home, we already know how to use it”) led us to believe that once the initial project was complete we didn’t need to continue training. That turned out not to be the case. We recently started a series of Google Workshops to continue the learning process. Honestly, I think some of this is generational. Some love to click on links, watch a video, and then use the new functionality. Others really want a classroom environment. We now offer both.

One of the things that pleasantly surprised us (well, at least me) was the organic adoption of other Google tools. The first shared Google Doc came to me from outside the IT department. The first meeting conducted using Google Hangouts came from the Marketing department. People were finding the apps and falling in love with them.


Today, one thousand, two hundred and seventy-two days later our first step to the cloud is seen as a great accomplishment. It has saved us tens of thousands (if not hundreds of thousands) of dollars, thousands of hours, and has freed up our team to work on those AC projects!


Before I close, I do want to say, we are still a Microsoft shop. We have Office, Windows, Server, SQL Server and many other Microsoft products. This post is not intended to be a promotion of one product over another. As I said in my previous post, your path may be different from ours. For us, a 3,000 employee non-profit, Google was the right choice. You may find it meets your requirements, or you may find another product is a better fit. The point here is not the specific product, but the product’s delivery method...cloud...SaaS. The project was such a resounding success, we changed one of our Application Guiding Principles. We are now “cloud-first” when selecting a new application or upgrading an existing one. In fact, almost all of the applications we have added in the last three and a half years have been SaaS-based, including Workday, Domo, Vonigo, ETO, Facility Dude and more.


Go and Get Your Google On!

Go and get your Google on, later hit your Twitter up

We out workin’ y’all from summer through the winter, bruh

Red eye precision with the speed of a stock car

You’re now tuned in to some Independent Rock Stars


Next month, we will explore a project that did more to take us to a Value-add revenue generating partner than just about any other project. Amplify Your Value: Reap the Rewards!


The series, “Amplify Your Value” explores our five year plan to move from an ad hoc reactionary IT department to a Value-add revenue generating partner. #AmplifyYourValue


We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, Netfor, and CDW. (mentions of partner companies should be considered my personal endorsement based on our experience and on our projects and should NOT be considered an endorsement by my company or its affiliates).


Jeffrey Ton is the SVP of Corporate Connectivity and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.


Find him on LinkedIn.

Follow him on Twitter (@jtongici)

Add him to your circles on Google+

Check out more of his posts on Intel's IT Peer Network

Read more from Jeff on Rivers of Thought

I had the privilege of representing Intel at the Fashion Institute of Technology’s (FIT) Symposium on Omni Retailing in New York in April.

 

And the privilege of listening to several industry leaders and – of great interest – a team of FIT’s top senior students, who presented their vision for the store of tomorrow.

 

Some common threads:

  • We’re living in a world of digital screens – brands can either get on board or get left behind.
  • Brand success is as much about effective storytelling as it is about product and operational efficiency. And the best brands tell their stories across the screens.
  • When it comes to the millennial shopper, it’s about authenticity and trust.

 

And, of course, technology is the thread that runs through it all.

 

Highlights

 

Jennifer Schmidt, Principal and leader of the Americas Apparel Fashion and Luxury practice at McKinsey & Company, emphasized the importance of storytelling in this important global segment. According to Ms. Schmidt, 50 percent of value creation in fashion and luxury is about perception – the ability of a brand to consistently deliver (in every facet of the business) a differentiating, conversation-building, relationship-building story.

 

(Those who joined Dr. Paula Payton’s NRF store tour in January will remember her emphasis on storytelling and narrative).

 

Ms. Schmidt also spoke to three elements of import in her current strategy work:
    • The change in the role of the store – which now shifts from solely emphasizing transactions to brand-building – and with 20-30% fewer doors than before;
    • The change in retail formats – which, in developed world retailing, now take five different shapes: 1) flagship store, 2) free-standing format, 3) mini and urban free-standing, 4) shops within shops and 5) outlet;
    • The importance of international expansion, especially to the PRC and South Asia.

 

Daniella Yacobovsky, co-founder of online jewelry retailer Baublebar, also noted the importance of brand building – and she explained that her brand story is equal parts product and speed. Baublebar works on an eight-week production cycle, achieving previously unheard of turns in jewelry. Data is Ms. Yacobovsky’s friend – she tracks search engine results, web traffic and social media to drive merchandising decisions.

 

And, last but certainly not least: FIT seniors Rebeccah Amos, Julianne Lemon, Rachel Martin and Alison McDermott, winners of FIT’s Experience Design for Millennials Competition, opined on what makes the best brand experience for millennials. Their unequivocal answer – paired with a lot of good, solid retailing advice – was videos and music.

 

It’s not just about entertainment. It’s also an issue of trust and authenticity (does a brand’s playlist resonate with you?), which ultimately leads to brand stickiness.

 

Envision video – and lots of it. On enormous, in-store video walls, on mobile, hand-held devices and on brand YouTube channels. To display products virtually or provide information on how to wear or accessorize them. With in-store video, retailers can orchestrate, curate and simplify, giving shoppers a fast, trusted way to be on trend.

 

Music? The students suggested that every brand needs a music director. Brand-right soundtracks and playlists and connections to the right bands and music events can be powerful influences on today’s largest consumer group.

 

Quite the day.

 

Jon Stine
Global Director, Retail Sales

Intel Corporation

 

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

 

* Other names and brands may be claimed as the property of others.

 

© 2015 Intel Corporation

When the term design is used in mobile business intelligence (BI), it often refers to the user interface (UI). However, when I consider the question of design in developing a mobile BI strategy, I go beyond what a report or dashboard looks like.

 

As I wrote in “Mobile BI” Doesn’t Mean “Mobile-Enabled Reports,” when designing a mobile BI solution, we need to consider all facets of user interactions and take a holistic approach in dealing with all aspects of the user experience. Here are three areas of design to consider when developing a mobile BI strategy.

 

How Should the Mobile BI Assets Be Delivered?

 

In BI, we typically consider three options for the delivery of assets: push, pull, and hybrid. The basic concept of a “push” strategy is similar to ordering a pizza for home delivery. The “users” passively receive the pizza when it’s delivered, and there’s nothing more that they need to actively do in order to enjoy it (ok, maybe they have to pay for it and tip the driver). Similarly, when users access a report with the push strategy, whether through regular e-mail or a mobile BI app, it’s no different from viewing an e-mail message from a colleague.

 

On the other hand, to have pizza with the pull strategy, users need to get into their cars and drive to the pizza place. They must take action and “retrieve the asset.” Likewise, users need to take action to “pull” the latest report and/or data, whether they log on using the app or mobile browser. The hybrid approach employs a combination of both the push and pull methods.

 

Selecting the right delivery system for the right role is critical. For example, the push method may be more valuable for executives and sales teams, who travel frequently and may be short on time. However, data updates are less frequent with the push method, so accessing the latest data can’t be critical if you choose this option. In contrast, the “pull” strategy may be more appropriate for analysts and customer service teams, who depend on the latest data.
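To make that role-based choice concrete, here is a minimal sketch, in Python and with invented role names, of how a delivery policy might be expressed once per audience instead of being decided report by report:

```python
from enum import Enum

class Delivery(Enum):
    PUSH = "push"      # asset is sent to the user (e-mail, app notification)
    PULL = "pull"      # user logs in and retrieves the latest data on demand
    HYBRID = "hybrid"  # summary pushed, detail pulled on demand

# Hypothetical mapping of roles to delivery strategies
ROLE_POLICY = {
    "executive": Delivery.PUSH,        # time-poor, travels, accepts slightly older data
    "field_sales": Delivery.PUSH,
    "analyst": Delivery.PULL,          # needs the latest data at query time
    "customer_service": Delivery.PULL,
    "sales_manager": Delivery.HYBRID,
}

def delivery_for(role: str) -> Delivery:
    # Default to pull: the safer choice when data freshness matters and the role is unknown
    return ROLE_POLICY.get(role, Delivery.PULL)

print(delivery_for("executive"))   # Delivery.PUSH
```

Centralising the mapping means the freshness and security trade-offs are made deliberately, per role, rather than by whoever happens to schedule a given report.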

 

Additional considerations include data security and enterprise mobility. Does the current BI solution or software support both options? Can the integrity of data security be maintained if data assets are delivered outside the demarcation lines (for example, mobile BI report delivered as an attachment to an e-mail)?

 

What Are the Format and Functionality of the Mobile BI Assets?

 

The format deals with the type and category of the asset that is delivered to mobile BI users. What does the end-user receive? Is it a static file in Adobe PDF or Microsoft Excel format with self-contained data, or is it dynamic such as a mobile BI app that employs native device functionality? Is the format limited to data consumption, or does it allow for interactions such as “what-if” scenarios or database write-back capability?

 

If the format supports exploration, what can I do with it? Can I select different data elements at run time as well as different visualization formats? How do I select different values to filter the result sets, like prompts? Does the format support offline viewing? Is the format conducive to collaboration?

 

Does the User Interface Optimize the BI Elements?

 

The UI represents the typical BI elements that are displayed on the screen: page layout, menus, action buttons, orientation, and so on. When you consider the design, decide if the elements really add value or if they’re just pointless visualizations like empty calories in a diet. You want to include just the “meat” of your assets in the UI. More often than not, a simple table with the right highlighting or alerts can do a better job than a colorful pie chart or bar graph.

 

In addition, the UI covers the navigation among different pages and/or components of a BI asset or package. How do the users navigate from one section to another on a dashboard?

 

Bottom Line: Design Is Key for the User Experience

 

The end-to-end mobile BI user experience is a critical component that requires a carefully thought-out design that includes not only soft elements (such as an inviting and engaging UI), but also hard elements (such as the optimal format for the right role and for the right device). Designing the right solution is both art and science.

 

The technical solution needs to be built and delivered based on specifications and following best practices – that’s the science part. How we go about it? That’s the art part. It requires both ingenuity and critical thinking, since not all components of design come with hard-and-fast rules that we can rely on.

 

What other facets of the mobile BI user experience do you include in your design considerations?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

What enables you to do really great work? Motivation to do a good job and belief in what you are doing are important. You also need access to the right tools and resources — be they pen and paper, a complex software package, or your team and their expertise. And you need the freedom to decide how you are going to pull all this together to achieve your goals.

 

I’ve recently seen how Wiltshire Police Force has used technology to bring together the combination of drive, the right tools and the freedom to act. Working with Wiltshire Council, it has developed a new approach to policing that empowers staff members to decide how, when and where they work in order to best serve the local community.

 

The organization deployed 600 tablets and laptop PCs, all powered by Intel® Core™ i5 processors, placing one in each patrol vehicle and giving some to back-office support staff. The devices connect (using 3G) to all the applications and systems the officers need. This allows them to check case reports, look up number plates, take witness statements, record crime scene details, and even fill in HR appraisal forms, from any location.


It’s What You Do, Not Where You Do It


Kier Pritchard is the assistant chief constable who drove the project. He and his team follow the philosophy that “work should be what you do, not where you go”. By giving officers the flexibility to work anywhere, he’s empowering them to focus on doing their jobs, while staying out in the community.

 

“We’re seeing officers set up in a local coffee shop, or the town hall,” he said. “In this way they can keep up to date with their cases, but they’re also more in touch with the citizens they serve.”

 

The other advantage of the new model is that officers can be much more productive. There’s no more driving to and from the station to do administrative tasks. Instead, they can catch up on these in quiet periods during their shift. “This essentially means there’s no downtime at all for our officers now,” said Pritchard.

 

The introduction of this new policing approach has gone down well with Wiltshire’s officers. They’ve taken to the devices enthusiastically and are regularly coming up with their own ways of using them to improve efficiency and collaboration.

 

In addition to making the working day more productive and rewarding for its staff, the mobile devices have also made a big difference to Wiltshire residents. Specialists in different departments of the police force are able to collaborate much more effectively by sharing their findings and resources through an integrated platform, making the experience for citizens much smoother. Areas in which the devices are used have also seen an improvement in crime figures thanks to the increased police presence within the community  — for example in the town of Trowbridge, antisocial behaviour dropped by 15.8 percent, domestic burglaries by 34.1 percent, and vehicle crime by 33 percent.

 

You can read more about how the officers are using the devices to create their own ideal ways of working in this recently published case study or hear about it in the team’s own words in this video. In the meantime, I’d love to hear your views on the role of mobile technology in empowering the workforce — how does it work for you?

 

To continue this conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


Find me on LinkedIn.

Keep up with me on Twitter.


Since Intel IT generated US$351 million in value from Big Data and analytics during 2014, you might wonder how Intel started on the road to reach that milestone. In the presentation “Evolution of Big Data at Intel: Crawl, Walk and Run Approach” from the 2015 Hadoop Summit in San Jose, Gomathy Bala, Director, and Chandhu Yalla, Manager and Architect, talk about Intel IT’s big data journey. They cover its beginning, current use cases and long-term vision. Along the way, they offer some useful information to organizations just starting to explore big data techniques and uses.

 

One key piece of advice that the presenters mention is to start on small, well-defined projects where you can see a clear return. That allows an organization to develop the skills to use Big Data with lower risk and known reward, part of the “crawl” stage from the presentation title. Interestingly enough, Intel IT did not rush out and try to hire people who could immediately start using tools like Hadoop. Instead, they gathered engineers who were passionate about new technology and trained them on those tools. This is the “walk” stage. Finally, with that experience, they developed an architecture to use Big Data techniques more generally. This “run” stage architecture is shown below, where all enterprise data can be analyzed in real time. We will be talking about Intel's Data Lake in an upcoming white paper.

 

Another lesson is to evaluate Hadoop distributions and choose one whose core is open source. This is one of a number of criteria that were established. You can see more on Intel IT’s Hadoop distribution evaluation criteria and how we migrated between Hadoop versions in a previous blog entry.

 

A video of “The Evolution of Big Data at Intel: Crawl, Walk and Run Approach” can be seen here, and the presentation slides are available on SlideShare. A video of Intel CIO Kim Stevenson talking about Intel's use of Big Data is shown in the presentation video, but a clearer version can be found here.

[Figure: Intel IT’s “run” stage big data architecture]


Mobilizing the Field Worker

 

I recently had the opportunity to host an industry panel discussing the business transformation that occurs when mobility solutions are deployed for field workers. Generally speaking, field workers span a spectrum of industries and currently operate in one of four ways: pen and paper, a laptop tethered to a truck, a consumer-grade tablet, or a single-function device like a bar code scanner.

 

Intel currently defines this market as 10 million workers divided into two general categories - hard hat workers and professional services. Hard hat workers generally function in a ruggedized environment – think construction or field repair teams. Professional services includes real estate appraisal, insurance agents, law enforcement, and many others.

 

Field teams are capable of improving customer service, generating new revenue streams, and actively driving cost reductions.  A successful mobile strategy can enable all three.  Regardless of the industry, field workers need access to vital data when they’re not in the office.

 

The panel of experts consisted of system integrators as well as communication, hardware, and security experts. Together, we discussed the elements required for the successful deployment of a mobile solution.

 

The panel comprised Geoff Goetz from BSquare, Nancy Green from Verizon Wireless, and Michael Seawright from Intel Security. They brought a wealth of information, expertise and insight to the panel. I have tried to share the essence of this panel discussion – I am sure I will not do it justice as they were truly outstanding.


The field worker segment represents a great business opportunity. By definition, the field worker is on the front line delivering benefits and services to customers. They rely on having the right information in real time. Frequently, this information is available only through applications and legacy software running back at headquarters. In planning a successful deployment, the enterprise must consider how to connect the field worker to this information. Hardware, applications, and back-office optimizations must all be considered.

 

Geoff Goetz from BSQUARE shared the perspective of both a hardware and system integrator. BSQUARE is a global leader in embedded software and customized hardware solutions. They enable smart connected systems at the device level, which are used by millions every day. BSQUARE offers solutions for mobile field workers across a spectrum of vertical industries. They have worked closely with Microsoft to develop a portfolio of Windows 10 based devices in 5-, 8- and 10-inch form factors. What was interesting to me was the 5-inch Intel-based handheld capable of running Windows 8.1 and soon Windows 10. The Inari5 fills the void for both field workers and IT managers. The Inari5 is a compelling solution that doesn’t compromise on performance or functionality. Geoff and his team truly understand the value of having the right device for the job as well as the software and applications to accelerate an enterprise while achieving the full benefits of mobilizing their field teams.


Nancy Green from Verizon Wireless highlighted the advantages of utilizing an extensive network to deliver connectivity right to the job site. Verizon Wireless offers a full suite of software solutions and technical capabilities to accelerate mobile programs across industries. Verizon delivers upon the value proposition for both the Line of Business manager seeking a competitive advantage, as well as the IT manager looking to easily manage and secure the devices in the field. As I mentioned before, one of the most critical requirements for field workers is access to information. Verizon has worked with numerous companies to unlock workforce optimization by reducing costs, simplifying access to remote data, and increasing collaboration. I was very impressed with the extensive resources Verizon can bring to bear in designing a mobile solution for field workers.


Michael Seawright from Intel Security is an industry advocate who has been successfully leading business transformation with Intel’s fellow travelers for more than 20 years. In a hyper-competitive market, the field worker has the opportunity to drive customer goodwill, address and fix problems the first time, all while driving sell-up.

Meanwhile, many companies are struggling to figure out the right level of management and security for their mobile workforce.

 

One advantage in deploying Intel-based mobile solutions is the built-in security at the processor level.  Ultimately, the device security is only as good as its user’s passwords. The Intel Security team is working to address the vulnerabilities associated with passwords.

 

Ultimately, mobility is a business disruptor offering a chance to transform business processes and gain a competitive advantage. A successful program requires the IT department and its vendors to think beyond the device. It requires a solution approach to successfully manage the development, implementation and rollout. In addition, it may require back-office optimization. The following image depicts my attempt to highlight the architecture framework that should be considered for a mobile program.


[Figure: architecture framework for a mobile field-worker program]

 


If I asked you to play a round of word associations starting with ‘Intel’, I doubt many of you would come back with ‘networking’. Intel is known for a lot of other things, but would it surprise you to know that we’ve been in the networking space for more than 30 years, collaborating with key leaders in the industry? I’m talking computer networking here of course, not the sort that involves small talk in a conference centre bar over wine and blinis. We’ve been part of the network journey from the early Ethernet days, through wireless connectivity, datacentre fabric and on to silicon photonics. And during this time we’ve shipped over 1 billion Ethernet ports.

 

As with many aspects of the move to the software-defined infrastructure, networking is changing – or if it’s not already, it needs to. We’ve spoken in this blog series about the datacentre being traditionally hardware-defined, and this is especially the case with networking. Today, most networks consist of a suite of fixed-function devices – routers, switches, firewalls and the like. This means that the control plane and the data plane are combined with the physical device, making network (re)configuration and management time-consuming, inflexible and complex. As a result, a datacentre that’s otherwise fully equipped with the latest software-defined goodies could still be costly and lumbering. Did you know, for example, that even in today’s leading technology companies, networking managers have weekly meetings to discuss what changes need to be made to the network (due to the global impact even small changes can have), which can then take further weeks to implement? Ideally, these changes should be made within hours or even minutes.

 

So we at Intel (and many of our peers and customers) are looking at how we can take the software-defined approach we’ve used with compute and apply it to the network as well. How, essentially, do we create a virtualised pool of network resources that runs on industry-standard hardware and that we can manage using our friend, the orchestration layer? We need to separate the control plane from the data plane.

 

[Figure: Intel’s 4:1 workload consolidation strategy]

Building virtual foundations

 

The first step in this journey of network liberation is making sure the infrastructure is in place to support it. Historically, traditional industry-standard hardware wasn’t designed to deal with networking workloads, so Intel adopted a 4:1 workload consolidation strategy which uses best practices from the telco industry to optimise the processing core, memory, I/O scalability and performance of a system to meet network requirements. In practice, this means combining general-purpose hardware with specially designed software to effectively and reliably manage network workloads for application, control, packet and signal processing.

 

With this uber-foundation in place, we’re ready to implement our network resource pools, where you can run a previously fixed network function (like a firewall, router or load balancer) on a virtual machine (VM) – just the same as running a database engine on a VM. This is network function virtualisation, or NFV, and it enables you to rapidly stand up a new network function VM, enabling you to meet those hours-and-minutes timescales rather than days-and-weeks. It also effectively and reliably addresses OpEx and manual provisioning challenges associated with a fixed-function network environment in the same way that compute virtualisation did for your server farm. And the stronger your fabric, the faster it’ll work – this is what’s driving many data centre managers to consider upgrading from 10Gb Ethernet, through to 40Gb Ethernet and on to 100Gb Ethernet.
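To give a feel for what “rapidly standing up a new network function VM” can look like from the operator’s side, here is a hedged sketch: a request description handed to an orchestration layer. The field names and the launch function are hypothetical illustrations, not any specific product’s API.

```python
# Hypothetical spec a CSP operator might hand to an NFV orchestrator.
# None of these field names refer to a particular product's API.
firewall_vnf = {
    "name": "edge-firewall-01",
    "image": "vendor-firewall-appliance-3.2",        # packaged network function
    "flavor": {"vcpus": 4, "ram_gb": 8, "nics": 3},
    "networks": ["mgmt", "untrusted", "trusted"],
    "placement": {"zone": "rack-12", "anti_affinity_group": "edge-fw"},
}

def launch(vnf_spec):
    # In a real deployment this would call the orchestration layer;
    # here we simply validate the request and report what would be created.
    assert vnf_spec["flavor"]["vcpus"] > 0 and vnf_spec["networks"]
    print(f"Requesting VM '{vnf_spec['name']}' from image {vnf_spec['image']}")

launch(firewall_vnf)
```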

 

Managing what you’ve built

 

So, hooray! We now have a path to virtualising our network functions, so we can take the rest of the week off, right? Well, not quite. The next area I want to address is software-defined networking (SDN), which is about how you orchestrate and manage your shiny new virtual network resources at a data centre level. It’s often confused with NFV but they’re actually separate and complementary approaches.

 

Again, SDN is nothing new as a concept. Take storage for example – you used to buy a fixed storage appliance, which came with management tools built-in. However, now it’s common to break the management out of the fixed appliance and manage all the resources centrally and from one location. It’s the same with SDN, and you can think of it as “Network Orchestration” in the context of SDI.

 

With SDN, administrators get a number of benefits:

 

  • Agility. They can dynamically adjust network-wide traffic flow to meet changing needs in near real-time.
  • Central management. They can maintain a global view of the network, which appears to applications and policy engines as a single, logical switch.
  • Programmatic configuration. They can configure, manage, secure and optimise network resources quickly, via dynamic, automated SDN programs which they write themselves, making them tailored to the business (see the sketch after this list).
  • Open standards and vendor neutral. They get simplified network design and operation because instructions are provided by SDN controllers instead of multiple, vendor-specific devices and protocols. This open standards point is key from an end user perspective as it enables centralised management.
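To illustrate the programmatic-configuration point, here is a hedged sketch of the kind of small SDN program an administrator might write: it asks a controller, over a REST interface, to install a flow rule. The controller URL, endpoint and rule format are hypothetical placeholders rather than any particular vendor’s API, and the call will only succeed against a real controller.

```python
import json
import urllib.request

CONTROLLER = "https://sdn-controller.example.internal:8443"  # hypothetical address

def push_flow_rule(switch_id: str, match: dict, action: str, priority: int = 100):
    """Ask the controller to install a flow rule on a given switch."""
    rule = {"switch": switch_id, "match": match, "action": action, "priority": priority}
    req = urllib.request.Request(
        f"{CONTROLLER}/flows",                         # placeholder endpoint
        data=json.dumps(rule).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:          # illustrative only
        return resp.status

# Example: steer backup traffic onto a dedicated path during the overnight window.
push_flow_rule(
    switch_id="core-sw-07",
    match={"src_subnet": "10.20.0.0/16", "dst_port": 873},
    action="forward:uplink-2",
)
```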

 

Opening up

 

There’s still a way to go with NFV and SDN, but Intel is working across the networking industry to enable the transformation. We’re doing a lot of joint work in open source solutions and standards, such as OpenStack.org - unified computing management including networking, OpenDaylight.org - a platform for network programmability, and also the Cisco* Opflex Protocol – an extensible policy protocol. We’re also looking at how we proceed from here, and what needs to be done in order to build an open, programmable ecosystem.

 

Today I’ll leave you with this short interview with one of our cloud architects, talking about how Intel’s IT team has implemented software-defined, self-service networking. My next blog will be the last in this current series, and we’ll be looking at that other hot topic for all data centre managers – analytics. In the meantime, I’d love to hear your thoughts on how your business could use SDN to drive time, cost and labour out of the data centre.

 

To continue the conversation on Twitter, please follow us at @IntelITCenter or use #ITCenter.


*Other names and brands may be claimed as the property of others.

When an organization is considering implementing a mobile BI strategy, it needs to ask whether its current information technology (IT) and business intelligence (BI) infrastructure can support mobile BI. It must determine if there are any gaps that need to be addressed prior to going live.

 

When we think of an end-to-end mobile BI solution, there are several areas that can impact the user experience. I refer to them as choke points. Some of the risks associated with these choke points can be eliminated; others will have to be mitigated. Depending on the business model and how the IT organization is set up, these choke points may be dependent on the configuration of technology or they may hinge on processes that are embedded into business or IT operations. Evaluating both infrastructures for mobile BI readiness is the first step.

 

IT Infrastructure’s Mobile BI Readiness

 

The IT infrastructure typically includes the mobile devices, wireless networks, and any other services or operations that will enable these devices to operate smoothly within a set of connected networks, which span those owned by the business or external networks managed by third party vendors. As mobile BI users move from one point of access to another, they consume data and assets on these connected networks and the mobile BI experience should be predictable within each network’s constraints of flexibility and bandwidth.

 

Mobile device management (MDM) systems also play a crucial role in the IT infrastructure. Before mobile users have a chance to access any dashboards or look at data on any reports, their mobile devices need to be set up first. Depending on the configuration, enablement may include device and user enrollment, single sign-on (SSO), remote access, and more.

 

Additionally, failing to properly enroll either the device or the user may result in compliance issues or other risks. It’s critical to know how much of this comes preconfigured with the device and how the user will manage these tasks. When you add to the mix the bring-your-own-device (BYOD) arrangements, the equation gets more complex.

 

BI Infrastructure’s Mobile BI Readiness

 

Once the user is enabled on the mobile device and business network, the BI infrastructure will be employed. The BI infrastructure typically includes the BI software, hardware, user profiles, and any other services or operations that will enable consumption of BI assets on mobile devices. The mobile BI software, whether it is an app or web-based solution, will need to be properly managed.

 

The first area of concern for an app-based solution is the installation of the app from an app store. For example, does the user download the app from iTunes (in the case of an iPad or iPhone) or from an IT-managed corporate app store or gallery? Is it a custom-built app developed in-house or is it part of the current BI software? Does the app come preconfigured with the company-supplied mobile device (similar to how email is set up on a PC) or is the user left alone to complete the installation?

 

When the app is installed, are we done? No. In many instances, the app will need to be configured to connect to the mobile BI servers. Moreover, this configuration step needs to come after obtaining proper authorizations, which involves entering the user’s access credentials (at minimum a user ID and password, unless SSO can be leveraged).

 

If the required authorization requests, regardless of existing BI user profiles, are not obtained in time, the user configuration can only be partially completed. More often than not, mobile BI users will need assistance with technical and process-related topics. Hence, streamlining both the installation and configuration steps will further improve the onboarding process.
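One way to streamline those steps is a simple onboarding preflight that support staff (or the app itself) can run before a user’s first session. The sketch below is purely illustrative; the named checks are placeholders for whatever your MDM and BI platforms actually expose.

```python
# Hypothetical onboarding preflight for a mobile BI user.
REQUIRED_STEPS = {
    "device_enrolled": True,         # MDM enrollment completed
    "app_installed": True,           # BI app present on the device
    "server_configured": False,      # app pointed at the mobile BI servers
    "credentials_or_sso": True,      # user ID/password entered, or SSO available
    "bi_profile_authorized": False,  # BI-side authorization granted in time
}

def onboarding_gaps(steps: dict) -> list:
    """Return the steps that still block a complete user configuration."""
    return [name for name, done in steps.items() if not done]

gaps = onboarding_gaps(REQUIRED_STEPS)
if gaps:
    print("User configuration only partially complete; outstanding:", ", ".join(gaps))
else:
    print("User is fully onboarded for mobile BI.")
```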

 

Bottom Line

 

Infrastructure is the backbone of any technology operation, and it’s equally important for mobile BI. Close alignment with enterprise mobility, as I wrote in “10 Mobile BI Strategy Questions: Enterprise Mobility,” will help to close the gaps in many of these areas. When we’re developing a mobile BI strategy, we can’t take the existing IT or BI infrastructure for granted.

 

Where do you see the biggest gap when it comes to technology infrastructure in mobile BI planning?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

GRAHAM PALMER

IT Refresh is Over-Rated

Posted by GRAHAM PALMER Jul 20, 2015


 

I have to admit my headline is a little tongue in cheek but please hear me out. 

 

With the recent end of support for Windows Server 2003, the noise around refresh can be deafening.  I’d bet many people have already tuned out or have grown weary of hearing about it. They’ve listened to the incessant arguments about increased risks of security breaches, issues around compatibility, and estimates of costly repair bills should something go wrong.

 

While it’s true all of these things will leave a business vulnerable and less productive, it appears that’s just not enough for many companies to make a shift. 

 

Microsoft Canada estimates that 40% of its install base is running Windows Server 2003, illustrating Canadian companies’ conservatism when it comes to major changes to their infrastructure.  I’d suggest that in this current economic environment, being conservative in the adoption of new technology is leaving us vulnerable to an attack that could have a far reaching impact.

 

But let’s set that aside for the time being. You’ll be pleased to know I’m not going to talk about all the common reasons for refreshing your hardware and software, including security, productivity, downtime, and support costs. All these issues are important and valid; however, I have no doubt we heard a great deal about all of them in the weeks leading up to July 14th.

 

Instead I offer you a slightly different perspective.

 

In a previous post I wrote about a global marketplace that is getting more competitive.  Canadian companies are, and will be, facing off against larger enterprises located around the world. Competition is no longer from across the street or in the neighboring town. Canadian trade agreements have been signed or updated with numerous countries or economic regions globally including the European Union, China, Korea, and Chile. While these agreements signal opportunities for businesses to gain access to new markets, they also herald the risk of increased domestic competition.

 

To continue to succeed, businesses will have to find more efficient ways of doing whatever they need to get done.  This means pushing beyond their traditional comfort zone towards greater innovation.  This push will undoubtedly be enabled by advances in technology to support productivity gains.

 

As companies consider what it will take to succeed into the future, I believe you need to look at the people you have working for your company.  Are these the employees who can drive your company forward? Are they future leaders or innovators who can help you compete against global powerhouses?

 

Here’s where an important impetus to refresh your technology begins to take shape.

 

The employees of Generation Y and Generation C, also known as the connected generation, want to work for progressive, leading-edge companies and are shying away from large, stodgy traditional businesses or governments. Being perceived as dated will limit your recruitment options, as the top candidates choose firms that are progressive in all areas of their business.

 

I’ve seen statistics indicating that 75% of the workforce will be made up of Generation Y workers by 2025. They are already sending ripples of change throughout corporate cultures and have started to cause a shift in employment expectations. These employees aren’t attracted to big blue chip firms, and in fact only 7% of millennials currently work for Fortune 500 companies.  Instead, they are attracted to the fun, dynamic, and flexible environments touted by start-ups.

 

The time has never been better to decide if you are going to continue to rinse and repeat, content to stick with the status quo, or if you are ready to embrace a shift that could take your business to the next level and at the same time position yourself to become more attractive to the next generation of employees.

 

So let’s talk a little about the opportunity here: In my experience from the UK, and I would argue it is similar in any market, new opportunities are realized first by small- and medium-sized businesses. They are more nimble and typically they are in a better position to make a significant change more quickly.  Since they are also closest to their customers and their local community, they can shift gears more rapidly to respond to changes they are seeing in their local market and benefit from offering solutions first that meet an emerging need.

 

SMBs are also in a strong position to navigate and overcome barriers to adoption of new technology since they don’t have that massive install base that requires a huge investment to change. In other words, they don’t have a mountain of technology to climb in order to deliver that completely new environment, but it takes vision and leadership willing to make a fundamental shift that will yield future dividends.

 

The stark truth is that millennials are attracted to and have already adopted the latest technology. They don’t want to take a step backward when they head into the workplace.  The technologies they will use and the environment they will be working in are already being factored into their decision about whether or not to accept a new position.

 

How do you think your workplace would be viewed by these future employees?  It goes without saying that we need to equip people to get their jobs done without worrying about the speed of the technology they’re using. No one wants delays caused by technology that is aging, slowing them down, and preventing them from doing what they need to get done.  Today’s employees are looking for more: more freedom, more flexibility, and more opportunities. But the drive to provide more is accelerating a parallel requirement for increased security to keep sensitive data safe.

 

I’d offer these final thoughts:  In addition to the security and productivity reasons, companies challenged to find talent should consider a PC refresh strategy as a tool to attract the best and brightest of the next generation.

 

Technology can be an enabler to fundamentally transform your workplace but you need a solid foundation on which to build.  A side benefit is that it will also help deliver the top talent to your door.

Is your mobile business intelligence (BI) strategy aligned with your organization’s enterprise mobility strategy? If you’re not sure what this means, you’re in big trouble. In its simplest form, enterprise mobility can be considered a framework to maximize the use of mobile devices, wireless networks, and all other related services in order to drive growth and profitability. However, it goes beyond just the mobile devices or the software that runs on them to include people and processes.

 

It goes without saying that enterprise mobility should exist in some shape or form before we can talk about mobile BI strategy, even if the mobile BI engagement happens to be the first pilot planned as a mobility project. Therefore, an enterprise mobility roadmap serves as both a prerequisite for mobile BI execution and as the foundation on which it relies.

 

When the development of a successful mobile BI strategy is closely aligned with the enterprise mobility strategy, the company benefits from the resulting cost savings, improvement in the execution of the mobile strategy, and increased value.

 

Alignment with Enterprise Mobility Results in Cost Savings

 

Although mobile BI will inherit most of its rules for data and reports from the underlying BI framework, the many components that it relies on during execution will be dependent on the enterprise rules or lack thereof. For example, the devices on which the mobile BI assets (reports) are consumed will be offered and supported as part of an enterprise mobility management system, including bring-your-own-device (BYOD) arrangements. Therefore, operating outside of these boundaries could not only be costly to the organization but it could also result in legal and compliance concerns.

 

Whether the mobile BI solutions are built in-house or purchased, as with any other technology initiative, it doesn’t make any sense to reinvent the wheel. Existing contracts with software and hardware vendors could offer major cost savings. Moreover, fragmented approaches in delivering the same requirement for multiple groups and/or for the same functionality won’t be a good use of scarce resources.

 

For example, forecast reports built for sales managers within the customer relationship management (CRM) system and forecast reports developed on the mobile BI platform may offer the same or similar functionality and content, resulting in confusion and duplicated effort.

 

Leveraging Enterprise Mobility Leads to Improved Execution

 

If you think about it, execution of the mobile BI strategy can be improved in all aspects if an enterprise mobility framework exists that can be leveraged. The organization’s technology and support infrastructure (two topics I will discuss later in this series) are the obvious ones worth noting. Consider this—how can you guarantee effective delivery of BI content when you roll out to thousands of users without having a robust mobile device support infrastructure?

 

If we arm our sales force with mobile devices around the same time we plan to deliver our first set of mobile BI assets, we can’t expect flawless execution and increased adoption. What if the users have difficulty setting up their devices and have nowhere to turn for immediate and effective support?

 

Enterprise Mobility Provides Increased Value for Mobile BI Solutions

 

By aligning our mobility BI strategy with our organization’s enterprise mobility framework, we not only increase our chances of success, but, most importantly, we have the opportunity to provide increased value beyond pretty reports with colorful charts and tables. This increased value means that we can deliver an end-to-end solution even though we may not be responsible for them under the BI umbrella. Enterprise mobility components such as connectivity, device security, or management contribute to a connected delivery system that mobile BI will share.

 

Bottom Line: Enterprise Mobility Plays an Important Role

 

Enterprise mobility will influence many of mobile BI’s success criteria. When we’re developing a mobile BI strategy, we need not only to stay in close alignment with the enterprise mobility strategy so we can take advantage of the synergies that exist, but also to consider the potential gaps that we may have to address if the roadmap does not provide timely solutions.

 

How do you see enterprise mobility influencing your mobile BI execution?

 

Stay tuned for my next blog in the Mobile BI Strategy series.

 

Connect with me on Twitter at @KaanTurnali and LinkedIn.

 

This story originally appeared on the SAP Analytics Blog.

For millions of years, humans searched for the right person to love based on emotion, intuition, and a good bit of pure luck. Today, it's much easier to find your soul mate using the power of big data analytics.

 

Analyzing Successful Relationships

 

This scientific approach to matchmaking is clearly successful. On average, 438 people in the United States get married every day because of eHarmony. That’s the equivalent of nearly four percent of new marriages.  

Navigating an Ocean of Data

To keep up with its fast-growing demand, eHarmony needed to boost its analytics capabilities and upgrade its cloud environment to support a new software framework. It also needed a solution that was scalable to keep up with tomorrow's needs.

Robust Private Cloud Environment

eHarmony built a new private cloud environment that lets it process affinity matching and conduct machine learning research to help refine the matching process. The cloud is built on Cloudera CDH software—an Apache Hadoop software distribution that enables scalable storage and distributed computing while providing a user interface and a range of enterprise capabilities. 

The infrastructure also includes servers equipped with the Intel® Xeon® processor E5 v2 and E5 v3 families. eHarmony chose the Intel Xeon processors because they had the performance it needed plus large-scale memory capacity to support the memory-intensive cloud environment. eHarmony's software developers also use Intel® Threading Building Blocks (Intel® TBB) to help optimize new code.  

More and More Accurate Results

This powerful new cloud environment can help eHarmony accommodate more complex analyses—and ultimately produce more personalized matches that improve the likelihood of relationship success. It can also analyze more data faster than before and deliver within overnight processing windows.  

Going forward, eHarmony is ready to handle the fast-growing volume and variety of user information it takes to match millions of users every day. 

You can take a look at the eHarmony solution here or read more about it here. To explore more technology success stories, visit www.intel.com/itcasestudies and follow us on Twitter.

 

There’s an old joke about Model-Ts – you could get them in any color you wanted, as long as you wanted black. That’s sort of how enterprise client computing felt 15 years ago: Here’s your monitor, there’s your CPU tower with a bunch of cables. One size fits all. As Client Product Manager at Intel Corp., I can tell you that nothing could be further from the truth today. My job is to develop and execute our IT client computing strategy at Intel, including recommended refresh cycles, procurement, and platform offerings and management for Intel employees.

 

Just as cars now come in all colors, shapes, and sizes, client computing has evolved far beyond the traditional monitor and tower. Technology, how people use it, and the processes we implement to manage it have all undergone significant transformations. Intel IT has evolved from the “one size fits all” approach to client computing and now offers multiple technology choices so that employees can select a device that best suits their way of working and their job requirements.

 

The “PC fleet” at Intel is now the “client computing fleet” and encompasses many form factors. The mobile workforce movement ushered in laptops, and in recent years the consumerization of IT has sparked huge growth in the bring-your-own-device arena. Moore’s law continues to rule, enabling people to do more and more with smaller and smaller devices. At Intel, we’re seeing a continual rise in 2-in-1 and tablet usage for certain segments of the employee population.

 


But one of the most exciting areas of client computing at Intel is desktop computing. As described in a recent IT@Intel white paper, the familiar “desktop”-class PC continues to fill an important role, but desktop computing as a whole has morphed beyond the desk. New form factors are demonstrating their relevance to enterprise client computing. Here are a few examples of form factors we are putting to use at Intel:

  • Mini PCs. The Intel® NUC (Next Unit of Computing) is a good example of a mini PC – an energy-efficient, fully functioning small form factor PC. Some can literally fit in the palm of your hand.
  • All-in-one PCs. This form factor integrates the system's internal components into the same case as the display, eliminating some connecting cables and allowing for a smaller footprint. Less clutter, touch capabilities, and desktop performance are just some of the advantages that AIOs offer.
  • Compute sticks. Continuing the Moore’s law phenomenon, compute sticks are PCs that can fit in your pocket, providing the capability to turn any HDMI* display (think TV, digital sign, whatever) into a PC, running either Windows* 8.1 with Bing* or Ubuntu* 14.04 LTS.

 

These “stationary computing devices” can be used in a variety of enterprise settings. Mini PCs can power digital signage and conference room collaboration. All-in-ones bring the power of touch to the desktop and are particularly useful in public settings such as kiosks and lobbies. Compute sticks combine the ease of mobility with the powerful computing capabilities associated with traditional desktop PCs. You can read more about these use cases in our recent white paper “The Relevance of Desktop Computing in a Mobile Enterprise.”

 

Traditional desktop PCs are not remaining static either. In particular, I’m interested in the increasing wireless capabilities of desktop PCs using PCI-Express* (PCIe*). A single PCIe lane can transfer 200 MB of traffic in each direction per second – a significant improvement over standard PCI connections.
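Taking the roughly 200 MB/s per-lane figure above, a quick back-of-the-envelope calculation shows why wider links matter; these are illustrative numbers, not benchmarks:

```python
per_lane_mb_s = 200  # usable throughput per lane, per direction (figure quoted above)
for lanes in (1, 4, 16):
    total = per_lane_mb_s * lanes
    print(f"x{lanes}: ~{total} MB/s per direction (~{total * 2} MB/s counting both directions)")
```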

 

Intel IT is actively preparing for a future workplace that incorporates many form factors as well as many input methods, including touch, voice, sensor, and gesture. We are transitioning from the traditional IT model of end-device control and management to a device-independent, services-based model. We are also revisiting our client computing management practices, procurement processes, and other aspects of client management. I hope to address these topics in future blogs. In the meantime, I’d appreciate hearing from readers. How is client computing changing in your organization? How are you adapting to these changes? Please share your thoughts and insights with me – and other IT professionals – by leaving a comment below. Join our conversation on the IT Peer Network.

Today, I’d like to take a peek at what’s around the corner, so to speak, and put the spotlight on a new and exciting area of development. We’ve spent some time in this blog series exploring Software Defined Infrastructure (SDI) and its role in the journey to the hybrid cloud. We’ve looked at what’s possible now and how organisations early to the game have started to use technologies like orchestration layers and telemetry to increase agility whilst driving time, cost and labour out of their data centres. But where’s it all going next?

 

One innovation that we’re just on the cusp of is server disaggregation and composable resources (catchy, huh?). As with much of the innovation I’ve spoken about during this blog series, this is about ensuring the datacentre infrastructure is architected to best serve the needs of the software applications that run upon it. Consider the Facebooks*, Googles* and Twitters* of the world – hyper-scale cloud service providers (CSPs), running hyper-scale workloads. In the traditional enterprise, software architecture is often based on virtualisation – allocating one virtual machine (VM) to one application instance as demand requires. But what happens when this software/hardware model simply isn’t practical?

 

This is the ‘hyper-scale’ challenge faced by many CSPs. When operating at hyper-scale, response times are achieved by distributing workloads over many thousands of server nodes concurrently; hence a software architecture designed to run on a ‘compute grid’ is used to meet scale and flexibility demands. An example of this is the MapReduce model, used to process terabytes of data across thousands of nodes.
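To make that programming model concrete, here is a toy, single-process word count written in the shape of MapReduce. It is only a sketch of the map, shuffle and reduce phases – in a real Hadoop deployment each phase would be spread across thousands of nodes – and the sample documents are invented for illustration.

```python
# Toy, single-process illustration of the MapReduce programming model
# (word count). Real deployments distribute each phase across many nodes.
from collections import defaultdict
from itertools import chain

documents = [
    "the quick brown fox",
    "the lazy dog",
    "the quick dog",
]

def map_phase(doc):
    # Map: emit a (word, 1) pair for every word. Each document is handled
    # independently, so this step parallelises naturally across nodes.
    return [(word, 1) for word in doc.split()]

mapped = chain.from_iterable(map_phase(d) for d in documents)

# Shuffle: group all values that share a key (the word).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: combine the grouped values for each key; each key could again be
# reduced on a different node.
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)  # e.g. {'the': 3, 'quick': 2, 'brown': 1, ...}
```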

 

However, along with this comes the requirement to add capacity at breath-taking pace whilst simultaneously achieving previously unheard-of levels of density to maximise space usage. Building new datacentres, or ‘pouring concrete’, is not cheap and can adversely affect service economics for a CSP.


Mix-and-Match Cloud Components

 

So, what’s ‘The Big Idea’ with server disaggregation and composable resources?

 

Consider this: What if you could split all the servers in a rack into their component parts, then mix and match them on-demand in whatever configuration you need in order for your application to run at its best?

 

Let me illustrate this concept with a couple of examples. Firstly, consider a cloud service provider with users uploading in excess of 50 million photographs a day. Can you imagine the scale on which infrastructure has to be provisioned to keep up? In addition, hardly any of these pictures will be accessed after initial viewing! In this instance, the CSP could dynamically aggregate, say, lower power Intel® Atom™ processors with cheap, high capacity hard drives to create economically appropriate cold storage for infrequently accessed media.

 

Alternatively, a CSP may be offering a cloud-based analytics service. In this case, the workload could require aggregation of high performance CPUs coupled with high bandwidth I/O and solid state storage – all dynamically assembled, from disaggregated components, on-demand.


The Infinite Jigsaw Puzzle

 

This approach, the dynamic assembly of composable resources, is what Intel terms Rack Scale Architecture (RSA).

 

RSA defines a set of composable infrastructure resources contained in separate, customisable ‘drawers’. There are separate drawers for different resources – compute, memory, storage – like a giant electronic pick-and-mix counter. A top-of-rack switch then uses silicon photonics to dynamically connect the components together to create a physical server on demand. Groups of racks – known as pods – can be managed and allocated on the fly using our old friend the orchestration layer. When application requirements change, the components can be disbanded and recombined into new configurations as needed – like having a set of jigsaw puzzle pieces that can be put together in infinite ways to create a different picture each time.
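As a thought experiment, here is what composing and decomposing a node from pooled resources might look like to an orchestration layer. This is a purely hypothetical sketch in Python – the PodManager class, its method names and the resource labels are invented for illustration and are not Intel’s Rack Scale Architecture or pod-manager API – but it mirrors the cold-storage and analytics examples above.

```python
# Hypothetical sketch of composing servers from disaggregated resources.
# Class names, methods and resource labels are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ComposedNode:
    name: str
    cpus: list = field(default_factory=list)
    storage: list = field(default_factory=list)

class PodManager:
    """Tracks free resources in a pod and assembles logical servers on demand."""

    def __init__(self, cpus, storage):
        self.free_cpus = list(cpus)
        self.free_storage = list(storage)

    def compose(self, name, cpu_type, cpu_count, storage_type, drive_count):
        # Pick matching components from the free pools.
        cpus = [c for c in self.free_cpus if c["type"] == cpu_type][:cpu_count]
        drives = [d for d in self.free_storage if d["type"] == storage_type][:drive_count]
        if len(cpus) < cpu_count or len(drives) < drive_count:
            raise RuntimeError("not enough free resources in the pod")
        for c in cpus:
            self.free_cpus.remove(c)
        for d in drives:
            self.free_storage.remove(d)
        return ComposedNode(name, cpus, drives)

    def decompose(self, node):
        # Return the components to the pool so they can be recombined later.
        self.free_cpus.extend(node.cpus)
        self.free_storage.extend(node.storage)

# Cold storage: low-power CPUs plus cheap, high-capacity drives.
pod = PodManager(
    cpus=[{"type": "atom"}] * 8 + [{"type": "xeon"}] * 8,
    storage=[{"type": "hdd"}] * 24 + [{"type": "ssd"}] * 8,
)
cold_store = pod.compose("cold-storage-01", "atom", 4, "hdd", 12)

# Later, hand the parts back and assemble an analytics node instead.
pod.decompose(cold_store)
analytics = pod.compose("analytics-01", "xeon", 8, "ssd", 4)
print(analytics.name, len(analytics.cpus), "CPUs,", len(analytics.storage), "SSDs")
```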

 

Aside from the fun of all the creative possibilities, there are a lot of benefits to this type of approach:

 

  • Using silicon photonics, which transmits information by laser rather than by physical cable, means expensive cabling can be reduced by as much as three times¹.
  • Server density can be increased by 1.5x and power provisioning reduced by up to six times¹.
  • Network uplink can be increased by 2.5x and network downlink by as much as 25 times¹.

 

All this means you can make optimal use of your resources and achieve granular control with high-level management. If you want to have a drawer of Intel Atom processors and another of Intel Xeon processors to give you compute flexibility, you can. Want the option of using disk or SSD storage? No problem. And want to be able to manage it all at the pod level with time left over to focus on the more innovative stuff with your data centre team? You got it.

 

Some of these disaggregated rack projects are already underway. You may, for instance, have heard of the Project Scorpio initiatives in China and Facebook’s Open Compute Project.

All this is a great example of how software-defined infrastructure can help drive time, cost and labour out of the data centre whilst increasing business agility, and it will continue to do so as the technology evolves. Next time, we’ll be looking into how the network fits into SDI, but for now do let me know what you think of the composable resource approach. What would it mean for your data centre, and your business?

 

1 Improvement based on a standard rack with 40 DP servers, 48-port ToR switch, 1GE downlink/server and 4 x 10GE uplinks, cables: 40 downlink and 4 uplink, vs. a rack with 42 DP servers, SiPh patch panel, 25Gb/s downlink, 100Gb/s uplink, cables: 14 optical downlink and 1 optical uplink. Actual improvement will vary depending on configuration and actual implementation.


Tests document performance of components on a particular test, in specific systems. Differences in hardware, software, or configuration will affect actual performance. Consult other sources of information to evaluate performance as you consider your purchase.  For more complete information about performance and benchmark results, visit http://www.intel.com/performance

One of the techniques that Intel IT had to learn in order to gain US$351 million in value from analytics was how to migrate between Hadoop versions. Migrating to a different Hadoop version, whether from an older release or from another distribution, raises a number of questions for any organization that is thinking about attempting it. Is the migration worth the effort? Will production software break? How can we do the transition with minimal impact to users? Intel’s IT organization faced all of these questions and challenges when it migrated from Intel’s own custom version of Hadoop, known as IDH, to Cloudera’s distribution of Hadoop (CDH). In this white paper, Intel IT’s Hadoop engineering team describes its methodology and how it has seamlessly migrated a live production cluster through three different version changes.

 

The team did a feature-by-feature comparison of Intel’s IDH and Cloudera’s CDH and determined that moving to Cloudera’s Hadoop distribution had significant advantages. Once the decision was made to migrate, the team outlined three major concerns:

 

  • Coping with Hadoop variations
  • Understanding the scope of the changes
  • Completing migration in a timely manner

 

The first concern is about the need to understand how to configure the new version properly. The second is about the effects of the change – making sure that application developers, internal customers, and the code they run on the cluster would be minimally affected. The last concern expresses the need to make any migration quick and, at best, transparent to live users.


 

Intel developed what it felt were six best practices for migration:

 

  1. Find Differences with a Comparative Evaluation in a Sandbox Environment
  2. Define Our Strategy for the New Implementation
  3. Upgrade the Hadoop Version
  4. Split the Hardware Environment
  5. Create a Preproduction-to-Production Pipeline
  6. Rebalance the Data

 

The first practice addresses the first and second concerns listed above: the comparative evaluation identified differences between the IDH and CDH environments without disrupting production. Other practices, like creating a preproduction-to-production pipeline, were designed to address the last concern – migrating quickly and with minimal impact. Intel divided each version’s instances between servers in the same rack, leveraging the high-speed network within and between racks to move data to the new version with only one transfer.
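As an illustration of that single-transfer copy, the sketch below shows how data might be moved between the old and new clusters with DistCp, Hadoop’s standard bulk-copy tool. The white paper does not spell out the exact commands Intel ran, so the cluster hostnames and path here are hypothetical, and the hadoop binary is assumed to be on the PATH.

```python
# Hypothetical sketch: bulk-copy HDFS data from the old (IDH) cluster to the
# new (CDH) cluster with DistCp. Cluster hostnames and the path are invented.
import subprocess

SOURCE = "hdfs://idh-namenode:8020/user/warehouse"   # hypothetical IDH cluster
TARGET = "hdfs://cdh-namenode:8020/user/warehouse"   # hypothetical CDH cluster

subprocess.run(
    ["hadoop", "distcp",
     "-update",   # copy only files that are missing or have changed
     "-p",        # preserve permissions, ownership and other file attributes
     SOURCE, TARGET],
    check=True,   # raise an error if the copy job fails
)
```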

 

Using these methods, Intel IT’s Hadoop team completed the full migration from IDH to CDH in five weeks. Only one piece of production code needed to be changed, and that was because it called a library that was deprecated between Hadoop versions. Since the initial migration, this methodology has also been used for two version upgrades with no customer impact. Some of the team’s initial concerns about security have been mitigated with Cloudera’s implementation of Apache Sentry. Look for another white paper on that subject later this year.


When I came here about two years ago to lead Intel Canada as its new country manager, I was struck by the fact that Canada appeared largely untouched by the global economic recession that crippled many world economies. Having most recently worked in the UK and Western Europe, I was used to the recession dominating every business conversation; it drove virtually all investment strategies and caused a reset in the way business was executed by government and the private sector.

 

I saw a business and public sector that was transformed, paving the way for a new vision of how we can interact.  Just look at the UK public sector’s digital first strategy, which delivers new services first to constituents online. This measure not only cut costs but made government much more efficient.  It was truly transformative.

 

Then I arrived in Canada. It was like entering a different world, one that - fortunately or unfortunately - had been bypassed by the recession and had done a wonderful job of sidestepping that whole environment.

 

You might be wondering why I say Canada “unfortunately” bypassed the recession (one which irreparably damaged so many other economies). I think perhaps Canada missed an opportunity because it didn’t face the same kinds of economic hardships experienced by other markets. Canada’s GDP was comparatively healthy, so it didn’t need to look outside its borders for expansion, nor were business leaders required to focus on boosting productivity and efficiency to survive.

 

Fast forward to today: The deflated Canadian dollar and economic challenges presented by the dropping oil price are impacting businesses at a time when other markets are getting back to growth.

 

What my overseas experience has distilled for me is that Canada is today at a point where other countries were a few years ago. We are at an economic inflection point … a point in time where there is fundamental change afoot. During an inflection point, businesses can either seize the opportunities for significant growth and market share gains, or flounder, die and disappear.

 

Are Canadian companies ready to take advantage? Canadian business culture is typically more conservative when it comes to making decisions, which could be detrimental. But the collaborative business environment I’ve seen coast to coast can help companies make the transition successfully, for those willing to take the chance.

 

I see tremendous potential for growth in this country in a few fundamental areas:

  • Expand the delivery of online retail solutions.  I am a big believer in bricks and mortar retail but it needs to be complemented by a very comprehensive and rich online experience.  Canada is lagging far behind other markets in the adoption of online retailing but I believe strongly that to succeed in the future retailers will need a multi-channel strategy that includes bricks and mortar, e-commerce, loyalty programs, and new delivery models that integrate in-store pick up with online delivery.  They will also need to leverage big data to provide their customers with more choices and be more responsive to providing the products consumers want to purchase, where and how they want to buy them. I’ve watched with great interest the strides being taken by Canadian Tire which, as a traditional retailer, has embraced the future and is truly transforming their business along these lines.
  • We need a strong commitment at the highest levels to a public sector digital strategy that provides greater online access to services (not unlike the UK’s digital first strategy) and can dramatically cut the cost per interaction for government services. Hand in glove with this commitment is the need for a national policy to make sure that all Canadians, regardless of location or economic status, can get online and access those services. No one should be digitally excluded.
  • Embracing the cloud is another place Canadian companies are lagging behind globally (and I’m not talking exclusively about public clouds but rather all models from public to private and hybrid models). As the UK emerged from recession, cloud computing showed a rapid build out and I think Canada is poised for a similar growth spurt if companies are willing and able to break out of their traditional mold.
  • The employees of the future will be looking for new tools to be more effective and expand their ability to be mobile.  Companies need to be quicker to adopt new technologies and solutions to drive the knowledge economy of the future, if we want to retain the best and brightest employees.  Millennials are demanding a workplace that is more progressive and will go to companies that provide the flexible and collaborative environments they want.
  • The incubator community in Canada is very impressive and I have been amazed by the collaborative nature of businesses in this country where leaders are willing to meet, share ideas, and work together.  This bodes well for the future but there is a gap when it comes to funding research initiatives and then moving ideas into marketable commercial products. Leadership and commitment are currently lacking but there is an opportunity here once all the pieces are put in place.

 

There are positive signs Canada is poised for a significant leap forward in the adoption of new technology. Senior IDC Research Analyst Utsav Arora recently told a Mississauga audience that the conventional view that Canada lags 18 months behind the world is out of date. He feels that innovation and a start-up culture have cut the gap in big data adoption to between 6 and 12 months.

 

When something changes (like our economic climate), you can positively embrace it and use it to your benefit, or you can stand back and stare at it while everyone else benefits from that change. Businesses can no longer continue to rinse and repeat the way they have done for the last 10 years.

 

The time - and opportunity - for change is here.  In the immortal words of Winston Churchill, “A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.”

 

I believe Canadian companies are in a perfect place to seize opportunity from the difficulties we are currently facing, if they are willing.
