
The 2015 International Security Education Workshop drew attendees from academia, industry, and government across four continents to collaborate on improving worldwide cybersecurity education and promoting programs at colleges and universities.


The group, organized by the National Science Foundation (NSF), the Georgia Institute of Technology, and Intel Corporation, has awarded micro-grants to educators, set up workshops, and provides ongoing assistance to educators seeking sustainable cybersecurity teaching models.


With the rise in demand for cybersecurity professionals and the lack of available talent in the workforce, the industry is heavily dependent on academia to fill its staffing needs.  Some estimates put the shortfall at 1.5 million positions.


The International Security Education Workshop is working to increase student competency in cybersecurity, drive adoption of course content, and answer the question of what curriculum should define a cybersecurity degree.


A number of best practices are emerging to empower educators to teach cybersecurity successfully.  It is a rapidly changing domain in which skills and knowledge can become stale quickly, and the sheer breadth and depth of the field are enormous.  This has led to a highly diverse and inconsistent student experience, and to equally inconsistent skills among graduates.  Some of these challenges are being addressed through the anticipated establishment of a “cyber sciences” degree to standardize terms and curriculum.


One of the Cyber Education Project (CEP) goals is to present a draft of undergraduate program criteria for disciplines within the cyber sciences umbrella to the Accreditation Board for Engineering and Technology (ABET) by midyear 2017. ABET is the primary accrediting body for computer engineering, computer science, information technology, and information security degree programs in higher education. 


The long-term solution can only be achieved through partnership among academic organizations to build cohesive structures that improve cybersecurity education.




Twitter: @Matt_Rosenquist

Intel IT Peer Network: Collection of My Previous Blog Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Firstly I need to state that this blog is late.  Back in April of this year, I blogged about the World’s first 32 Node All Flash Virtual SAN with NVMe.  The reception to the demo we gave at EMC World was so enthusiastic that we decided to turn up the wick and add 32 more nodes, getting us to the maximum allowed cluster size in VMware’s Virtual SAN product.  Hence the title of this blog and its subtle nod to the Beatles.

This 64-node incarnation of Virtual SAN was more or less a doubling down of the hardware used in the 32-node version, and it was completed in time to be shown at the Intel Developer Forum in August and VMworld US in September (I did mention this blog is late).

Next up on tour is VMworld Europe, hosted in Barcelona during the week of October 11th.  The cluster itself will be online during Solutions Expo hours in the Intel booth during the conference.  Additionally, there is a breakout session, STO4688, on Tuesday October 13th at 5:00 PM, where John Hubbard and I will provide a detailed overview of the cluster build specifics, performance characteristics, and key learnings stemming from building a Virtual SAN cluster at this scale.

Cluster BOM Overview


The cluster itself is contained in two separate 48U 19” server racks.  Each rack comprises 32 Intel® Server System R1208WTTGS 1U servers, and their combined configuration rolls up into the cluster specifications below.

Cluster Specifications at a Glance

  • 6,400 Virtual Machines With Windows® Server
  • 64 Hyper-Converged VMware® ESXi Hosts
  • 2,304 Xeon® Cores
  • 8 TB DDR4 Memory
  • 500 TB Raw Flash
  • 100+ TB of Virtual SAN Cache
  • 400+ TB of Raw Datastore Storage
  • 2x Cisco* Nexus 93128TX Switches deployed in top-of-rack fashion
  • 192x 10Gbase-T Ports
  • 12x 40Gb QSFP Ports
  • 20+ kW Under Load, 40 kW Available


Cluster Performance

Testing a cluster of this size is an art unto itself.  The charts below show results for 128 active VMs (2 per host) running various workloads under Iometer.  The scope of the testing included measuring IOPS of random 4 KiB workloads at varying queue depths and read/write ratios, as well as measuring bandwidth of sequential 128 KiB read and write workloads, also at varying queue depths.
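To make the workload matrix concrete, here is a minimal Python sketch of how a similar grid of tests could be expressed as fio job files. The cluster testing described above was done with Iometer, so treat fio as a stand-in; the target path, queue depths, read/write mixes, and runtimes below are illustrative assumptions, not the actual test parameters.

```python
# Sketch: generate fio job files covering a test matrix similar to the one
# described above (4 KiB random I/O at several queue depths and read/write
# mixes, plus 128 KiB sequential reads and writes). Iometer was the tool
# actually used for the cluster tests; fio is shown here only as a widely
# available stand-in. The target path below is a placeholder.
from pathlib import Path

TARGET = "/dev/vsan_test_disk"   # hypothetical test device/file inside a guest VM
QUEUE_DEPTHS = [1, 4, 16, 32]    # illustrative values, not the exact matrix used
READ_MIXES = [100, 70, 0]        # % reads for the 4 KiB random workloads

def job_text(name, bs, rw, iodepth, rwmixread=None):
    # Build one self-contained fio job section as plain text.
    lines = [
        f"[{name}]",
        f"filename={TARGET}",
        "ioengine=libaio",
        "direct=1",
        "runtime=300",
        "time_based=1",
        f"bs={bs}",
        f"rw={rw}",
        f"iodepth={iodepth}",
    ]
    if rwmixread is not None:
        lines.append(f"rwmixread={rwmixread}")
    return "\n".join(lines) + "\n"

out = Path("jobs")
out.mkdir(exist_ok=True)

# 4 KiB random workloads across queue depths and read/write ratios
for qd in QUEUE_DEPTHS:
    for mix in READ_MIXES:
        name = f"rand4k_qd{qd}_r{mix}"
        (out / f"{name}.fio").write_text(job_text(name, "4k", "randrw", qd, mix))

# 128 KiB sequential read and write bandwidth workloads
for qd in QUEUE_DEPTHS:
    for rw in ("read", "write"):
        name = f"seq128k_qd{qd}_{rw}"
        (out / f"{name}.fio").write_text(job_text(name, "128k", rw, qd))

print(f"Wrote {len(list(out.glob('*.fio')))} job files to {out}/")
```

Each generated file can then be run from inside a test VM with `fio jobs/<name>.fio`.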


Final Thoughts

We continue to experiment with and refine reference Virtual SAN architectures at both large and small scales. If you have thoughts as to specific usages/workloads/benchmarks you’d like to see run under Virtual SAN, please leave me a note in the comments as we are always curious to see how people are using these technologies in the wild.



Monday, January 6, 2014: Not since the blizzard of ‘78 had a day started in this way. After receiving close to 18” of snow on Sunday, temperatures plunged to wind chills of 40 and 50 below. My phone rang early that morning. It was our CEO. We needed to have an executive conference call. The mayor had declared a travel ban. We needed to shut down all of our locations (some 80 of them). We needed to let employees know. Very quickly we initiated our communication plan.

Within minutes of settling in on the couch to watch more of the morning news coverage of the Blizzard of ‘14, my phone rang again. This time it was our Senior Director of IT Operations. His words sent a chill down my spine, even though I was warm and cozy inside. “There is a major power outage in downtown Indianapolis. Our headquarters is without power. All of our servers have shut down, we are dead in the water.”

“Well, at least we are shut down today,” I thought.

“IPL says it will be 48 hours or longer until we have power. The team is on a Google Hangout right now discussing options.”

I jumped on the Hangout and got a quick recap of the situation. Everything at corporate is down (including our server room). UPS batteries are depleted and the servers have shut down. IPL says 48 to 72 hours. Three options: 1) declare a disaster and begin recovery to our warm site on the west side; 2) rent a generator, have it delivered and installed, and power the server room; 3) wait. Knowing our history of disaster recovery testing, I advised the team to explore the feasibility of option 2, while I instituted phase I of our Business Continuity Plan by sending notice to the Executive Team and firing up a conference bridge (the Executives weren’t quite Google Hangout savvy).

With the Hangout still live in my home office, I explained the status to our CEO, COO and CFO. A disaster recovery would realistically take 24 hours AND it was one-directional. Once live at the warm site, it would take weeks or months of planning to “come back” to corporate. The generator option was a possibility, but we didn’t know yet. Or, we wait. The line was silent. None of those options were appealing. I quickly pointed out the good news: email was still working! (My boss, the CEO, loves it when I point out the obvious, especially when it underscores what a great decision it was to move to Google and the cloud. It’s kind of like telling your wife “I told you so”! Goes over VERY well!)

Putting the conference bridge on hold, I jumped back to the Hangout just in time to hear someone ask, “Did anyone check the power at the warm site?” GREAT question, since our warm site was eight miles due west of headquarters and the storm had rolled in from the west. I could hear the click, click, clack, click of Daniel checking the status. “Scratch option one. The warm site is dead too.”

With that, I went back to the conference bridge. Since doing nothing seemed like a CLM (career limiting move), I informed them the Warm Site was down, and that we were executing option 2.

Fast forward to July. It was now about 130 degrees warmer. We were hosting visitors from another Goodwill organization in our board room. Down the hall, we were conducting a Disaster Recovery Test with our Mission Partners (that’s what we call our users...I hate that word...so we call them Mission Partners).

Let me repeat that in case you missed it. The CIO was sitting in a conference room talking with three or four executives from another Goodwill organization, while his team was conducting a disaster recovery test, complete with Mission Partner testing.

OK, if you are still not seeing the nuance, let me give you some background. In 2009, we went live with our Business Continuity and Disaster Recovery Plan, including our warm site. Our investment was about nine months of work and $500,000. That fall, we conducted a recovery test; our users (because we called them that then) all gathered at the warm site to test their systems. Everything passed.

In 2010, we had a new CIO (me!), a new systems engineer, and a couple of other new staff members. It was closing in on time to do our annual Business Continuity Test (including a mock scenario). Our systems engineer reviewed the documentation, spent some time at the warm site, and then came into my office. “I don’t know how they did it. They had to fake it! It had to be smoke and mirrors. There is no WAY they recovered the systems! I need two months to prepare.”

GREAT, new CIO, successful test last year and we need two months to prepare this year. THAT is going to go over well. Guess I will have to use another punch on the New Guy Card! Two months later, we conducted another successful test.

October 2011, time for another test. I called the engineer into my office. “Are we ready?” I asked.

“Well, we’ve made some changes to the environment that have not been replicated to the DR site, you see, we’ve been busy. I need a couple months to get ready.”

With steam coming out of my ears, I let him know we needed to be ready, we needed to document, and we needed to keep the environments in sync (shame on me, I thought we were doing all of that!).

A couple months later, we conducted the test. While it was declared successful, there were some bumps. At our lessons-learned meeting, the team was, well, they were whining about not having enough time. After listening, I asked, “If we had a disaster today, would we be ready?” Again, after several minutes of this and that, I asked, “If we had a disaster today, would we be ready?” After about a minute of this and that, I interrupted, “I am declaring a disaster. This is a test and only a test. However, we are implementing our recovery NOW!”

After looking at me for several minutes and then realizing I was serious, the team headed out to the warm site to recover our systems...again.

It was now fall of 2012. I was sick of the words “Disaster Recovery Test”, yet it was that time again. We had a new systems engineer, the prior one leaving earlier in the year. I stopped by his desk to ask about our preparedness for our disaster recovery test. “I’ve been looking at it. I don’t know how it has ever worked. They must have faked it. It had to be smoke and mirrors. I need two months.”  Given he was now The New Guy, I let him punch his New Guy Card and gave him the two months. The test was successful.

Now do you see it? The CIO was sitting in a conference room talking with three or four executives from another Goodwill organization, while his team was conducting a disaster recovery test, complete with Mission Partner testing. After that history? How could DR testing be a non-event?

It started early in 2013. I was in my office with John Qualls of Bluelock and Steve Bullington of TWTelecom (Level 3). They were describing to me a new product and service from Bluelock and a partnership with TWTelecom. Bluelock was touting RaaS, or DRaaS if you will: Disaster Recovery as a Service, paired with TWTelecom’s new Dynamic Capacity bandwidth. What?!!? You mean I can get rid of the warm site? Replace it with the elastic capacity of DR in the cloud? Combined with a team of professionals to manage it all? Leveraging bandwidth that can be dynamic based on our needs? All for less than I was spending today to depreciate the warm site investment? No more smoke and mirrors? No more two months to prepare? Seemed like a no-brainer! Where do I sign up?

As luck would have it, our initial investment would be fully depreciated in the 3rd quarter of 2013. We were faced with a forklift upgrade to replace our servers and SAN at the warm site. The ROI was overwhelming. Due to competing priorities, we slated this project to start in mid-December so it would be complete by early in 2014. (If only I had a crystal ball!)

The project itself was pretty straightforward: establish the connectivity between the sites, install the Zerto agents on our servers, replicate the data, and test! Easy-Peasy! We did experience some challenges (shocked to hear that, aren’t you?). The biggest challenges were the visibility into our own environment; the initial seeding of the replication; and design hangover.

The visibility issue could really be summed up as “we didn’t know what we didn’t know” about our own environment. Over time there had been a lot of cooks in the kitchen. We had a lot of servers whose purpose we weren’t quite sure of, combined with terabytes of data we weren’t sure of either, which led to a lot of research to straighten the spaghetti (see how I did that? cooks in the kitchen...straighten the spaghetti...oh, never mind, back to the story).

The next challenge was the initial seed. Even though we knew the amount of data that had to replicate, and we sized the pipe accordingly, it was still taking an inordinate amount of time to create the first replication. Leveraging the Dynamic Capacity feature, we tripled the size of the pipe. It still took longer than anticipated; our own infrastructure became the limiting factor.
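As a rough illustration of why the seed took so long, here is a back-of-the-envelope calculation in Python. The data volume and link speeds are hypothetical (the post does not state the actual pipe size); the 15 TB figure is borrowed from the DR test described later purely as an order of magnitude.

```python
# Back-of-the-envelope seeding time: how long does an initial replication take
# for a given data set and WAN link? All figures are illustrative; the post
# does not state the actual pipe size. The 15 TB figure comes from the DR test
# described later and is used here only as an order-of-magnitude input.
def seed_days(data_tb, link_mbps, efficiency=0.7):
    """Rough days to push data_tb terabytes over a link_mbps link.

    efficiency accounts for protocol overhead and for the source storage or
    network not sustaining line rate (the "our own infrastructure became the
    limiting factor" effect noted above).
    """
    data_bits = data_tb * 1e12 * 8
    effective_bps = link_mbps * 1e6 * efficiency
    return data_bits / effective_bps / 86_400

for mbps in (100, 300, 1000):   # e.g. a base pipe, the pipe tripled via Dynamic Capacity, 1 Gbps
    print(f"{mbps:>5} Mbps -> ~{seed_days(15, mbps):.1f} days to seed 15 TB")
```

Tripling the pipe cuts the seed time by roughly a factor of three, which is exactly why Dynamic Capacity helped until our own infrastructure became the bottleneck.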

The final challenge, one I like to call “design hangover”, was all about how to provide an environment to our Mission Partners in which they could adequately test their applications. After whiteboarding option after option, none of which really provided a great answer, I asked a couple of questions. “So, it sounds like we are jumping through huge hoops to give a window to our Mission Partners, what happens in a real disaster? Do we have to go through all this?”

“No, because prod won’t exist. You don’t have to worry about duplicate addressing, you don’t have to worry about changing IPs, you just see the recovered data. Look, I can show you right now, I can log into our portal at Bluelock and show you our data from my laptop.”

“So? We are going through all this, so our Mission Partners can go to the Business Continuity Site and test their applications? If it were a real disaster, they can go to the site and see their applications, no problem? What if we just let them come to this conference room, access their applications through your laptop and test? Would that be a valid test?”

“Well, yeah...we thought they had to test from the BC site.” (Translated that means “because we always did it that way”). I offered to raise it with the rest of the executive team, but I thought they would much rather have their teams walk down the hall to a conference room to test, than to drive across town and test.

Sure enough, they were all for it!

If only we had been done before the blizzard of 2014!! Our results were phenomenal. First, we had true network segregation between our DR environment and production. Second, our Recovery Time Objective (RTO) was under two hours!! (Disclaimer: our SLA was actually four hours on some servers and eight hours on others, but the whole thing finished in under two hours: 100 VMs and 15 TB of data.) Third, our Recovery Point Objective (RPO) was THIRTY SECONDS! Yes, an RTO of two hours and an RPO of 30 seconds of data loss. Fourth, our system architect and our system admin did absolutely nothing! Our CFO called Bluelock...gave them his code...and hung up the phone. Two hours later, our System Architect’s phone rang. “Your recovery instance is ready to test.” BOOM! That’s it! I’ve been around long enough to know there is no such thing as a silver bullet in IT, but this was pretty damn close.

Oh, and one more benefit? The response time of the applications during the test, using our recovered instance sitting in Las Vegas, was FASTER than the response time of production sitting 30 feet away in our server room!

Another C-F Project off the books! We now spend longer dreaming up the scenario for the re-enactment than we do preparing for, and executing, the DR test. So, what have our system architect and system admin done with their extra time? How about spending time in Retail to understand the business needs and designing solutions for a queuing system to speed up the checkout lines, or designing the in-store digital displays for mission messaging throughout the store, or redesigning the power delivery to the POS systems to provide extra run-time for less money, or designing SIP trunking for our VOIP system to provide call tree capabilities...or...or…

And what of that Blizzard of ‘14? We were lucky! Power was restored shortly after noon on the first day (thank YOU IPL!), before the generator was even connected. We dodged that bullet and now we are armed with a silver bullet!

Next month, we will explore a project that did more to take us to a Value-add revenue generating partner than just about any other project. Amplify Your Value: Just Another Spoke on the Wheel.

The series “Amplify Your Value” explores our five-year plan to move from an ad hoc, reactionary IT department to a value-add, revenue-generating partner. #AmplifyYourValue

Author’s note: in the interest of full transparency, and to paraphrase the old Remington shaver commercial from the ’70s, “I like it so much, I joined the company.” In October of this year, I will leave Goodwill to join Bluelock as the EVP of Product and Service Development. My vision is to help other companies experience the impact Goodwill has felt through this partnership.

We could not have made this journey without the support of several partners, including, but not limited to: Bluelock, Level 3 (TWTelecom), Lifeline Data Centers, Netfor, and CDW. (mentions of partner companies should be considered my personal endorsement based on our experience and on our projects and should NOT be considered an endorsement by my company or its affiliates).

Jeffrey Ton is the SVP and Chief Information Officer for Goodwill Industries of Central Indiana, providing vision and leadership in the continued development and implementation of the enterprise-wide information technology and marketing portfolios, including applications, information & data management, infrastructure, security and telecommunications.

Find him on LinkedIn.

Follow him on Twitter (@jtonindy)

Add him to your circles on Google+

Check out more of his posts on Intel's IT Peer Network

Read more from Jeff on Rivers of Thought


It never ceases to amaze me the way technology moves forward at an incredible rate and then just seems to stop. Take the aeroplane industry, for example: in a span of 60 years, what started as the Wright brothers in Ohio demonstrating an idea evolved into more than 90,000 people a day using air travel to reach their next destination. Yet, in the last 50 years, unless you are one of those people who like to spend their Saturdays at the end of an aeroplane runway, you would struggle to tell the difference between a 747 made today and one made in the 1960s. In many ways, standing still as an industry can be seen as an achievement, but I don’t think it is.

Office technology went through a similar revolution. In the mid-80s, if someone had something urgent to relay, they could always send a Telex. However, the Telex system did come with constraints. You could only send a Telex if the person you were sending the message to had one. You also had to call beforehand for their number, and confirm the Telex was switched on and had paper in it. Unsurprisingly, there was a demand for change. By the 1990s everyone was happily typing away on desktop PCs. The ever-powerful backspace key was a welcome replacement for the small pots of emulsion used to correct mistakes. We had seen the future and there was no stopping innovation…or so it seemed.

Unfortunately, as time passed it became clear that office automation technology was no longer improving in the revolutionary ways it had before. The office technology industry, like the aeroplane industry, has changed very little since its first leaps of innovation. It would be naïve to think there are no more efficiencies to be had; that simply isn’t true. I have always been frustrated at how the first 10 minutes of every meeting are spent plugging in and fiddling with cables, followed shortly by meeting members holding their breath, willing an image to appear on the screen in front of them. These cables wield so much power that many of my own customers employ workers whose sole job is to plug cables into devices.


For the last few weeks, I have been using WiDi Pro and Unite, two products designed to fix these irritations. Technology is always looking for problems to solve, and these two tools are just that: problem solvers. WiDi Pro offers a better alternative to those clunky cords sitting on top of meeting room tables, and it differs from previous versions in an important way: it works. The problems of its dysfunctional parents have been solved.

Unite is an application that implements meeting room collaboration the way it should be implemented. With remote users able to come and go and seamless sharing of voice, video, and whiteboards, it is a game changer. Unite supports all sorts of technologies which, until now, were thought to be incompatible. Configuring these devices couldn’t be easier either: in less than a minute, a locked-down corporate laptop can be part of a meeting from the future. I am a natural sceptic about most things, but even I was impressed with these new technologies. I feel I have, in some ways, glimpsed the future. I am not alone in these sentiments either; so far every customer that has experienced the new WiDi Pro and Unite has wholeheartedly agreed with me.

Perhaps office technology will follow a track similar to commercial aviation in its lack of innovation, but I suspect it will not. With WiDi Pro and Unite, meeting rooms are about to change, and unlike the disappointing way aviation has evolved, I think it will be for the better.

Learn more about WiDi Pro and Unite and revolutionize your office technology.

To continue this conversation, find me on LinkedIn or Twitter.

Even though 2015 is not yet over, I truly believe it’s been a successful year for Intel’s Non-Volatile Memory Solutions Group. At IDF 15 we had the opportunity to share an amazing story about 3D XPoint™ memory technology, including real life demos of upcoming products with Rob Crook on stage.


IDF 15 is now finished, and many people attended. For those who couldn’t make it, we published all of our content online with free access for everybody, including the presentations and technical labs delivered during the event. Check out the materials from all sessions here.


I’m proud of the lab that Zhdan Bybin and I delivered there. It is highly practical and technical, and focuses on how exactly to use NVMe products in Linux and Windows environments. The beauty of NVMe is that it applies to today’s Intel® SSD DC P3700/P3600/P3500 and P3608 Series and Intel® SSD 750 Series, as well as to future generations of NVMe-based products. The lab covers initial system setup and configuration, benchmarking with FIO (and FIO Visualizer, www.01.org/fio-visualizer) and IOmeter, managing Intel® RSTe (MDADM SW RAID extensions for Linux), running block trace analysis (www.01.org/ioprof), and much more.  Use it as a reference for deploying this exciting technology. My hope is that it will help teams adopt NVMe more quickly.
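As a taste of the kind of exercise the lab walks through, here is a small Python sketch that launches a single fio run against an NVMe namespace and pulls IOPS and mean completion latency out of fio's JSON output. This is not the lab's own script: the device path is a placeholder, and the JSON field names can differ slightly between fio versions.

```python
# Sketch: drive one fio run against an NVMe namespace and parse IOPS and mean
# completion latency from its JSON output. The device path is a placeholder
# (pick an unused namespace; its data will be overwritten), and JSON field
# names vary a little across fio versions, so treat this as a starting point.
import json
import subprocess

DEVICE = "/dev/nvme0n1"   # placeholder test device

cmd = [
    "fio",
    "--name=rand4k_qd32",
    f"--filename={DEVICE}",
    "--ioengine=libaio", "--direct=1",
    "--rw=randread", "--bs=4k", "--iodepth=32",
    "--runtime=60", "--time_based=1",
    "--output-format=json",
]
result = json.loads(subprocess.run(cmd, capture_output=True, text=True, check=True).stdout)

job = result["jobs"][0]["read"]
iops = job["iops"]
# Recent fio releases report completion latency in nanoseconds under "clat_ns";
# older releases use "clat" in microseconds.
lat_us = job.get("clat_ns", {}).get("mean", 0) / 1000 or job.get("clat", {}).get("mean", 0)
print(f"random 4k read, QD32: {iops:,.0f} IOPS, mean completion latency {lat_us:.1f} µs")
```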


You can find the lab via the link provided above (type SSDL001 into the search bar) or in the PDF attached to this blog.

This is for the tireless enterprise security folks out there, working every day to protect the computing environment.  Do you feel you are making progress in climbing the mountain towards security nirvana or just spinning in the hamster wheel?  The corporate security strategy is the plan which determines your fate.


Does your enterprise have a security strategy and is it leveraging the security organization in an efficient and effective way?  Here is a quick slide-deck to challenge your security strategy.  It is based on a recent blog, 5 Questions to Prove a Cyber Strategy, but includes more insights, supporting data links, and frankly, it just looks much better.



The role of the CIO and IT professionals has changed significantly in the last decade. In today’s professional landscape, an IT professional is poised to lead the charge for technological innovation. As David A. Bray, CIO of the Federal Communications Commission, said in an interview with The Washington Post, “We need leaders who do more than keep the trains running on time. CIOs and CEOs can work together to digitally transform how an enterprise operates.”


But, according to CIO magazine’s 2015 State of the CIO survey, CIOs were viewed as business leaders by just 13 percent of colleagues outside of IT and only 30 percent of line-of-business leaders. Obviously, there’s still a significant gap in the C-suite perception of IT. But there’s also a significant opportunity. As any digital professional will tell you, the best way to solve a perception problem is to be more visible. Say goodbye to the IT professional tucked away in the basement and say hello to the age of the social techie.


Teaching Techies to be Socialites


It’s been clear for some time that social media is essential to successful businesses, providing the opportunity to not only serve their customers better, but to learn from them. The same is true for the social IT professional. Through social media, an IT professional is able to engage in and help shape the changing conversation around IT. They’re able to expand their knowledge and skills through peer collaboration and partnerships born online. And, by adopting a more open and collaborative mindset, the social IT professional is able to begin to solve their perception problem.

One CIO leading the charge to bring IT out of the shadows and into the social spotlight is Intel’s very own Kim Stevenson. Ranked as one of the most social CIOs by the Huffington Post in 2015, Kim has long been an advocate of shaking up the IT department and what’s expected of it. As she stated in a Forbes interview, “On the leadership front, I challenged IT to take bigger risks and to move beyond ‘what you know’ to ‘what’s possible.’ IT had gotten into a comfort zone taking small risks and only solving problems we knew how to solve, which yielded incremental improvements. If we were going to meet the needs of the business, we needed to be operating at a higher level of risk.”


Beyond changing the perception of IT, becoming social can provide hungry IT professionals with a personal classroom for learning and innovation, helping them to stay on the cutting edge of the latest technology.


Now that you know why you should get social, it’s time to learn how to get social. In my next blog, I’ll go into how you can kickstart your social persona.


Until then, check out this list of the most social CIOs in 2015. I’d love to hear your thoughts on the benefits of the social CIO and the hurdles that are preventing more CIOs from jumping in. Leave your comments below or continue the conversation on Twitter @jen_aust and on LinkedIn at www.linkedin.com/in/jenniferaust.


I’ve written previously about how a driving factor for refreshing hardware and software should go beyond the security, maintenance and productivity arguments and instead focus on the role IT can play in recruitment and the future success of your business by helping you become more attractive to the best and brightest talent available.


The truth of the matter is that lagging behind in your adoption of new technology could mean that 10 years from now you’re seeing the same faces around your office because all the smart, progressive young folk are going down the road to work for someone else; someone who allows them a more flexible approach to work that includes telecommuting, collaborating in coffee shops or more flexible schedules to enable a much sought-after work-life balance.


There are many forces driving change in today's workplaces and the push to attract talent is just one factor.  Increasing competition from global competitors and threats from disruptive entrants into your market are also causing fundamental changes to the environment in which we work.


To be successful today and into the future, I believe businesses will have to offer dynamic workplaces that provide options for mobile collaboration and the ability to tap into knowledge experts to deliver projects, services or solutions in a more ad hoc or fluid way.  And I believe this future is closer than you think.


I’ve been in this industry for a long time, and I can remember when, upon checking into my hotel room, the first thing I would do was scramble around the desk to wire my laptop into the phone socket in the wall because there wasn’t Wi-Fi.  Being in Europe, I also had to travel with a bag of different phone adapters because every country had a different style of phone jack.


Today, we just open up our laptops and expect Wi-Fi to be there, whether we’re in a hotel, coffee shop or almost everywhere else we find ourselves. I would argue this was a fundamental shift in the way we work and it's one we now take for granted. Heightened connectivity, created and enabled by advances in technology, is being taken to a whole new level as we link to an Internet of Things and further transform how we interact and connect.


We are only at the thin edge of the wedge in terms of what's possible and what is poised to become our new norm.


Imagine walking into a boardroom and instantly but securely connecting to projection and collaboration portals. Now imagine meetings are instantly productive for both people on-site and remote workers because they can all instantly have the same access and visibility whether they're in the boardroom or connecting from off-site locations. I'm talking about delivering on the promise of true mobile collaboration without compromising security.


Next imagine never needing to run around stringing out cables to recharge your devices. We are already piloting wireless charging so when you enter a café or hotel, your laptop can start to charge while it sits next to you on the table. Advances in technology are extending battery life and soon charging cables will be a thing of the past allowing us to truly untether all of our devices.


I suspect that for many businesses one of their largest investments is the building in which they sit today and come together as an organization. As companies look to become more efficient and save costs in a more competitive business environment, I believe these bricks and mortar work environments are about to change dramatically.


But I am not talking of a completely virtual workplace where everyone works remotely and there is no office per se.


We are, fundamentally, social animals, and employees, millennials in particular, thrive in environments where there are high levels of collaboration.  I posit that instead of a completely virtual workplace, we will see workspaces that offer a range of options, from open collaboration spaces to closed rooms for quiet work, supported by work-at-home or remote options, to provide employees with a custom-tailored environment in which they can be most productive.


Now, this might be a little bit of a Nirvana image, but I think we could see a further evolution in a connected business model where collaboration goes beyond the corporate walls and brings together expertise from inside and outside the company in a unified way.


Subject matter experts can be brought together to deliver projects in a secure, highly collaborative environment, allowing smaller companies to tap into expertise they can't afford to have on staff.  I can see a future where that specialized skill set or experience is leveraged by multiple companies to more efficiently utilize the knowledge workers of the future.


I've seen small- and medium-sized businesses in Canada already starting down this path.  They are working seamlessly to appear as a large business to their customers when they are in fact a select group of smaller businesses working together. As Canada looks to increase its export portfolio beyond the US and compete against disruptive international players in their market, an inter-business collaboration model is one that I believe could become more and more prevalent.


Underpinning every discussion about the workplace of the future is a very real focus on security. Hacking and cyber-threats are of significant concern globally and risks are increasing for companies of all sizes. Advances in technology play a role here too with mechanisms to verify identities; to assign or change permissions based on location; to secure lost devices by remotely wiping technology; and to provide collaboration that can seamlessly bring together employees, customers and suppliers without compromising network integrity.


The tools for business transformation are already emerging and starting to shape the workplace of the future; one that leverages truly cross-device, integrated and real time communication; social media to connect communities together; mobility allowing "work" to become a thing not a place; analytics turning data into insights; and cloud computing to allow a secure extension of companies beyond a physical location. 

At its most basic, workplace transformation starts with people and providing them with the tools they need to be productive and effective, and I think we're closer to a major transformation than many might realize. Will you be ready?


Despite the fact that email is a part of daily life (it’s officially middle aged at 40+ years old), for many of us nothing beats the art of real human conversation. So it’s exciting to see how Intel vPro technology can transform the way we do business simply by giving many people what they want and need: the ability to have instant, natural discussions.

In the first of our Business Devices Webinar Series, we focused on the use of technology to increase workplace productivity. This concept, while not new, is now easily within reach thanks to ever more efficient conferencing solutions that include simple ways to connect, display, and collaborate. From wireless docking that makes incompatible wires and cumbersome dongles a thing of the past, to more secure, multifactor authentication on Wi-Fi for seamless login, to scalability across multiple devices and form factors, more robust communications solutions are suddenly possible.


Intel IT experts Corey Morris and Scott McWilliams shared how these improvements are transforming business across the enterprise. One great example of this productivity in action is at media and marketing services company Meredith Corporation. As the publisher of numerous magazines (including Better Homes and Gardens and Martha Stewart Living) and owner of numerous local TV stations across the United States, Meredith needed to keep its 4,500 employees connected, especially in remote locations. Dan Danes, SCCM manager for Meredith, said Intel vPro technology helped boost the company’s IT infrastructure while also reducing worker downtime.


In the 30-minute interactive chat following the presentation, intrigued webinar attendees peppered the speakers with questions. Here are a few highlights from that conversation:


Q: Is this an enterprise-only [technology]? Or will a small business be able to leverage this?

A: Both enterprise and small business can support vPro.


Q: Is there also a vPro [tool] available like TeamViewer, in which the user can press a Help button, get a code over a firewall, and connect?

A: There is MeshCentral, which is similar, but TeamViewer and LogMeIn do not have vPro OOB [out-of-band] capabilities.


Q: How do I get started? How do I contact a vPro expert?

A: Visit https://communities.intel.com/community/itpeernetwork/vproexpert for vPro tips and tricks. Contact a vPro expert at advisors.intel.com.

These interactive chats, an ongoing feature of our four-part webinar series, also offer an opportunity for each participant to win a cool prize just for asking a question. Congratulations to our first webinar winners: James Davis scored a new Dell Venue 8 Pro tablet and Joe Petzold is receiving SMS Audio BioSport smart earbuds with an integrated heart monitor.


Sound interesting? We hope you’ll join us for the second webinar, which will further explore how companies can reduce the total cost of ownership of a PC refresh via remote IT management. If you’ve already registered for the Business Devices Webinar Series, click on the link in the email reminder that you’ll receive a day or two before the event, and tune in October 15 at 10 a.m. PDT. If you want to register, you can do it here.


In the meantime, you can watch the Boost Business Productivity webinar here. Learn how you can boost your business productivity and get great offers at evolvework.intel.com.

Or, if you want to try before you buy, check out the Battle Pack from Insights.

Intel today unveiled the Intel® Solid State Drive (SSD) DC P3608 Series, its highest-performing SSD for the data center, built to eliminate bottlenecks in HPC workflows, accelerate databases, and deliver business insights through real-time analytics.


Intel is already shipping the Intel® SSD DC P3608 Series in high volume to top OEMs, including Cray, who issued the following statement.


“For Cray, Intel® Solid State Drives (SSDs) are an outstanding solution for our high-performance computing customers running high-density, high-speed data applications, and workflows. Cray has a long history of incorporating the latest Intel technologies into our supercomputing solutions, and the DC P3608 Series is another example of how we can jointly address our customers’ most challenging problems.”


The Intel® SSD DC P3608 Series delivers high performance and low latency with NVMe and eight lanes of PCIe 3.0. Here is an example of a study Intel is conducting on a database application.


One thing most DBAs know is that columnstore indexes beat row-based indexes when it comes to compression efficiency, and they produce far less I/O on disk. The result is much faster analytics queries against very large, terabyte-class databases. These indexes are extremely efficient. But does all of this pair well with better hardware?


The answer is yes. Better hardware always matters, just as better engineering wins in automotive for safety, efficiency, and fun to drive.


The same is true of NVMe, the standards-based PCIe solid state drive technology. NVMe-based SSDs are the only kind of PCIe-based SSD that Intel provides, and we did a lot to invent the standard.  Is it fun to run very large TPC-H-like queries against this type of drive? Let me show you.


Here is some data we put together showing the maximum throughput of our new x8 P3608 against our best x4 card, the P3700. To put this into perspective, I also share the SATA versus PCIe run time of the full set of 22 queries defined in the TPC-H specification as implemented in HammerDB.
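For readers who want to reproduce the flavor of these runs outside HammerDB, the following Python sketch times a set of TPC-H queries against SQL Server using pyodbc. HammerDB performs its own driving and timing, so this is only an illustration; the connection string and the queries/ directory layout are assumptions, not part of our test setup.

```python
# Sketch: time a directory of TPC-H queries against SQL Server, in the spirit
# of the HammerDB runs charted below. HammerDB has its own driver and timing;
# this is only a minimal illustration. The connection string and the queries/
# directory (q01.sql .. q22.sql) are placeholders.
import time
from pathlib import Path

import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=tpch;Trusted_Connection=yes;"
)

def run_suite(query_dir="queries"):
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    total = 0.0
    for path in sorted(Path(query_dir).glob("q*.sql")):
        sql = path.read_text()
        start = time.perf_counter()
        cursor.execute(sql)
        cursor.fetchall()          # force full result materialization
        elapsed = time.perf_counter() - start
        total += elapsed
        print(f"{path.stem}: {elapsed:8.1f} s")
    print(f"total:  {total:8.1f} s for all queries")
    conn.close()

if __name__ == "__main__":
    run_suite()
```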


At the bottom of the blog is the link to the entire data sheet of our testing.


PCIe x8 and 4 TB of usable capacity from Intel is now here: on September 23, 2015 we released the new P3608. So how many terabytes of SQL Server warehouse do you want to deploy with this technology? With four x8 cards and 16 TB of flash, you'd be able to support over 40 TB of compressed SQL Server data using the Fast Track Data Warehouse architectures, thanks to the excellent compression available with this architecture.
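A quick sanity check on that capacity math, with the compression ratio inferred from the figures above rather than measured:

```python
# Quick check on the capacity claim: four x8 P3608 cards at 4 TB each give
# 16 TB raw, and "over 40 TB" of compressed warehouse data implies a ratio of
# roughly 2.5:1. The ratio is inferred from the numbers in the post, not a
# measured figure.
cards = 4
tb_per_card = 4
raw_tb = cards * tb_per_card                 # 16 TB of NVMe capacity
assumed_compression = 2.5                    # illustrative columnstore compression ratio
print(f"{raw_tb} TB raw -> ~{raw_tb * assumed_compression:.0f} TB of compressed warehouse data")
```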


Here is the data comparing our new x8 and x4 Intel PCIe drives. To give you some perspective on how much faster PCIe is than SATA, I am also including a graph of the entire suite of queries on PCIe (P3700) versus SATA (S3710).


Here we compare the P3608 to the P3700 for maximum throughput.




Here we compare the P3608 versus the P3700 for query time on the most IO intensive queries.



Finally, to give you some perspective, here is what a SATA drive can do with this kind of SQL Server database. This graph covers all 22 queries, not just the I/O-intensive ones above, and shows the total time to run all queries within HammerDB.


Lower is better.



You can see all the data here.


Prediction capabilities can have tremendous value in the world of security.  They allow for better allocation of resources: instead of trying to defend everything from all types of attacks, they allow a smarter positioning of preventative, detective, and responsive investments to intersect where attacks are likely to occur.


There is a natural progression in security maturity.  First, organizations invest in preventative measures to stop the impacts of attacks.  Quickly they realize not all attacks are being stopped, so they invest in detective mechanisms to identify when an attack successfully bypasses the preventative controls.  Armed with alerts of incursions, they must then establish response capabilities to interdict quickly, minimize losses, and guide the environment back to a normal state of operation.  All these resources are important, but they must potentially cover a vast electronic and human ecosystem.  It simply becomes too large to demand that every square inch be equally protected, updated, monitored, and made recoverable; the amount of resources would be untenable.  The epic failure of the Maginot Line is a great historic example of ineffective overspending.


Prioritization is what is needed to properly align security resources to where they are the most advantageous.  Part of the process is to understand which assets are valuable, but also which are being targeted.  As it turns out, the best strategy is not about protecting everything from every possible attack.  Rather it is focusing on protecting those important resources which are most likely to be attacked.  This is where predictive modeling comes into play.  It is all part of a strategic cybersecurity capability.


“He who defends everything, defends nothing” - Frederick the Great


In short, being able to predict where the most likely attacks will occur provides an advantage in allocating security resources for maximum effect.  The right predictive model can be a force multiplier in adversarial confrontations.  Many organizations are designed around the venerable Prevent/Detect/Recover model (or something similar).  The descriptions have changed a bit over the years, but the premise remains the same: a three-part, introspective defensive structure.  The very best organizations, however, apply analytics and intelligence, including specific aspects of attackers’ methods and objectives, to build Predictive capabilities.  This completes the circular process with a continuous feedback loop that helps optimize all the other areas.  Without it, Prevention attempts to block all possible attacks, and Detection and Response struggle to do the same across the entirety of their domains.  That is not efficient, and therefore not sustainable over time.  With good Predictive capabilities, Prevention can focus on the most likely or riskiest attacks, and the same goes for Detection and Response.  Overall, it aligns the security posture to best resist the threats it faces.


There are many different types of predictive models: actuarial learning models, baseline-anomaly analysis, and my favorite, threat intelligence.  None is uniformly better than the others; each has strengths and weaknesses.  The real world has thousands of years of experience with such models.  The practice has been applied to warfare, politics, insurance, and a multitude of other areas.  Strategists make great use of such capabilities in understanding the best path forward in a shifting environment.


Actuarial learning models are heavily used in the insurance industry, with predictions based upon historical averages of events.  Baseline anomaly analysis is leveraged in the technology, research, and finance fields to identify outliers in expected performance and time-to-failure.  Threat agent intelligence, knowing your adversary, is strongly applied in warfare and other adversarial situations where an intelligent attacker exists.  The digital security industry is just coming to a state of awareness where it sees the potential and value.  Historically, such models suffered from a lack of data quantity and timeliness.  The digital world has both in abundance; so much, in fact, that the quantity is a problem to manage.  But computer security has a different challenge: the rapid advance of technology, which leads to a staggering diversity in the avenues attackers can exploit.  Environmental stability is a key attribute for the accuracy of all such models, and it becomes very difficult to maintain a comprehensive analysis in a chaotic environment where very little remains consistent.  This is where the power of computing can help offset the complications and apply these concepts to the benefit of cybersecurity.
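To ground the baseline-anomaly idea, here is a deliberately tiny Python sketch that learns a baseline for one metric and flags large deviations. Real deployments model far richer features and far more data; the counts below are made up purely for illustration.

```python
# Minimal sketch of baseline-anomaly analysis: learn a baseline for a simple
# metric (say, failed logins per hour) and flag hours that deviate by more
# than a few standard deviations. The data is hypothetical.
import statistics

baseline = [3, 5, 4, 6, 2, 5, 4, 3, 6, 5, 4, 5]   # hypothetical failed-login counts per hour
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag a count more than `threshold` standard deviations above the baseline mean."""
    return (count - mean) / stdev > threshold

for observed in (6, 9, 42):
    label = "ANOMALY" if is_anomalous(observed) else "normal"
    print(f"{observed:>3} failed logins/hour -> {label}")
```

Even this toy version shows the core trade-off: the model is only as good as the stability of the baseline it learned, which is exactly the challenge in a fast-changing computing environment.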


There is a reality which must first be addressed.  Predictive systems are best suited for environments which have already established a solid infrastructure and baseline capabilities.  Most organizations have not yet matured to a point where an investment in predictive analytics is right for them.  You can’t run before you walk.  Many companies are still struggling with the basics of security and good hygiene (understanding their environment, closing the big attack vectors and vulnerabilities, effective training, regulatory compliance, data management, metrics, etc.).  For them, it is better to establish the basics before venturing into enhancement techniques.  But for those who are more advanced, capable, and stable, the next logical step may be to optimize the use of their security resources with predictive insights.  Although a small number of companies are ready and some are already travelling down this path, I think that over time Managed Security Service Providers (MSSPs) will lead the broader charge for widespread, cross-vertical market adoption. MSSPs are in a great position both to establish the basics and to implement predictive models across the breadth of their clients.


When it comes to building and configuring predictive threat tools that tap into vast amounts of data, many hold to the belief that data scientists should lead the programs to understand and locate obscure but relevant indicators of threats.  I disagree.  Data scientists are important for manipulating data and programming the search parameters, but they are not experts in understanding what is meaningful and what the systems should be looking for.  As such, they tend to get mired in circular correlation-causation assumptions.  What can emerge are trends which are statistically interesting yet lack real relevance, or are in some cases misleading.  As an example, most law enforcement agencies do NOT use pure data-correlation methods for crime prediction, as they can lead to ‘profiling’ and then self-fulfilling prophecies.  The models they use are carefully defined by crime experts, not the data scientists.  Non-experts simply lack the knowledge of what to look for and why it might be important.  It is really the experienced security or law-enforcement professional who knows what to consider and who should therefore lead the configuration aspects of the design.  With the security expert’s insights and the data scientist’s ability to manipulate data, the right analytical search structures can be established.  It must be a partnership between those who know what to look for (the expert) and those who can manipulate the tools to find it (the data scientist).


Expert systems can be tremendously valuable, but also a huge sink of time and resources.  Most successful models do their best when analyzing simple environments with a reasonable number of factors and a high degree of overall stability.  The models for international politics, asymmetric warfare attacks, serial-killer profiling, and the like are far from precise.  But being able to predict computer security issues is incredibly valuable, and it appears attainable.  Although much work and learning have yet to be accomplished, the data and processing power are there to support the exercise.  I think the cybersecurity domain might be a very good environment for such systems to eventually thrive, delivering better risk management at scale, for lower cost, and improving the overall experience of their beneficiaries.



Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Recently I was afforded the opportunity to collaborate with the Kroger Co. on a case study regarding their usage of VMware and their Virtual SAN product.  Having spent many a day and night enjoying 4x4 subs and Krunchers Jalapeño (no more wimpy) chips during my days at Virginia Tech courtesy of the local Kroger supermarket, I was both nostalgic and intrigued.  Couple that with the fact that I am responsible for qualifying Intel® Solid State Drives (SSDs) for use in Virtual SAN, and it was really a no-brainer to participate.


One of the many eye-openers from this experience was just how large an operation the Kroger Co. runs.  They are the largest grocery retailer in the United States, with over 400,000 employees spanning more than 3,000 locations.  The company has been around since 1883 and had 2014 sales in excess of $108 billion. I spent roughly ten years of my career here at Intel in IT, and this was a great opportunity to gain insight, commiserate, and compare notes with another large company that surely has challenges I can relate to.

As it turns out, and unsurprisingly, the Kroger Co. is heavily invested in virtualization, with tens of thousands of virtual machines deployed and internal cloud customers numbering in the thousands.  Their virtualized environment is powering critical lines of business, including manufacturing and distribution, pharmacies, and customer loyalty programs.

Managing the storage for this virtualized environment using a traditional architecture, with centralized storage backing the compute clusters, presented issues at this scale. To achieve desired performance targets, Kroger had to resort to all-flash fiber channel SAN implementations rather than hybrid (tiered) SANs.  To be clear, these functioned, but they were in direct opposition to the goal of reducing capital costs. This led Kroger to begin looking at software-defined storage solutions as an alternative.  The tenets of their desired storage implementation were the ability to scale quickly, consistent QoS and performance on par with existing SAN-based solutions, and reduced cost.  No small order, to be sure.

All-Flash Fiber Channel SAN performance, at about 1/5th the cost

Kroger evaluated multiple technologies and eventually settled on Virtual SAN from VMware running in an all-flash configuration.  Here is where the other eye-opening findings came to light.  Kroger found that their building-block solution for Virtual SAN, which includes the Intel® SSD Data Center Family for NVMe, offered IOPS performance within 8% of all-flash fiber channel SAN at about 1/5th the expense, as illustrated by the chart below.

IOPS, Cost, and Data Center Footprint Comparison


This same solution also offered latency characteristics within 3% of all-flash fiber channel SAN, while using approximately 1/10th the footprint in their data centers.

Latency, Cost, and Data Center Footprint Comparison


Key Takeaways

For the Kroger Co., the benefits of their Virtual SAN-based solution are clear:

  • Hyper-converged: Virtual SAN yields a roughly 10x reduction in footprint
  • Performance: minimal delta of 8% compared to all-flash fiber channel SAN
  • Cost: approximately 20% of the alternative all-flash fiber channel SAN solution


I wish we had solutions like this on the table during my days in IT; these are exciting times to witness.

Making a large investment in buying new servers and adding computing power to your data center is not a good thing if you are not able to maximize the return on investment. Have you ever considered that software can be the bottleneck for your data center performance? I came across an interesting case about how Tencent achieved significant results in storage system performance through software and infrastructure optimization.


Most of you are probably familiar with Tencent, one of China’s top Internet service providers. Its popular products like QQ instant messenger* and Weixin*, as well as its online games, have become household names among active online users in the country.



With the popularity of its social media products and a massive user base numbering in the hundreds of millions, it is not surprising that Tencent needs to process and store lots and lots of data, such as images, video, mail, and documents created by its users. If you are a user of Tencent’s products, you are likely contributing your photos and downloading your friends’, too. To manage such needs, Tencent uses a self-developed file system, the Tencent File System* (TFS*).


Previously, Tencent utilized a traditional triple-redundancy backup solution, which was not efficient in its use of storage. The storage media was found to be a major cost factor of TFS. As a result, Tencent decided to implement an erasure-code solution using the Jerasure* open source library running on Intel® architecture-based (IA-based) servers.
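To see why the change matters for storage efficiency, here is a small Python illustration comparing triple replication with a few example k+m erasure-code layouts. The specific k and m values are assumptions; the case study reports roughly a 60 percent reduction in storage space without naming the exact scheme.

```python
# Rough illustration of why the switch paid off: triple replication stores
# every byte three times, while a k-data, m-parity erasure code stores
# (k + m) / k times the data. The k and m values below are examples only.
def overhead(k, m):
    """Storage multiplier for a k-data, m-parity erasure code."""
    return (k + m) / k

replication = 3.0                                   # triple redundancy
for k, m in [(6, 3), (10, 4), (12, 3)]:             # illustrative erasure-code layouts
    ec = overhead(k, m)
    saving = 1 - ec / replication
    print(f"EC {k}+{m}: {ec:.2f}x raw vs {replication:.0f}x -> ~{saving:.0%} less raw storage")
```

The trade-off, as the next paragraph shows, is that encoding and decoding shift the load from disks to CPUs, which is exactly where Tencent's bottleneck appeared.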


As Tencent engineers validated the new solution, they noticed the computing performance was lower than the I/O throughput of the storage and network subsystems. In other words, the storage servers were not able to compute the data as fast as the I/O subsystem could move it. Adding more compute power might appear to be an obvious but costly solution. Instead, Tencent used software optimization tools like Intel® VTune™ Amplifier XE and the Intel® Intelligent Storage Acceleration Library to identify inefficient code in its system and optimize it for Intel® Xeon® processor-based servers. The results were very effective, and the bottleneck of system performance moved to the I/O subsystems. This was then easily addressed when Tencent migrated to a 10 Gigabit network using the Intel® Ethernet 10 Gigabit Converged Network Adapter.


As a result of the cost-effective optimization effort, Tencent was able to get the most out of the storage system it deployed. Tencent found the optimized erasure-code solution reduced storage space by 60 percent and enhanced storage performance by about 20 times, while the I/O performance of TFS improved by 2.8 times. With cold data now being processed using the new TFS system, Tencent has saved significant server resources and raised the performance-price ratio of its storage.


The new solution contributed to more than performance and user experience: Tencent is also saving hundreds of kilowatts of energy, as it no longer needs to purchase thousands of servers to meet its storage needs.


The next time you access a Tencent product, you will know the effort Tencent engineers have put into improving your experience. If you are interested in the details of how Tencent optimized its software and removed the bottlenecks in its storage system, and in the results they achieved, you can read the complete case study.


Do you have an interesting software optimization story to share?

The world is facing a growing problem: as people’s everyday lives become more digital, our reliance on cybersecurity to protect our interests increases, yet there are not enough security professionals to meet the rising demand.  This leaves gaps in the security of the companies and organizations we share information with.  There is hope on the horizon.  Academia is adjusting to train more graduates, and there is rising interest among students in studying the variety of cybersecurity domains.  But more students are needed, as demand is far outpacing the expected rise in available talent.


All the right elements are in place.  Pay for cybersecurity is on the rise, the need for an estimated 1.5 million additional professionals is already growing, and higher education institutions are working collaboratively to establish the training infrastructure necessary to prepare the next generation of security professionals for success.  What is missing are the necessary numbers of students.  There simply are not enough.


The good news is that millennials are interested, but they need more information before committing.  Survey results from the Raytheon-NCSA Millennial report show that the factor most likely to increase prospective students’ interest is access to data and expert guidance explaining what these jobs actually entail.


[Image: Interest in Cybersecurity Careers]
Providing basic career information is entirely possible, but it is not as simple as it may seem.  Job roles morph rapidly; some data suggests that as often as every nine months, security professionals see their role, expectations, and focus shift into new areas or change radically.  With such a rapid rate of change, cybersecurity is truly a dynamic domain where responsibilities are fluid.  This is unlikely to put off prospective millennials, a generation that embraces variety; it may in fact add to the attractiveness of these careers.  Combined with strong employability and excellent pay, the industry should have no problem filling seats in universities.


What is needed right now is for experienced professionals to step up and work with educational institutions to explain these roles and responsibilities to the pool of prospective students.  Open forums, virtual meetings, presentations, in-class instruction, and even simple question-and-answer sessions can go a long way toward painting a vivid picture of our industry and the opportunities and challenges that await.  The community should work together to attract applicants to the cyber sciences, especially women and underrepresented minorities, who can bring fresh ideas and perspectives.  I urge higher education institutions to reach out to security professionals and ask for help.  Many are willing to share their perspectives and industry knowledge to inform students and encourage those who might be interested in a career in cybersecurity.  Only together can the private sector and academia meet the need for the next generation of security professionals.



Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

[Image: Protect us from Cybercrime]
Governments are having to catch up with the digital revolution to fulfill their role in providing for the common defense.  The world is changing, and longstanding definitions of responsibilities, rules, and jurisdictions have not kept pace with the adoption of technology.  One of the traditional roles of government is to defend its citizens and their property; constitutions, laws, and courts define these roles and place boundaries around them.  With the rapid onset of digital technology, people are communicating more, and in new ways, creating massive amounts of information that is being collected and aggregated.  Digital assets and data are themselves becoming valuable.  Traditional policies and controls are neither suited nor sufficient to protect citizens’ information, and governments are reacting to address the gaps.  This adaptation is pushing the boundaries of scope and, in some cases, redefining limitations and precedents derived from an analog era.  Flexing to encompass the digital domain within the scope of protection is necessary to align with the expectations of the people.


Such change, however, is slow.  One of the loudest criticisms is the speed at which governments can adapt to sufficiently protect their citizens.  Realistically, change must be gradual, as boundaries are tested and redrawn.  Under representative rule, there is a balance between the rights of the citizen and the powers of the government.  Moving too quickly can upset this balance to the detriment of liberty and produce unpleasant outcomes.  Move too slowly and the masses become victimized, building outcry and dissatisfaction with the state of security.  Bureaucracy is the gatekeeper that keeps the pendulum from swinging too fast.


“The only thing that saves us from the bureaucracy is its inefficiency.” – Eugene McCarthy


The writing is on the wall. Citizens expect government to play a more active role in protecting their digital assets and privacy, and governments are responding. Change is coming across the industry, fueled by litigation and, eventually, regulatory penalties. Every company, regardless of type, will need to pay much more attention to its cybersecurity.


Regulatory standards and oversight roles are being defined as part of the legal structure, and government agencies are asserting more power to establish and enforce cybersecurity standards.  Recently, the U.S. Court of Appeals for the Third Circuit upheld the U.S. Federal Trade Commission’s action against companies that had suffered data breaches, reaffirming the FTC’s authority to hold companies accountable for failing to safeguard consumer data.  The judicial branch interpreted the law in a way that supports the FTC’s assertion of its role in the digital age.


Litigation precedents, which act as guiding frameworks, are also being challenged and adapted to shape responsibility and accountability for customer data.  The long-term ramifications of potential misuse of digital assets and personal data are being considered and weighed in favor of consumers.  In a recent case, defendants moved to dismiss a class action but were unsuccessful: the court cited a failure in the “duty to maintain adequate security,” which allowed the action to continue.  The defendants argued that the plaintiffs suffered no actual injury, but the court rejected those arguments, stating that the loss of sensitive personal data was “…sufficient to establish a credible threat of real and immediate harm, or certainly impending injury.”


In separate cases, the Seventh and Ninth Circuits concluded that victims have a legal right to sue over the long-term consequences of a data breach.  Beyond reimbursement for fraudulent charges, the courts found that even members of the class who had not experienced near-term damages face a likelihood of fraud in the future.  As one court stated, “customers should not have to wait until hackers commit identity theft or credit-card fraud in order to give the class standing."  Experts believe this shift in litigation precedent is likely to lead to an increase in data breach class actions in cases involving hacking.


This is the macro trend I see: governments are stepping up to fill the void where protective oversight does not exist or where citizens are not empowered to hold accountable those who have been negligent in protecting their data.  The digital realm has grown so rapidly, and encompasses citizens’ lives so deeply, that governments accept they need to adapt legal structures to protect their populace, but they are struggling with how to make that a reality.  We will see more of this redefinition across governmental structures worldwide over the next several years as a legal path is forged and tempered.

Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist
