
Even though 2015 is not yet over, I truly believe it’s been a successful year for Intel’s Non-Volatile Memory Solutions Group. At IDF 15 we had the opportunity to share an amazing story about 3D XPoint™ memory technology, including real-life demos of upcoming products with Rob Crooke on stage.


IDF 15 is now over and was well attended. For those who couldn’t make it, we have published all of our content online with free access for everybody, including the presentations and technical labs delivered during the event. Check out the materials from all sessions here.


I’m proud of the lab that Zhdan Bybin and I delivered there. It is highly practical, technical, and focused on exactly how to use NVMe products in Linux and Windows environments. The beauty of NVMe is that it applies to today’s Intel® SSD DC P3700/P3600/P3500 and P3608 Series and Intel® SSD 750 Series, as well as future generations of NVMe-based products. The lab covers initial system setup and configuration, benchmarking with FIO (and FIO Visualizer, www.01.org/fio-visualizer) and Iometer, managing Intel® RSTe (MDADM SW RAID extensions for Linux), running block trace analysis (www.01.org/ioprof), and much more. Use it as a reference for deploying this exciting technology. My hope is that it will help teams adopt NVMe more quickly.
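To give a flavor of the kind of benchmarking the lab walks through, here is a minimal sketch (not taken from the lab material) that shells out to fio for a 4K random-read test and reports IOPS. The device path, runtime, and queue depths are assumptions you would adjust for your own system, and the test reads the raw device, so run it only against a drive you can safely touch.

```python
# Minimal sketch: run a 4K random-read fio job against an NVMe device and print IOPS.
# Assumes fio is installed, you have permission to read the raw device, and
# /dev/nvme0n1 is a drive you are allowed to benchmark (adjust for your system).
import json
import subprocess

DEVICE = "/dev/nvme0n1"   # assumption: first NVMe namespace on the box

cmd = [
    "fio",
    "--name=randread-4k",
    f"--filename={DEVICE}",
    "--rw=randread",            # random reads only, nothing is written to the drive
    "--bs=4k",                  # 4 KiB blocks, the classic IOPS measurement
    "--ioengine=libaio",        # Linux asynchronous I/O
    "--iodepth=32",             # outstanding I/Os per job
    "--numjobs=4",              # four parallel workers
    "--direct=1",               # bypass the page cache
    "--runtime=60", "--time_based",
    "--group_reporting",
    "--output-format=json",
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
report = json.loads(result.stdout)
read_iops = report["jobs"][0]["read"]["iops"]
print(f"4K random read: {read_iops:,.0f} IOPS")
```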


You can find the lab via the link provided above (type SSDL001 into the search bar) or in the PDF attached to this blog.

This is for the tireless enterprise security folks out there, working every day to protect the computing environment.  Do you feel you are making progress in climbing the mountain towards security nirvana or just spinning in the hamster wheel?  The corporate security strategy is the plan which determines your fate.


Does your enterprise have a security strategy and is it leveraging the security organization in an efficient and effective way?  Here is a quick slide-deck to challenge your security strategy.  It is based on a recent blog, 5 Questions to Prove a Cyber Strategy, but includes more insights, supporting data links, and frankly, it just looks much better.



The role of the CIO and IT professionals has changed significantly in the last decade. In today’s professional landscape, an IT professional is poised to lead the charge for technological innovation. As David A. Bray, CIO of the Federal Communications Commission, said in an interview with The Washington Post, “We need leaders who do more than keep the trains running on time. CIOs and CEOs can work together to digitally transform how an enterprise operates.”


But, according to CIO magazine’s 2015 State of the CIO survey, CIOs were viewed as business leaders by just 13 percent of colleagues outside of IT and only 30 percent of line-of-business leaders. Obviously, there’s still a significant gap in the C-suite perception of IT. But there’s also a significant opportunity. As any digital professional will tell you, the best way to solve a perception problem is to be more visible. Say goodbye to the IT professional tucked away in the basement and say hello to the age of the social techie.


Teaching Techies to be Socialites


It’s been clear for some time that social media is essential to successful businesses, providing the opportunity to not only serve their customers better, but to learn from them. The same is true for the social IT professional. Through social media, an IT professional is able to engage in and help shape the changing conversation around IT. They’re able to expand their knowledge and skills through peer collaboration and partnerships born online. And, by adopting a more open and collaborative mindset, the social IT professional is able to begin to solve their perception problem.

One CIO leading the charge to bring IT out of the shadows and into the social spotlight is Intel’s very own Kim Stevenson. Ranked as one of the most social CIOs by the Huffington Post in 2015, Kim has long been an advocate of shaking up the IT department and what’s expected of it. As she stated in a Forbes interview, “On the leadership front, I challenged IT to take bigger risks and to move beyond ‘what you know’ to ‘what’s possible.’ IT had gotten into a comfort zone taking small risks and only solving problems we knew how to solve, which yielded incremental improvements. If we were going to meet the needs of the business, we needed to be operating at a higher level of risk.”


Beyond changing the perception of IT, becoming social can provide hungry IT professionals with a personal classroom for learning and innovation, helping them to stay on the cutting edge of the latest technology.


Now that you know why you should get social, it’s time to learn how to get social. In my next blog, I’ll go into how you can kickstart your social persona.


Until then, check out this list of the most social CIOs in 2015. I’d love to hear your thoughts on the benefits of the social CIO and the hurdles that are preventing more CIOs from jumping in. Leave your comments below or continue the conversation on Twitter @jen_aust and on LinkedIn at www.linkedin.com/in/jenniferaust.


I’ve written previously about how the case for refreshing hardware and software should go beyond the usual security, maintenance, and productivity arguments and focus instead on the role IT can play in recruitment and the future success of your business by making you more attractive to the best and brightest talent available.


The truth of the matter is that lagging behind in your adoption of new technology could mean that 10 years from now you’re seeing the same faces around your office because all the smart, progressive young folk are going down the road to work for someone else; someone who allows them a more flexible approach to work that includes telecommuting, collaborating in coffee shops or more flexible schedules to enable a much sought-after work-life balance.


There are many forces driving change in today's workplaces and the push to attract talent is just one factor.  Increasing competition from global competitors and threats from disruptive entrants into your market are also causing fundamental changes to the environment in which we work.


To be successful today and into the future, I believe businesses will have to offer dynamic workplaces that provide options for mobile collaboration and the ability to tap into knowledge experts to deliver projects, services or solutions in a more ad hoc or fluid way.  And I believe this future is closer than you think.


I’ve been in this industry for a long time, and I can remember when, upon checking into my hotel room, the first thing I would do was scramble around the desk to wire my laptop to the phone socket in the wall because there was no Wi-Fi. Being in Europe, I also had to travel with a bag of adapters because every country had a different style of phone jack.


Today, we just open up our laptops and expect Wi-Fi to be there, whether we’re in a hotel, coffee shop or almost everywhere else we find ourselves. I would argue this was a fundamental shift in the way we work and it's one we now take for granted. Heightened connectivity, created and enabled by advances in technology, is being taken to a whole new level as we link to an Internet of Things and further transform how we interact and connect.


We are only at the thin edge of the wedge in terms of what's possible and what is poised to become our new norm.


Imagine walking into a boardroom and instantly but securely connecting to projection and collaboration portals. Now imagine meetings are instantly productive for both people on-site and remote workers because they can all instantly have the same access and visibility whether they're in the boardroom or connecting from off-site locations. I'm talking about delivering on the promise of true mobile collaboration without compromising security.


Next imagine never needing to run around stringing out cables to recharge your devices. We are already piloting wireless charging so when you enter a café or hotel, your laptop can start to charge while it sits next to you on the table. Advances in technology are extending battery life and soon charging cables will be a thing of the past allowing us to truly untether all of our devices.


I suspect that for many businesses one of their largest investments is the building in which they sit today and come together as an organization. As companies look to become more efficient and save costs in a more competitive business environment, I believe these bricks and mortar work environments are about to change dramatically.


But I am not talking of a completely virtual workplace where everyone works remotely and there is no office per se.


We are, fundamentally, social animals, and employees, millennials in particular, thrive in environments with high levels of collaboration. I posit that instead of a completely virtual workplace, we will see workspaces that offer a range of options, from open collaboration spaces to closed rooms for quiet work, supported by work-at-home or remote options, to provide employees with a custom-tailored environment in which they can be the most productive.


Now, this might be a little bit of a Nirvana image, but I think we could see a further evolution in a connected business model where collaboration goes beyond the corporate walls and brings together expertise from inside and outside the company in a unified way.


Subject matter experts can be brought together to deliver projects in a secure, highly collaborative environment, allowing smaller companies to tap into expertise they can't afford to have on staff. I can see a future where that specialized skill set or experience is leveraged by multiple companies to more efficiently utilize the knowledge workers of the future.


I've seen small- and medium-sized businesses in Canada already starting down this path.  They are working seamlessly to appear as a large business to their customers when they are in fact a select group of smaller businesses working together. As Canada looks to increase its export portfolio beyond the US and compete against disruptive international players in their market, an inter-business collaboration model is one that I believe could become more and more prevalent.


Underpinning every discussion about the workplace of the future is a very real focus on security. Hacking and cyber-threats are of significant concern globally and risks are increasing for companies of all sizes. Advances in technology play a role here too with mechanisms to verify identities; to assign or change permissions based on location; to secure lost devices by remotely wiping technology; and to provide collaboration that can seamlessly bring together employees, customers and suppliers without compromising network integrity.


The tools for business transformation are already emerging and starting to shape the workplace of the future; one that leverages truly cross-device, integrated and real time communication; social media to connect communities together; mobility allowing "work" to become a thing not a place; analytics turning data into insights; and cloud computing to allow a secure extension of companies beyond a physical location. 

At its most basic, workplace transformation starts with people and providing them with the tools they need to be productive and effective, and I think we're closer to a major transformation than many might realize. Will you be ready?


Despite the fact that email is a part of daily life (it’s officially middle aged at 40+ years old), for many of us nothing beats the art of real human conversation. So it’s exciting to see how Intel vPro technology can transform the way we do business simply by giving many people what they want and need: the ability to have instant, natural discussions.

In the first of our Business Devices Webinar Series, we focused on the use of technology to increase workplace productivity. This concept, while not new, is now easily within reach thanks to ever more efficient conferencing solutions that include simple ways to connect, display, and collaborate. From wireless docking that makes incompatible wires and cumbersome dongles a thing of the past, to more secure, multifactor authentication on Wi-Fi for seamless login, to scalability across multiple devices and form factors, more robust communications solutions are suddenly possible.


Intel IT experts Corey Morris and Scott McWilliams shared how these improvements are transforming business across the enterprise. One great example of this productivity in action is at media and marketing services company Meredith Corporation. As the publisher of numerous magazines (including Better Homes & Gardens and Martha Stewart Living) and owner of numerous local TV stations across the United States, Meredith needed to keep its 4,500 employees connected, especially in remote locations. Dan Danes, SCCM manager for Meredith, said Intel vPro technology helped boost the company’s IT infrastructure while also reducing worker downtime.


In the 30-minute interactive chat following the presentation, intrigued webinar attendees peppered the speakers with questions. Here are a few highlights from that conversation:


Q: Is this an enterprise-only [technology]? Or will a small business be able to leverage this?

A: Both enterprise and small business can support vPro.


Q: Is there also a vPro [tool] available like TeamViewer, in which the user can press a Help button, get a code over a firewall, and connect?

A: There is MeshCentral, which is similar, but TeamViewer and LogMeIn do not have vPro OOB [out-of-band] capabilities.


Q: How do I get started? How do I contact a vPro expert?

A: Visit https://communities.intel.com/community/itpeernetwork/vproexpert for vPro tips and tricks. Contact a vPro expert at advisors.intel.com.

These interactive chats, an ongoing feature of our four-part webinar series, also offer an opportunity for each participant to win a cool prize just for asking a question. Congratulations to our first webinar winners: James Davis scored a new Dell Venue 8 Pro tablet and Joe Petzold is receiving SMS Audio BioSport smart earbuds with an integrated heart monitor.


Sound interesting? We hope you’ll join us for the second webinar, which will further explore how companies can reduce the total cost of ownership of a PC refresh via remote IT management. If you’ve already registered for the Business Devices Webinar Series, click on the link in the email reminder that you’ll receive a day or two before the event, and tune in October 15 at 10 a.m. PDT. If you want to register, you can do it here.


In the meantime, you can watch the Boost Business Productivity webinar here. Learn how you can boost your business productivity and get great offers at evolvework.intel.com.

Or, if you want to try before you buy, check out the Battle Pack from Insights.

Intel today unveiled the Intel® Solid State Drive (SSD) DC P3608 Series, its highest-performing SSD for the data center, built to eliminate bottlenecks in HPC workflows, accelerate databases, and deliver business insights through real-time analytics.


Intel is already shipping the Intel® SSD DC P3608 Series in high volume to top OEMs, including Cray, who issued the following statement.


“For Cray, Intel® Solid State Drives (SSDs) are an outstanding solution for our high-performance computing customers running high-density, high-speed data applications, and workflows. Cray has a long history of incorporating the latest Intel technologies into our supercomputing solutions, and the DC P3608 Series is another example of how we can jointly address our customers’ most challenging problems.”


The Intel® SSD DC P3608 Series delivers high performance and low latency with NVMe and eight lanes of PCIe 3.0. Here is an example of a study Intel is conducting on a database application.


One thing most DBAs know is that column-based indexes beat row-based indexes for compression efficiency and generate far less I/O on disk, enabling much faster analytics queries against very large, terabyte-class databases. These indexes are extremely efficient. But does all of this pair well with better hardware?


The answer is yes. Better hardware always matters, just as better engineering wins in automotive for safety, efficiency, and driving enjoyment.


The same is true of NVMe, the standards-based technology for PCIe solid state drives. NVMe-based SSDs are the only kind of PCIe SSD that Intel provides, and we did a lot to invent the standard. Is it fun to run very large TPC-H-like queries against this type of drive? Let me show you.


Here is some data we put together showing the maximum throughput of our new x8 P3608 against our best x4 card, the P3700. To put this into perspective, I also share the SATA-versus-PCIe run time for all 22 queries in the TPC-H specification as implemented in HammerDB.


At the bottom of the blog is the link to the entire data sheet of our testing.


PCIe x8 and 4 TB of usable capacity from Intel are now here: we released the new P3608 on September 23, 2015. So how many terabytes of SQL Server data warehouse do you want to deploy with this technology? With four x8 cards and 16 TB of raw capacity, you could support over 40 TB of compressed SQL Server data using the Fast Track Data Warehouse architectures, thanks to the excellent compression available with this design.
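For readers who want to experiment with the columnstore-plus-NVMe combination themselves, here is a minimal sketch, with a hypothetical connection string and table name, that builds a clustered columnstore index and times a TPC-H Q1-style aggregate through pyodbc. It is only an outline of the idea, not the HammerDB methodology used for the results below.

```python
# Minimal sketch (hypothetical driver, server, database, and table names): build a
# clustered columnstore index and time a TPC-H Q1-style aggregate via pyodbc.
# Assumes the table has no existing clustered index and you have DDL rights.
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"   # placeholder connection details
    "DATABASE=tpch;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# One-time step: columnstore compresses by column and slashes the I/O a scan needs.
cur.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_lineitem ON dbo.lineitem;")

query = """
SELECT l_returnflag, l_linestatus,
       SUM(l_quantity) AS sum_qty, AVG(l_extendedprice) AS avg_price
FROM dbo.lineitem
WHERE l_shipdate <= '1998-09-01'
GROUP BY l_returnflag, l_linestatus;
"""

start = time.perf_counter()
rows = cur.execute(query).fetchall()
print(f"{len(rows)} groups in {time.perf_counter() - start:.2f} s")
```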


Here's the data comparing our new x8 and x4 Intel PCIe drives. To give you some perspective on how much faster PCIe is than SATA, I am also including a graph of the entire suite of queries on PCIe (P3700) versus SATA (S3710).


Here we compare the P3608 to the P3700 for maximum throughput.




Here we compare the P3608 versus the P3700 for query time on the most I/O-intensive queries.



Finally, to give you some perspective, here is what a SATA drive can do with this kind of SQL Server database. This graph covers all 22 queries, not just the I/O-intensive ones above, and shows the total time to run the full set within HammerDB.


Lower is better.



You can see all the data here.


Prediction capabilities can have tremendous value in the world of security. They allow for better allocation of resources: instead of trying to defend everything from all types of attacks, they enable smarter positioning of preventative, detective, and responsive investments to intersect where the attacks are likely to occur.


There is a natural progression in security maturity.  First, organizations invest in preventative measures to stop the impacts of attacks.  Quickly they realize not all attacks are being stopped, so they invest in detective mechanisms to identify when an attack successfully bypasses the preventative controls.  Armed with alerts of incursions, response capabilities must be established to quickly interdict to minimize the losses and guide the environment back to a normal state of operation.  All these resources are important but must potentially cover a vast electronic and human ecosystem.  It simply becomes too large to demand every square inch be equally protected, updated, monitored and made recoverable.  The amount of resources would be untenable.  The epic failure of the Maginot Line is a great historic example of ineffective overspending. 

Strategic Cybersecurity Capability Process

Prioritization is what is needed to properly align security resources to where they are the most advantageous.  Part of the process is to understand which assets are valuable, but also which are being targeted.  As it turns out, the best strategy is not about protecting everything from every possible attack.  Rather it is focusing on protecting those important resources which are most likely to be attacked.  This is where predictive modeling comes into play.  It is all part of a strategic cybersecurity capability.
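As a toy illustration of that prioritization idea, and not any particular Intel tool, here is a minimal sketch that ranks assets by a simple product of business value and estimated likelihood of attack so spending can follow the ranking; the asset names and scores are invented.

```python
# Illustrative sketch: rank assets by (business value x likelihood of being targeted).
# All names and scores below are invented for the example.
assets = [
    # (asset, business value 1-10, estimated likelihood of attack 0-1)
    ("customer database", 9, 0.7),
    ("build servers", 6, 0.4),
    ("marketing website", 4, 0.8),
    ("HR file share", 7, 0.3),
]

ranked = sorted(assets, key=lambda a: a[1] * a[2], reverse=True)

for name, value, likelihood in ranked:
    print(f"{name:20s} priority score = {value * likelihood:.1f}")
```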


“He who defends everything, defends nothing” - Frederick the Great


In short, being able to predict where the most likely attacks will occur provides an advantage in allocating security resources for maximum effect. The right predictive model can be a force multiplier in adversarial confrontations. Many organizations are designed around the venerable Prevent/Detect/Recover model (or something similar). The descriptions have changed a bit over the years, but the premise remains the same: a three-part, introspective defensive structure. The very best organizations, however, apply analytics and intelligence, including specific aspects of attackers’ methods and objectives, to build predictive capabilities. This completes the circular process with a continuous feedback loop that helps optimize all the other areas. Without it, Prevention attempts to block all possible attacks, and Detection and Response struggle to do the same across the entirety of their domains. That is not efficient, and therefore not sustainable over time. With good predictive capabilities, Prevention can focus on the most likely or riskiest attacks, and the same goes for Detection and Response. Overall, it aligns the security posture to best resist the threats it faces.


There are many different types of predictive models: actuarial learning models, baseline-anomaly analysis, and my favorite, threat intelligence. None is uniformly better than the others; each has strengths and weaknesses. The real world has thousands of years of experience with such models. The practice has been applied to warfare, politics, insurance, and a multitude of other areas. Strategists have great use for such capabilities in understanding the best path forward in a shifting environment.


Actuarial learning models are heavily used in the insurance industry, with prediction based upon historical averages of events. Baseline anomaly analysis is leveraged in technology, research, and finance fields to identify outliers in expected performance and time-to-failure. Threat agent intelligence, knowing your adversary, is strongly applied in warfare and adversarial situations where an intelligent attacker exists. The digital security industry is just coming into a state of awareness where it sees the potential and value. Historically, such models suffered from a lack of data quantity and timeliness. The digital world has both in abundance; so much, in fact, that the quantity is a problem to manage. But computer security has a different challenge: the rapid advance of technology leads to a staggering diversity in the avenues attackers can exploit. Environmental stability is a key attribute for the accuracy of all such models, and it becomes very difficult to maintain a comprehensive analysis in a chaotic environment where very little remains consistent. This is where the power of computing can help offset the complications and apply these concepts to the benefit of cybersecurity.
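To make the baseline-anomaly idea concrete, here is a minimal sketch on invented data that flags any day whose event count sits more than three standard deviations from a historical baseline; a real deployment would feed in live telemetry and a far richer model.

```python
# Illustrative baseline-anomaly sketch: flag counts far from the historical mean.
# The daily event counts are invented; real systems would ingest live telemetry.
from statistics import mean, stdev

baseline = [120, 131, 118, 125, 140, 122, 119, 128, 135, 121]  # "normal" days
mu, sigma = mean(baseline), stdev(baseline)

new_days = {"Mon": 124, "Tue": 133, "Wed": 310, "Thu": 127}

for day, count in new_days.items():
    z = (count - mu) / sigma               # how many standard deviations from normal
    status = "ANOMALY" if abs(z) > 3 else "ok"
    print(f"{day}: count={count:4d}  z={z:+.1f}  {status}")
```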


There is a reality which must first be addressed. Predictive systems are best suited for environments that have already established a solid infrastructure and baseline capabilities. The maturity state of most organizations has not yet evolved to a condition where an investment in predictive analytics is right for them. You can’t run before you walk. Many companies are still struggling with the basics of security and good hygiene (understanding their environment, closing the big attack vectors and vulnerabilities, effective training, regulatory compliance, data management, metrics, etc.). For them, it is better to establish the basics before venturing into enhancement techniques. But for those who are more advanced, capable, and stable, the next logical step may be to optimize the use of their security resources with predictive insights. Although a small number of companies are ready and some are already travelling down this path, I think over time Managed Security Service Providers (MSSPs) will lead the broader charge for widespread, cross-vertical market adoption. MSSPs are in a great position to both establish the basics and implement predictive models across the breadth of their clients.


When it comes to building and configuring predictive threat tools, which tap into vast amounts of data, many hold to the belief that data scientists should lead the programs to understand and locate obscure but relevant indicators of threats. I disagree. Data scientists are important for manipulating data and programming the search parameters, but they are not experts in understanding what is meaningful or what the systems should be looking for. As such, they tend to get mired in correlation-causation circular assumptions. What can emerge are trends which are statistically interesting yet have no actual relevance or are in some cases misleading. As an example, most law enforcement agencies do NOT use pure data-correlation methods for crime prediction, as they can lead to ‘profiling’ and then self-fulfilling prophecies. The models they use are carefully defined by crime experts, not the data scientists. Non-experts simply lack the knowledge of what to look for and why it might be important. It is really the experienced security or law-enforcement professional who knows what to consider and who should therefore lead the configuration aspects of the design. With the security expert’s insight and the data scientist’s ability to manipulate data, the right analytical search structures can be established. So it must be a partnership between those who know what to look for (the expert) and those who can manipulate the tools to find it (the data scientist).


Expert systems can be tremendously valuable, but also a huge sink of time and resources. Most models do their best when analyzing simple environments with a reasonable number of factors and a high degree of overall stability; the models for international politics, asymmetric warfare, serial-killer profiling, and the like are far from precise. But being able to predict computer security issues is incredibly valuable and appears attainable. Although much work and learning has yet to be done, the data and processing power are there to support the exercise. I think the cybersecurity domain might be a very good environment for such systems to eventually thrive, delivering better risk management at scale, for lower cost, and with a better overall experience for their beneficiaries.



Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Recently I was afforded the opportunity to collaborate with the Kroger Co. on a case study regarding their usage of VMware and their Virtual SAN product.  Having spent many a day and night enjoying 4x4 subs and Krunchers Jalapeño (no more wimpy) chips during my days at Virginia Tech courtesy of the local Kroger supermarket, I was both nostalgic and intrigued.  Couple that with the fact that I am responsible for qualifying the Intel® Solid State Drives (SSDs) for use in Virtual SAN, it was really a no-brainer to participate.


One of the many eye-openers I took away from this experience was just how large an operation the Kroger Co. runs. They are the largest grocery retailer in the United States, with over 400,000 employees spanning over 3,000 locations. The company has been around since 1883 and had 2014 sales in excess of $108 billion. I spent roughly ten years of my career here at Intel in IT, and this was a great opportunity to gain insight, commiserate, and compare notes with another large company that surely has challenges I can relate to.

As it turns out, unsurprisingly, the Kroger Co. is heavily invested in virtualization, with tens of thousands of virtual machines deployed and internal cloud customers numbering in the thousands. Their virtualized environment is powering critical lines of business, including manufacturing and distribution, pharmacies, and customer loyalty programs.

Managing the storage for this virtualized environment using a traditional storage architecture with centralized storage backing the compute clusters presented issues at this scale. To achieve desired performance targets, Kroger had to resort to all-flash fiber channel SAN implementations rather than hybrid (tiered) SAN implementations.  To be clear, these functioned, but were in direct opposition to the goal of reducing capital costs. This led Kroger to begin looking at Software-Defined Storage solutions as an alternative.  The tenets of their desired storage implementation revolved around: the ability to scale quickly, provide consistent QoS and performance on par with existing SAN-based solutions, and reduce cost.  No small order to be sure.

All-Flash Fiber Channel SAN performance, at about 1/5th the cost

Kroger evaluated multiple technologies, and eventually settled on Virtual SAN from VMware running in an all-flash configuration.  Here is where the other eye opening findings came to light.  Kroger found that their building block solution for Virtual SAN, which includes the Intel® SSD Data Center Family for NVMe, offered IOPS performance within 8% of all-flash fiber channel SAN at about 1/5th the expense, illustrated by the chart below.

IOPS, Cost, and Data Center Footprint Comparison


This same solution also offered latency characteristics within 3% of all-flash fiber channel SAN, while using approximately 1/10th the footprint in their data centers.

Latency, Cost, and Data Center Footprint Comparison


Key Takeaways

For the Kroger Co., the benefits of their Virtual SAN-based solution are clear:

  • Hyper-converged: Virtual SAN yields a roughly 10x reduction in footprint
  • Performance: minimal delta of 8% compared to all-flash fiber channel SAN
  • Cost: approximately 20% of the alternative all-flash fiber channel SAN solution
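Taking the relative figures in the list above at face value, a quick back-of-the-envelope sketch (not Kroger's actual financials) shows how the cost-per-IOPS comparison falls out:

```python
# Back-of-the-envelope sketch using only the relative figures quoted above.
# Normalize the all-flash fiber channel SAN to 1.0 for both IOPS and cost.
san_iops, san_cost = 1.00, 1.00
vsan_iops = 1.00 - 0.08    # within 8% of the SAN's IOPS
vsan_cost = 0.20           # roughly 20% of the SAN's cost

san_cost_per_iops = san_cost / san_iops
vsan_cost_per_iops = vsan_cost / vsan_iops

print(f"Relative cost per IOPS (SAN) : {san_cost_per_iops:.2f}")
print(f"Relative cost per IOPS (vSAN): {vsan_cost_per_iops:.2f}")
print(f"Cost-efficiency advantage    : {san_cost_per_iops / vsan_cost_per_iops:.1f}x")
```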


I wish we had solutions like this on the table during my days in IT; these are exciting times to witness.

Making a large investment in new servers and adding computing power to your data center is not a good thing if you are not able to maximize the return on that investment. Have you ever considered that software can be the bottleneck for your data center's performance? I came across an interesting case about how Tencent achieved significant results in storage system performance through software and infrastructure optimization.


Most of you are probably familiar with Tencent, one of China’s top Internet service providers. Its popular products like QQ instant messenger* and Weixin*, as well as its online games, have become household names among active online users in the country.



With the popularity of its social media products and a massive user base in the hundreds of millions, it is not surprising that Tencent needs to process and store lots and lots of data, like the images, video, mail, and documents created by its users. If you are a user of Tencent’s products, you are likely contributing your photos and downloading your friends’, too. To manage such needs, Tencent uses a self-developed file system, the Tencent File System* (TFS*).


Previously, Tencent utilized a traditional triple-redundancy backup solution, which was not efficient in its use of storage. The storage media was found to be a major cost factor of TFS. As a result, Tencent decided to implement an erasure-code solution using the Jerasure* open source library running on Intel® architecture-based (IA-based) servers.
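To see why that switch matters for storage efficiency, here is a quick sketch comparing the raw-capacity overhead of triple replication with a Reed-Solomon style erasure code. The (k, m) parameters are illustrative only, not Tencent's actual configuration, which is why the computed saving differs a little from the roughly 60 percent Tencent reported.

```python
# Illustrative sketch: raw bytes stored per logical byte under triple replication
# versus an erasure code with k data fragments and m parity fragments.
# The (10, 4) choice below is an example, not Tencent's actual configuration.
def replication_overhead(copies: int) -> float:
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    return (k + m) / k

triple = replication_overhead(3)    # 3.0x raw capacity per logical byte
ec = erasure_overhead(k=10, m=4)    # 1.4x, while still tolerating 4 lost fragments

print(f"Triple replication  : {triple:.1f}x raw capacity")
print(f"Erasure code (10, 4): {ec:.1f}x raw capacity")
print(f"Raw-capacity saving : {(1 - ec / triple) * 100:.0f}%")
```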


As Tencent engineers validated the new solution, they noticed the computing performance was lower than the I/O throughput of the storage and network subsystems. In other words, the storage servers were not able to compute the data as fast as the I/O subsystem could move it. Adding more compute power might appear to be the obvious, but costly, solution. Instead, Tencent used software optimization tools like Intel® VTune™ Amplifier XE and the Intel® Intelligent Storage Acceleration Library to identify inefficient code in its system and optimize it for the Intel® Xeon® processor-based server. The results were very effective, and the bottleneck of system performance moved to the I/O subsystems. This was then easily addressed by Tencent when it migrated to a 10 Gigabit network using the Intel® Ethernet 10 Gigabit Converged Network Adapter.


As a result of the cost-effective optimization effort, Tencent was able to get the most out of the storage system it deployed. Tencent found the optimized erasure-code solution reduced storage space by 60 percent and enhanced storage performance by about 20 times, while the I/O performance of TFS improved by 2.8 times. With cold data now being processed using the new TFS system, Tencent has saved significant server resources and raised the performance-price ratio of its storage.


The new solution contributed to more than performance and user experience: Tencent is also saving hundreds of kilowatts of energy because it no longer needs to purchase thousands of servers to meet its storage needs.


The next time you access Tencent’s product, you now know the efforts Tencent engineers have put into improving your experience. If you are interested to know in detail how Tencent optimized its software and removed bottlenecks in its storage system, and their results, you can read the complete case study.


Have you got any interesting software optimization stories to share?

The world is facing a growing problem: people’s everyday lives are becoming more digital, increasing our reliance on cybersecurity to protect our interests, yet there are not enough security professionals to meet the rising demand. This leaves gaps in the security of the companies and organizations we share information with. There is hope on the horizon. Academia is adjusting to train more graduates, and there is rising interest among students in studying the variety of cybersecurity domains. But more students are needed, as demand is far outpacing the expected rise in available talent.


All the right elements are in place. Pay for cybersecurity is on the rise, the need to fill an estimated 1.5 million jobs is already growing, and higher education institutions are working collaboratively to establish the training infrastructure necessary for the next generation of security professionals to be prepared for success. What is missing is the necessary number of students. There simply are not enough.


The good news is that millennials are interested, but they need more information before they commit. Survey results from the Raytheon-NCSA millennial report show that the factor most likely to increase prospective students’ interest is being given data and expert insight into what the jobs entail.


Interest in Cybersecurity Careers

Providing basic career information is absolutely possible, but it is not as simple as it may seem. Job roles morph very rapidly. Some data suggests that as often as every nine months security professionals see their role, expectations, and focus shift into new areas or vary radically. With such a rapid rate of change, cybersecurity is truly a dynamic domain where responsibilities are fluid. This is not likely to turn off prospective millennials, as they are a generation which embraces diversity. It may, in fact, contribute to the attractiveness of these careers. Combined with strong employability and excellent pay, the industry should have no problem filling desk seats in universities.


What is needed right now is for experienced professionals to step up and work with educational institutions to explain the roles and responsibilities to the pool of prospective students. Open forums, virtual meetings, presentations, in-class instruction, and even simple question-and-answer sessions can go a long way toward painting a vivid picture of our industry and the opportunities and challenges that await. The community should work together to attract applicants to the cyber sciences, especially women and underrepresented minorities who can bring in fresh ideas and perspectives. I urge higher education institutions to reach out to security community professionals and ask for help. Many are willing to share their perspectives and industry knowledge to help inform students and encourage those who might be interested in a career in cybersecurity. Only together can the private sector and academia fulfill the need for the next generation of security professionals.



Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Governments are having to catch up with the digital revolution to satisfy their role in providing for the common defense. The world is changing. Longstanding definitions of responsibilities, rules, and jurisdictions have not kept up with the implementation of technology. One of the traditional roles of government is to provide defense of its citizens and their property. Constitutions, laws, and courts define these roles and place boundaries limiting them. With the rapid onset of digital technology, people are communicating more and in new ways, creating massive amounts of information which is being collected and aggregated. Digital assets and data are themselves becoming valuable. Traditional policies and controls are not suited or sufficient to protect citizens’ information. Governments are reacting to address the gaps. This adaptation is pushing the boundaries of scope and in some cases redefining limitations and precedents derived from an analog era. Flexing to encompass the digital domain within the scope of protection is necessary to align with the expectations of the people.


Such change, however, is slow. One of the loudest criticisms is the speed at which governments can adapt to sufficiently protect their citizens. Realistically, it must be slow, as boundaries are tested and redrawn. In representative rule, there exists a balance between the rights of the citizen and the powers of the government. Moving too quickly can violate this balance to the detriment of liberty and result in unpleasant outcomes. Move too slowly and masses become victimized, building outcry and dissatisfaction with the state of security. Bureaucracy is the gatekeeper that keeps the pendulum from swinging too fast.


The only thing that saves us from the bureaucracy is its inefficiency – Eugene McCarthy       


The writing is on the wall. Citizens expect government to play a more active role in protecting their digital assets and privacy. Governments are responding. Change is coming across the industry and it will be fueled by litigation and eventually regulatory penalties. Every company, regardless of type, will need to pay much more focus to their cybersecurity.


There are regulatory standards and oversight roles being defined as part of the legal structure. Government agencies are claiming and asserting more power to establish and enforce cybersecurity standards. Recently, the U.S. Court of Appeals for the Third Circuit upheld the U.S. Federal Trade Commission’s action against companies who had data breaches and reaffirmed the FTC’s authority to hold companies accountable for failing to safeguard consumer data. The judicial branch interpreted the law in a way which supports the FTC’s assertion of its role in the digital age.


Litigation precedents, which act as guiding frameworks, are also being challenged and adapted to influence responsibility and accountability for customer data. The long-term ramifications of potential misuse of digital assets and personal data are being considered and weighed toward the benefit of consumers. In a recent case, defendants argued to dismiss a class action but were unsuccessful, as the court cited a failure in the “duty to maintain adequate security” which justified allowing the action to continue. The defendants argued that the plaintiffs suffered no actual injury, but the court rejected those arguments, stating the loss of sensitive personal data was “…sufficient to establish a credible threat of real and immediate harm, or certainly impending injury.”


In a separate case, the Seventh Circuit and the Ninth Circuit concluded that victims have a legal right to file a lawsuit over the long-term consequences of a data breach.  In addition to reimbursement for fraudulent charges, the court said even those in the class-action lawsuit who did not experience near-term damages have a likelihood of fraud in the future.  The court stated “customers should not have to wait until hackers commit identity theft or credit-card fraud in order to give the class standing."  Experts believe this shift in litigation precedent is likely to lead to an increase in data breach class actions in cases involving hacking.


This is the macro trend I see.  Governments are stepping up to fill the void where protective oversight does not exist or citizens are not empowered to hold accountable those who have been negligent in protecting their data.  The digital realm has grown so rapidly and encompasses citizens’ lives so deeply, governments are accepting they need to adapt legal structures to protect their populace, but struggling in how to make it a reality.  We will see more of this re-definition across governmental structures worldwide over the next several years as a legal path is forged and tempered.

Twitter: @Matt_Rosenquist

Intel Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist


Whether you’re planning a project for a mobile business app or developing a mobile business intelligence (BI) strategy, it’s critical to gauge your users’ overall mobile readiness. Even though sales of mobile devices continue to increase, some mobile users show chronic use of PC-era habits.


Yes, the mobile savvy millennial generation is taking the workforce by storm, but they don’t necessarily represent the largest portion of business users. Mobile-ready users, on the other hand, will display at least some of the following characteristics.


DISCLAIMER: All characters appearing in this blog post are fictitious. Any resemblance to real persons, living or dead, is purely coincidental.


10. They Own Smartphones and More

The limited real estate on a smartphone makes the tablet a better candidate for many business applications, especially in mobile BI. Mobile-ready users may therefore also own a tablet provided by their employer, along with accessories that improve usability, such as for data entry.


9. They Remember the Password to Unlock Their Screen or App


As funny as this may sound, it usually is a good test of whether the device is being used frequently. Many businesses use device management systems to prevent unauthorized access to enterprise apps and/or corporate data on mobile devices. Therefore, the password to unlock the screen won’t be the only password they will need to remember. Mobile-ready users employ methods to remember different passwords similar to those they use on their PCs.


8. They Use Their Devices as More than a Paperweight

Clearly the decision to purchase tablets or smartphones is a considerable investment for any business. Though mobile devices may be fun to watch movies on, using these devices to their maximum capacity results not only in higher returns on investment (ROIs), but also new opportunities for growth and profitability.


7. They Have Apps Other than Angry Birds Installed


Apps are a good indicator of the basic usage. Whether the device is provided by the business or it’s part of a bring-your-own-device (BYOD) arrangement, there’s nothing wrong with having more personal apps installed than business apps. However, it’s important that required business apps for the user’s role are installed, working correctly, and being used. Delivering these devices to users pre-configured or completing set up remotely will help considerably.


6. They Own Multiple Chargers (and Go Nowhere Without One)


Although mobile device batteries have improved significantly over the years, the more the device is used, the more quickly the battery will need a charge – especially for battery draining business apps (watching movies doesn’t count). A mobile-ready user who heavily depends on his/her device will typically have several chargers and have them placed in strategic locations such as the car, briefcase, or the office. If they stick to a single charger, as some do, they won’t travel anywhere without it.


5. They Meticulously Keep Their Apps Up-To-Date


This is yet another indicator of usage. Business people are very busy – especially road warriors – and may not have a chance to constantly keep an eye on updates, opting instead for the “Update All” option. However, if the device is not being used frequently, this is one of many neglected areas. As a result, an app may not work because it’s an older version. The idea is not that users should update every hour (or many times a day), but that they do so at least once a week.


4. They Know How to Back Up Their Device

Although some businesses make the option of backing up corporate apps and data easier, many mobile users may be left on their own to deal with this task. It gets even more complicated in scenarios where the employee uses the device both for personal and business reasons. But the savvy user knows how to back up their data adequately.


3. They Can Afford to Come to Meetings with Only Their Tablet


This is, without a doubt, a good sign of an advanced mobile-ready user. To begin with, the number of days they may forget their mobile device at home will need to stay in single digits. They also come to meetings ready to take notes on their device and/or connect it to the projector. If it’s an online meeting, sharing their mobile device screen won’t be a limitation.


2. They Get Annoyed When They’re Asked to Use Their PCs


These mobile lovers compare PCs to manual typewriters and, simply put, they don’t like anything unless it’s available on a mobile device. They can’t understand why an app doesn’t exist or can’t be developed for it.


1. They Get Upset When People Give Them Paper Copies of Reports


For the users who have really “drunk the mobile Kool-Aid,” anything on paper represents old times and they don’t care much for nostalgia. They argue with equal fervor that paper is not only bad for business but also bad for the environment.


What other signs do you see mobile-ready users exhibit? (Please, no names or dirty jokes.)


This is the final blog in the Mobile BI Strategy Series. Click here to see the others! 


Connect with me on Twitter at @KaanTurnali and LinkedIn.


This story originally appeared on the SAP Analytics Blog.

The term Anti-Virus, or AV, is a misnomer, and it is largely misleading to those who follow the cybersecurity industry but are unaware of the history of this misused term. Over the years it has become an easy target for marketers to twist into a paper tiger in hopes of supporting arguments to sell their wares. It seems to be customary, whenever a vendor comes out with a new host anti-malware product, to claim “AV is dead” and that the new product is superior to signature matching. Such claims are simply dated straw-man arguments, as those venerable anti-virus solutions have evolved in scope and methods, greatly expanded their capabilities, and now do so much more than just AV.


“The report of my death was an exaggeration” – Mark Twain


I have been hearing “AV is dead” for years! I blogged about it in 2012, and it was already an old story then, with origins dating back to at least 2006! The term “AV” was once relevant, but nowadays it is an artifact, a legacy term which describes early products and their way of protecting endpoints from malicious code. The term has survived largely due to the marketing value of end-user recognition. People are familiar with the term “AV” and it is easy to generalize vendors and products under this banner. But the technology and methods have dramatically changed, and the solutions no longer exist as they once were. The term references quite old technology, from when host-based anti-malware emerged to detect and clean personal computers of viruses. Back then, most of the threats were viruses, a specific type of malicious code. Those viruses were eventually joined by trojans, bots, macros, worms, rootkits, RATs, click-jackers, keyloggers, malvertising, and other unsavory bits of code which could infect a device. Today we collectively call them ‘malware’.


Back when AV was a relevant term, the tools typically detected viruses by matching them to known samples. These signatures were periodically updated, and the AV tool would be run on a regular cadence to check the system for any matches. Nearly two decades ago, I can remember the weekly virus scan consuming so much of the system's resources that the user could not do any work. Scans could take 30 minutes to several hours to complete, depending on the settings and the system. Most people would start the scan and go to lunch, or initiate it on their workstation before going home for the evening. Yes, we all had desktops in those days! It was neither efficient nor user friendly, but then again there were not too many actual viruses to contend with.


Yes, original AV software relied solely on static signatures and scheduled scans, but those days are long gone. Times have changed with the explosive growth, pervasiveness, and specialization of malware. Protection systems run continuously and can receive updates on the latest threats as often as needed throughout the day. Performance has improved to the point that users rarely notice it. The sheer quantity of threats is mesmerizing: the total amount of malware has steadily grown at a 150% annual rate, and over 400 million unique samples are now known to exist. As a result, security vendors have had to adapt to meet the growing challenge and complexity.


Modern client-based anti-malware has evolved to include a number of different processes, tools, and techniques to identify harmful and unwanted activities. It would be unwieldy to rely solely on static signatures of all 400 million pieces of known malware and attempt to scan every file against that library; computing would grind to a halt. Instead, current products leverage a host of different methods and resources to protect endpoints, finding a balance between efficacy, speed, cost, manageability, and user impact. They will continue to evolve as they always have (signature matching, polymorphism detection, heuristics, machine-learning attribute inspection, peer consensus, community reporting, cloud analysis, file reputation, sandboxing analysis, exploit detection, signature validation, whitelisting, etc.) to meet emerging challenges and customer expectations. The big players in the industry have the resources to stay at the forefront through organic innovation or acquisitions.
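As a toy illustration of how a few of those techniques stack, whitelisting, then signature matching, then a crude heuristic, here is a minimal sketch. The hash sets and the entropy threshold are invented, and no real product is anywhere near this simple.

```python
# Toy sketch of layered endpoint checks: whitelist, then signature match, then a
# crude heuristic. Hash sets and the threshold are invented; real products layer
# many more techniques (reputation, sandboxing, machine learning, cloud analysis).
import hashlib
import math
from collections import Counter
from pathlib import Path

KNOWN_GOOD: set = set()   # would hold SHA-256 hashes of trusted binaries (whitelist)
KNOWN_BAD: set = set()    # would hold SHA-256 hashes of known malware (signatures)

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; packed or encrypted payloads tend to be high."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def classify(path: Path) -> str:
    data = path.read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    if digest in KNOWN_GOOD:
        return "allow (whitelisted)"
    if digest in KNOWN_BAD:
        return "block (signature match)"
    if byte_entropy(data) > 7.5:            # invented heuristic threshold
        return "flag (heuristic: high entropy)"
    return "unknown (escalate to deeper analysis)"

print(classify(Path(__file__)))             # classify this script itself as a demo
```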


New players in the industry, the wonderful startups, are critically important as they spawn and infuse new ideas which will eventually either fizzle-out or prove their worth and find their way into bigger products as companies acquire the technology.  This is the history we have seen and the future we can predict, as even the newest capabilities will eventually be outmaneuvered by malware writers and someday also viewed with an eye of inadequacy. 


Nowadays, when people talk about AV, what they are really talking about is endpoint anti-malware, which is not going away. There was a push many years ago to abandon client-based anti-malware in favor of network-only controls. The argument was simple: malware and attackers had to go through the network, therefore a focus on filtering bad traffic would solve the problem. Droves of industry pundits, myself included, listed a number of reasons why this poorly conceived stratagem was doomed to fail. Which it did. At the time, those same “AV is dead” arguments were used in an attempt to convince the public and shift users. But the fundamentals of security don’t change because of marketing, and in the end, to be truly effective, a capability must exist on the endpoint to help protect it.


Even recently I see stories in the news talking about the death of AV and how some companies are abandoning AV altogether. In fact, as far as I can tell, they are not forsaking endpoint anti-malware but simply changing endpoint vendors. This may include a shift in the mix of techniques or technologies, but it is still focused on protecting the host from malicious code. Practically speaking, this is not a huge deal. Change is part of adaptation and optimization, but the truth probably fails to get the desired headlines; claiming a major transition or the death of a technology is far more attention grabbing. I see this tactic as a marketing ploy by new product companies and news outlets vying for readers’ eyeballs. It is a pity, as many new innovative companies really do have something to add to the market and can stand on their own merits without needing to misrepresent others. After all, the professional security community is working toward the same goal.


So I believe it is time to retire the “AV” terminology.  Instead, let’s be more specific and use host or network based anti-malware or just anti-malware for short.  This might limit the creativity of marketing folks who periodically dust off the “AV is Dead” stories for a few more views.  Shifting away from the “AV” terminology to more accurate depictions of modern anti-malware is really for the best, for everyone.


Sound off and let me know what you think.



Twitter: @Matt_Rosenquist

Intel Social Network: My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist


When we composed Intel IT’s original strategic plan for cloud computing more than six years ago, we adopted a strategy of “Growing the Cloud from the Inside Out.” This means that Intel IT would develop an internal private cloud for many applications and eventually move more and more work out to public clouds. A recent Open Data Center Alliance (ODCA) survey shows that deploying applications to a private cloud is a top priority for many organizations – much higher than deploying to public clouds. Since that is an enterprise priority, what should a private cloud look like? In a paper published by the ODCA, Intel’s Cathy Spence unveils ICApp, Intel IT’s private cloud Platform as a Service (PaaS), and details the architecture made available to internal Intel application developers.


ICApp is built on top of a PaaS framework that is in turn based on a software-defined infrastructure. It is built on two open source projects: Cloud Foundry (CloudFoundry.org) and Iron Foundry (IronFoundry.org). The former is built for Linux and the latter is an extension for Windows, which allows a single platform to support multiple development languages. Users can interact with ICApp through a web site, a command line interface (CLI), or a RESTful API. The architecture is shown in Figure 3 of the paper, included to the right.
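ICApp's actual API is internal to Intel and not documented in the ODCA paper discussed here, so the short sketch below is purely hypothetical: an invented base URL, auth scheme, and payload fields, meant only to illustrate what driving a PaaS through a RESTful API typically looks like.

```python
# Purely hypothetical sketch: the endpoint names, auth scheme, and fields below
# are invented to illustrate the idea of a PaaS REST API; they are not ICApp's.
import requests

BASE = "https://icapp.example/api"                        # invented base URL
HEADERS = {"Authorization": "Bearer REPLACE-WITH-TOKEN"}  # invented auth scheme

# Declare an application: a name, a runtime, and how many instances to run.
app = {"name": "supplier-rating", "runtime": "python", "instances": 2}
resp = requests.post(f"{BASE}/apps", json=app, headers=HEADERS, timeout=30)
resp.raise_for_status()
app_id = resp.json()["id"]

# Scale out later without touching the code, which is the point of a PaaS.
requests.put(f"{BASE}/apps/{app_id}", json={"instances": 4},
             headers=HEADERS, timeout=30).raise_for_status()
print("deployed and scaled app", app_id)
```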


ICApp is now deployed inside Intel, and a number of applications have been built on top of it, including a supplier rating system, a self-service application monitor, and a development repository. As one of the authors of the original strategy white paper, I find it very gratifying that the plan we originated is still largely being followed. Also, since I worked on PlanetLab, an influential predecessor to today’s cloud, I find that ICApp’s deployment platform web interface looks like one of PlanetLab’s application deployment tools. You can see that interface in the white paper, which I encourage people to look at for more detail.

The shopper is now firmly in control. This is the central premise determining the direction of retail as we move into the future. Empowered with information that’s easily accessible on smartphones, tablets and other devices, shoppers can quickly find what they want at a price they know to be fair. If you don’t have it, it’s simple enough for them to give their business to your competitor instead. Retailers, well aware of who’s driving, are working hard to identify not only what shoppers want today but what they will want tomorrow. At Intel, we believe that advances in computing technology have a key role to play in helping retailers identify and satisfy consumers’ as yet unmet needs.


So, first things first. What DO shoppers want?

Increasingly, the answer is customization. And it’s worth pausing here to distinguish between customization and personalization. People often use those terms interchangeably, but they’re not the same thing. Here’s how I see the difference: “Customized” is when a customer controls how the product or service is changed. For example, when you go to a coffee shop and order a double soy latte, extra hot with chocolate sprinkles, what you get is a customized product. You, the customer, have determined the end result. “Personalized” is when the decisions about a product, service, or experience are made based on knowledge that the retailer has about you. The retailer uses data to make decisions on your behalf. A prime example (forgive the pun) would be Amazon. When Amazon gives you recommendations, it is giving you a personalized set of suggestions based on your purchase history. It has made the choices for you. This is a key distinction. With help from Big Data analytics, personalization of the shopping experience has begun in retail—we get personalized offers, reminders, and so forth. But what shoppers say they also want is more customized products.
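To make the personalization half of that distinction concrete, here is a minimal sketch on invented purchase data that recommends items frequently bought alongside a shopper's past purchases; it captures the spirit, though obviously not the scale or sophistication, of Amazon-style recommendations.

```python
# Toy personalization sketch: recommend items that other shoppers frequently bought
# together with items in this shopper's history. All data below is invented.
from collections import Counter

orders = [
    {"coffee", "filters", "mug"},
    {"coffee", "grinder"},
    {"tea", "mug", "honey"},
    {"coffee", "filters"},
]
history = {"coffee"}   # this shopper's past purchases

co_counts = Counter()
for order in orders:
    if history & order:                    # the order shares an item with the history
        co_counts.update(order - history)  # count the other items in that order

for item, count in co_counts.most_common(3):
    print(f"recommend {item} (bought together {count} times)")
```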

A Cassandra Report survey1 of 15- to 35-year-olds conducted last year found that 79% of those surveyed would like to buy customized products but are not able to get them today. This is a huge unmet need—one that we in retail are coming closer to being able to meet. Once we get to more widely available automated manufacturing and 3D printing, we will be able to deliver a lot more customized products. The likelihood is that we’re going to see a combination of customized products delivered with personalized retail experiences. Retailers who are ready for this transformation will win.

What’s Ahead?

A preview of what’s ahead can be found in Tokyo subway stations. There, vending machines with cameras inside look at the person standing in front of the machine, figure out that person’s gender and approximate age and, based on that data, highlight the product that the vending machine thinks the person is most likely to want to buy. Of course, the shopper still has a full choice—if they don’t want that product, they can choose something else—but the machine’s smart technology makes the buying process that much easier. There’s less searching, less waiting—less friction.
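Purely as an illustration of that vending-machine logic, and not how the Tokyo machines are actually implemented, here is a minimal sketch that maps an estimated gender and age band to a product to highlight; the categories and mapping are invented.

```python
# Toy sketch of the vending-machine idea: map an estimated (gender, age band) to
# the product to highlight. The mapping and product names are invented; the real
# machines' logic is not public.
def age_band(age: int) -> str:
    if age < 25:
        return "young"
    if age < 50:
        return "middle"
    return "older"

HIGHLIGHT = {
    ("female", "young"): "sparkling tea",
    ("male", "young"): "energy drink",
    ("female", "middle"): "black coffee",
    ("male", "middle"): "canned coffee",
}

def suggest(gender: str, age: int) -> str:
    # Fall back to a safe default when the estimate is outside the table.
    return HIGHLIGHT.get((gender, age_band(age)), "bottled water")

print(suggest("male", 34))   # -> canned coffee
```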

Shoppers want minimal friction

They don’t want to wait in lines. They don’t want to have to enter their information to buy online. Shoppers will be loyal to your brand until the moment that they find an alternative where there’s one less step or one less click required to do what they want to do. So, removing friction in the system has become a key focus, and technology has a critical role to play.

For examples of retailers who are successfully reducing friction and redefining retail value with customized products and personalized experiences, check back in this space in the coming weeks.



1 Gen Z: Winter/Spring 2015 Cassandra Report. (30 Mar 2015). Retrieved from www.cassandra.com/report/.


Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

* Other names and brands may be claimed as the property of others.

© 2015 Intel Corporation
