IT Peer Network

One thing is for sure – there’s never a dull moment in the IT industry, because nothing stays the same for long. As a result, my job as Intel IT’s Enterprise Operating System program manager is constantly changing. Most recently I directed application readiness for our Windows* 8.1 migration in 2014, and currently I’m responsible for developing Intel IT’s enterprise OS deployment strategy and roadmap.


Keeping Up with Change

This enterprise strategy and roadmap (which is separate from our mobile OS strategy) needs to accommodate trends in the industry, such as the emerging OS-as-a-service delivery model. As described in a recent IT@Intel white paper, “Best Practices for Operating System Deployment,” Intel IT is adapting our best practices to deal with continuous OS updates, in contrast to the traditional once-every-few-years migration. Some of the changes we are making include establishing a permanent virtual machine testing environment and layering applications so that personalization and apps are not affected by changes to the underlying OS. This layering technique provides an optimal user experience across devices and operating systems and enables employees to be more productive, unaffected by the shifting sands of technology.


Determining Which Applications to Focus On

A permanent testing environment implies continually evaluating application readiness. But there are more than 3,000 internal applications in use at Intel – you might wonder how we keep tabs on all those applications. And if we didn’t have a well-developed application readiness plan, the process might indeed keep us up at night. But instead, as documented in the previously mentioned paper, we have a portfolio of best practices that help us keep application readiness on track. One important tool we use is an internal application repository, which enables us to catalog and track all internal applications.


Using Software Solutions

Another approach we take is to use software solutions to analyze applications and determine whether they use legacy code that could cause browser incompatibility issues. When we find such applications, we add them to a legacy application list. We also keep a list of technologies we consider “legacy” – technologies that have known limitations when implemented across the multiple IT-supported OS and browser environments, or that increase support and maintenance costs. While the use of legacy technologies will not necessarily prevent an application from working across multiple OS and browser environments, these technologies require additional code and, in many cases, additional solutions to enable users in those environments. By segmenting browser-based legacy applications from non-legacy applications, as well as from non-browser-based applications, we can proactively identify the applications most likely to have compatibility issues.
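To give a flavor of what this kind of analysis looks like, here is a minimal Python sketch; the marker patterns are illustrative assumptions, not our actual detection rules:

```python
import re

# Hypothetical markers for technologies with known cross-browser limitations.
LEGACY_MARKERS = {
    "activex": re.compile(r"new\s+ActiveXObject|classid=", re.IGNORECASE),
    "vbscript": re.compile(r'language\s*=\s*["\']vbscript', re.IGNORECASE),
    "ie_conditional": re.compile(r"<!--\[if\s+(lt\s+)?IE", re.IGNORECASE),
    "java_applet": re.compile(r"<applet\b", re.IGNORECASE),
}

def scan_page(html: str) -> list[str]:
    """Return the legacy technologies detected in a page's source."""
    return [name for name, pattern in LEGACY_MARKERS.items() if pattern.search(html)]

page = '<object classid="clsid:D27CDB6E..."></object><!--[if lt IE 9]>...<![endif]-->'
print(scan_page(page))  # ['activex', 'ie_conditional'] -> add the app to the legacy list
```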


Using Crowd Knowledge

Crowdsourcing, too, is a component of our application readiness effort. There are a lot of browser-based applications that are owned by various Intel business groups – we may not know about them or have access to them. Therefore, we communicate regularly with Intel employees through internal publications and the Intel intranet, asking if they know of any such applications that need to be added to the list of browser-based legacy applications. We also promote a collaborative online forum where they can report browser-based application URLs and report what they think is the cause of the compatibility issue. You might be surprised at how helpful this can be. Even though our last major migration was 18 months ago, we still get a steady trickle of reports – environment monitoring for legacy applications is a continuous activity, just like our testing environment is now permanent.


Bringing Legacy Applications Up to Speed

Tracking and updating legacy applications ties directly into Intel IT’s application modernization efforts, referred to as our “Five-Star Program.” We have established coding standards that new applications being developed must pass, and we communicate those standards to Intel’s application developers. Our ultimate goal is an enterprise-wide portfolio of applications that are easy to support, can withstand continuous updates, run in all six supported browsers, promote employee productivity through touch and other means, and are easy to use.


To achieve this, Intel IT’s Architecture team has defined “blueprints” – a decision matrix that helps application developers avoid legacy technologies and choose the appropriate technology for each web application layer. The blueprints include standards for front-end components (user interface and user experience), middle-tier components (web services, middleware, file transfer, and enterprise integration services), and back-end components such as data access, business intelligence, software-as-a-service, and so on. The blueprints also define standards for building applications that are not only optimized for the platform but also capable of running on multiple platforms through the use of open source technologies such as HTML5 and other standard technologies.


Although the blueprints are primarily designed to guide new application development, we encourage application owners to address legacy browser-based applications’ compatibility issues by upgrading the applications according to the blueprint design standards. Once an application is reported, we verify the problem and communicate directly with application owners to remediate the issue. When the application is updated to the standards, we remove it from the legacy list.


What’s Going On in Your World?

As Intel IT moves into the next era of operating system deployment, with continual updates and layered applications, I’d be interested to hear what other IT professionals are doing. Please join the conversation by sharing your thoughts and experiences in a comment below.




Is it just me, or does the printer run out of paper, or ink, every time you need a document in a rush - usually a couple of seconds before a meeting or important conference call? Good, I didn’t think it was just me.


In the UK we refer to this phenomenon as Sod’s Law or, more commonly, Murphy’s Law or, as I like to call it, the Infuriating Office Peripheral’s Law.


I wonder how many hours a year are lost in this way in offices up and down the country: how much time it takes to discover the document you sent to print isn’t coming through, let alone the time it takes to search for the paper to refill the printer. I’m guessing hundreds, if not thousands, of hours wasted.


But it needn’t be that way. Thanks to the Internet of Things (IoT), our offices are set to become a whole lot smarter.




Internet of Things in the office

The number of Internet-connected devices is growing at an exponential rate. Gartner predicts that there will be 4.9 billion connected things in use in 2015, up 30 percent from 2014, with this figure rising to 25 billion by 2020.


We’re already pretty used to people talking about the Smart Home: smart fridges, TVs, and thermostats, and many of us already have these devices in our homes. Internet-connected vehicles and wearables like smart watches are also finding their way into our lives. Less often discussed is the power of the IoT to transform the office environment, yet the potential is huge.

Facilities management service provider Coor is just one of the companies leading this charge.

Streamlining facilities management

When you arrive at the office you expect everything you need during your working day to be functioning properly, whether it’s the projector in the meeting room or the towel dispenser in the washroom. If it’s not, it can create inconvenience and delays, which means disruption and lost productivity.


This is where facilities management comes in. However, for companies like Coor to keep on top of every printer, copier, paper towel dispenser and vending machine, they would need to dedicate someone to checking every device, numerous times a day – an unsustainable solution.

The IoT offers a more convenient alternative.

Accelerating the smarter office


Together with Swedish IoT specialist Yanzi, Coor developed an easy-to-use IoT solution powered by Intel® technology. Read the case study here.

Customers can attach Yanzi sensors to whatever device they want to monitor, straight out of the box with no complicated set-up. Yanzi gateways, powered by the Intel® Atom™ processor, are equipped with TeliaSonera SIM cards that connect each device to the Yanzi cloud, which runs on the Intel® Xeon® processor E5 family. Facilities managers can see the data collected by the sensors through an iOS or Android app, enabling them to keep an eye on all office devices in real time on a single dashboard.
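As a rough illustration of how a dashboard might consume such telemetry, here is a hedged Python sketch; the field names, payload format, and thresholds are hypothetical, not Yanzi’s actual API:

```python
import json
from dataclasses import dataclass

@dataclass
class SensorReading:
    # Hypothetical telemetry fields; the real Yanzi payload format may differ.
    sensor_id: str
    device: str        # e.g., "printer-3rd-floor"
    metric: str        # e.g., "paper_level_pct"
    value: float
    timestamp: str

def needs_attention(reading: SensorReading) -> bool:
    """Flag devices for the facilities dashboard before users hit a failure."""
    thresholds = {"paper_level_pct": 10.0, "toner_level_pct": 15.0}
    limit = thresholds.get(reading.metric)
    return limit is not None and reading.value < limit

raw = ('{"sensor_id": "y-0042", "device": "printer-3rd-floor", '
       '"metric": "paper_level_pct", "value": 4.0, "timestamp": "2015-11-30T08:55:00Z"}')
reading = SensorReading(**json.loads(raw))
if needs_attention(reading):
    print(f"Alert facilities: {reading.device} {reading.metric}={reading.value}")
```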

Improving efficiency


Making even a small change can result in huge efficiency gains. For example, say the projector bulb in a meeting room is faulty, and about half an hour is spent getting it fixed before the presentation can continue. If there are ten people in that meeting, that’s actually five working hours wasted. By making facilities managers aware of the fault before the meeting starts, the Coor IoT solution helps avoid situations like this and ensures those five hours are put to better use.
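The arithmetic generalizes to any meeting-blocking fault; a trivial sketch:

```python
def wasted_person_hours(attendees: int, delay_hours: float) -> float:
    """A meeting delay costs every attendee's time, not just one person's."""
    return attendees * delay_hours

print(wasted_person_hours(10, 0.5))  # 5.0 person-hours lost to one faulty bulb
```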


Coor has also demonstrated how the solution can help improve overall business efficiency. Sensors placed on desks can track how often they are used and identify any areas that are underutilised. With the cost of office real estate rising across Europe, the advantages of this are obvious.


Already the Smart Office is about way more than placating printer rage. And, in the future, it will be about even more. Coor, Yanzi and Intel are now in discussions with hand dryer manufacturers, vending machine producers and office machinery companies to explore opportunities to further extend the use of the IoT solution.



Ceph adoption as open-source scale-out storage is trending strongly in the worldwide market, and we are seeing strong customer requirements for high-performance storage, from must-have SSDs for journaling and caching to all-flash solutions, across segments ranging from cloud service providers (CSPs), financial services (FSI), telecom, HPC, and government to OEMs and ODMs.


On the weekend of October 18, the 2015 Shanghai Ceph Day, the top-level Ceph community and industry conference in China, themed “Ceph: The future of storage,” attracted 33 companies and over 140 developers, IT experts, academic leaders, and business and technical managers. Intel delivered the opening talk and four key technical presentations, while Red Hat, SUSE, Mellanox, H3C, and other industry partners delivered ten other technical sessions.


On behalf of Intel, I presented “SSD/NVM Technology Boosting Ceph Performance” along with two other Intel engineers (see the attached PDF). We proposed the first all-SSD Ceph configuration: 1x NVMe SSD (Intel P3700 800GB) as the journal plus 4x low-cost, high-capacity SATA SSDs (Intel S3510 1.6TB) as OSD data drives. This configuration dramatically increased random write performance to ~100K IOPS on a four-node cluster, more than 32x the SSD-journal-plus-40-HDD configuration. Put another way, to reach 100K IOPS with HDDs alone you would need about 1,300 drives, and that is before counting power consumption, HDD failure rates, rack space, maintenance costs, and so on. You can imagine how much lower the total cost of ownership (TCO) of the all-SSD configuration would be.
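A back-of-the-envelope check on those numbers, assuming a typical 7,200-RPM HDD delivers on the order of 75 random-write IOPS (an assumption, not a benchmark):

```python
# Rough math behind the comparison (assumed figures, not measured results):
CLUSTER_TARGET_IOPS = 100_000   # 4K random write, 4-node all-SSD cluster
HDD_RANDOM_WRITE_IOPS = 75      # a typical 7200-RPM SATA HDD (assumption)

hdds_needed = CLUSTER_TARGET_IOPS / HDD_RANDOM_WRITE_IOPS
print(round(hdds_needed))  # ~1333 HDDs to match 16 SATA SSDs + 4 NVMe journals
```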


We also presented how Intel iCAS plus the Intel PCIe/NVMe SSD P3700 accelerates Ceph performance.


In addition, I proposed three Ceph configurations, summarized in the sketch below:

  • Standard (“good”): a PCIe/NVMe SSD as journal and cache plus HDDs as data drives, at an SSD:HDD ratio of 1:16-20. Example: 1x Intel P3700 800GB SSD serving as both journal and cache (with Intel iCAS) for 20 HDDs.
  • Advanced (“better”): an NVMe/PCIe SSD as journal plus large-capacity SATA SSDs. Example: the 1x P3700 + 4x S3510 configuration above.
  • Best performance: all NVMe/PCIe SSDs. Example: 6x P3700 2TB SSDs per node.
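For reference, here are the three tiers expressed as a simple data structure (drive models and ratios as presented in the talk; treating the all-NVMe tier’s journal as co-located is my assumption):

```python
ceph_configs = {
    "good":   {"journal+cache": "1x Intel P3700 800GB NVMe (with iCAS)",
               "data": "16-20x HDD"},
    "better": {"journal": "1x Intel P3700 800GB NVMe",
               "data": "4x Intel S3510 1.6TB SATA SSD"},
    "best":   {"data": "6x Intel P3700 2TB NVMe per node (all-NVMe)"},
}
for tier, layout in ceph_configs.items():
    print(f"{tier}: {layout}")
```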



The real costs of cyber attacks are difficult to understand.  Their impacts are terribly challenging to measure, which creates significant problems for organizations seeking to optimize their risk posture.  To properly prioritize security investments, it is crucial to understand the overall risk of loss. 


Although managing security is complex, the principles of determining value are relatively straightforward.  Every organization, small to large, wants to avoid losing more than the amount of money it spends on security.  If, for example, a thief is stealing $10 from you and protection from the theft costs $20, then you are left with an economic imbalance where security costs more than the risk of loss.  This is obviously not desirable.  If, however, the thief is stealing $100 and the protection still only costs $20, then there is a clear net economic benefit of $80.  The same principle scales to even the most complex organization, regardless of the type of loss, whether it be downtime, competitiveness, reputation, or loss of assets. 
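The principle is simple enough to express as a one-line calculation:

```python
def net_security_value(expected_loss: float, control_cost: float) -> float:
    """Positive means the control is economically justified; negative means overspend."""
    return expected_loss - control_cost

print(net_security_value(10, 20))   # -10: protection costs more than the risk
print(net_security_value(100, 20))  #  80: clear economic benefit
```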


Without knowing the overall impacts, value calculations are nearly impossible, which leaves the Return on Investment (ROI) a vague assumption at best.  Possessing a better picture of the costs and the risk of loss is key to understanding the value of investments that reduce such unpleasant ambiguity. 


The bad news: cybersecurity is complex, and the damages and opportunity costs are difficult to quantify.  So we do what we can, with what we have, and attempt to apply a common-sense filter as a sanity check.  But a lack of proficiency leads to inaccuracy, which can result in unfavorable security investments.  For example, in early 2015 the FBI estimated the impact of the CryptoWall ransomware by adding up all the complaints submitted to the Internet Crime Complaint Center (IC3).  The complaints and reported losses for CryptoWall totaled over $18 million.  At the time, it seemed reasonable, even sizeable, given it was a single piece of malware causing so much damage. 


The experts, myself included, were wrong.  We lacked comprehensive data and similar examples for comparison.  In this case, the methodology was not comprehensive and everyone knew it.  Not every person being extorted would report their woes to IC3.  We all expected an underestimate based upon this model but could not do the mental math necessary to generate a more accurate figure.  So we held to the data we had.  In reality, the estimate was more than an order of magnitude off.


Just a few months later, the Cyber Threat Alliance released a CryptoWall report in which they tracked the actual money flowing from the malware to Bitcoin wallets, the payment mechanism the criminals used for victims to pay the ransom.  One property of cryptocurrencies is that the transactions are public, even though the identities of the parties are obscured.  Thanks to the public nature of the blockchain transactions, their analysis showed that CryptoWall had earned $325 million. 


That is a huge difference!  Moving from a belief of $18 million in damages to superior data showing $325 million in paid ransoms is a great improvement.  It provides a much clearer portrait of the problem and gives people better data to decide the value of security measures.  But we must still recognize this is not the full story.  Although the Cyber Threat Alliance did a great job of showing the ill-gotten gains of the ransomware campaign, it still falls short of the even larger realization of loss and impact.  It does not capture the harm to those who chose not to pay, the time and frustration every infected person experienced, the costs to recover from the attacks and prevent similar future infections, or the loss of business, trust, and productivity due to the operational impairments.  There are far more pieces to the puzzle if we are to comprehend the loss in totality. 


It all comes back to value.  If a clearer understanding of the total loss and impact were consistently available, would people and organizations invest in more effective security?  Perhaps, but maybe not.  Regardless, it would give everyone better information to make informed choices.  Managing risk is about making good decisions and finding the optimal level of security.  Absent a realistic picture of the overall detriments, the community cannot hope to properly weigh its options in a logical way.  The shortfalls in measuring CryptoWall are just one droplet in a sea of examples where analysts struggle to find the hidden costs of cyber attacks.  Multiply these accounting misperceptions across the entire cyber ecosystem and we find ourselves standing on a huge iceberg, scurrying about, worried only about what is on the surface. 


In cybersecurity we must question what we believe.  It is almost a certainty we are severely underestimating the overall impact and costs of cyber attacks at a macro scale.  If this is true, then our response and investment are also insufficient at the same scale.  The industry must uncover the true hidden costs in order for the right level of security and strategic direction to be justified.  Only then will cybersecurity achieve effectiveness and sustainability.




Twitter: @Matt_Rosenquist

Intel IT Network: Collection of My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Early shoppers are not the only ones excited about Cyber Monday.  Online criminals are also eagerly awaiting the biggest online shopping day of the year! 

Cyber Monday sales are expected to rise 12% from last year, to reach a staggering $3 billion, according to Adobe Digital Index.  Online retailers are preparing with glee to service every customer and take their share of this year’s sales windfall.  The number of transactions will be staggering, with people scouring the web for deals, entering credit card data, using electronic currency, providing email and shipping addresses, and spending money in large amounts.  Websites, credit card companies, and shippers are all gearing up for this expected surge.

From a cyber criminal’s point of view, these activities are all huge opportunities to steal your credit, obtain your personal information, infect your system, and extort money from you.  Cyber threats can turn your best shopping day into a terrible nightmare. 

Let me introduce you to the threats:

Data Harvesters

Data Harvesters work to collect and aggregate personal data, so they can sell it to criminals and advertising networks.  They lure people in with ads, spam, phishing, and fake websites offering insanely good deals, so victims will happily input their personal information and credit card information.

Credit Card Fraudsters

Credit Card Fraudsters use stolen account information to order products and services.  With the vast number of transactions, many of high value, it is a great time for them to obtain pilfered card numbers and use them across the web.  It is more difficult for merchants and credit companies to identify fraudulent transactions on Cyber Monday, even for big ticket items. 

Phishing & Spam Masters

Phishing and Spam Masters assist Bot Herders, Malware Distributors, and Ransomware Extortionists.  Phishing is typically conducted via email, text message, or social media post.  It works by luring the victim to visit a maliciously crafted webpage, open a file, install an application, or click a link.  The result can infect the system with all manner of malware or get the unsuspecting victim to voluntarily provide sensitive data. 

Bot Herders

Bot Herders are continually looking to grow the number of systems they can control via a remote connection.  They install malware on victims’ devices, which allows them to harvest information and use the group of controlled systems to collectively attack other targets.  Bot Herders participate in Distributed Denial of Service (DDOS) attacks to bring down websites and in click-jacking scams to generate ad revenue.  They can also turn their victims into communication relays to hide other attacks or to generate and distribute spam/phishing campaigns. 

Ransomware Extortionists

Ransomware Extortionists conduct one of the more vicious types of online attacks.  These criminals use malware to hijack important files on a user’s system and encrypt them, making them unusable by the owner.  They may target financial records, important documents, family photos, online game accounts, and so on, then demand payments of hundreds of dollars to decrypt them.  This savvy and very popular extortion method is becoming a growing problem worldwide.  These criminals are always looking for new victims. 

DDOS Extortionists

Website owners need to be online and functioning properly to showcase their wares, service prospective customers, and process digital transactions.  No retail company wants its site to be down, and of all days, Cyber Monday is the most critical.  Denial-of-service extortionists capitalize on this fear and demand protection money from sites, with the threat of launching an attack that will interfere with customers’ ability to reach the site.  These criminals may employ Bot Herders to send a flood of traffic, or they may simply find internal weaknesses on the hosting site to corrupt the web portal.  Either way, being down for even a few precious minutes can cost a retailer a significant number of sales.

Don’t be a victim!  Here are some recommendations to avoid the drama and have a great Cyber Monday:


1. Shop only with trusted vendors.

Search for ratings and reviews from people you trust.  Make sure they have a secure website.


2. Use a credit card instead of a debit card.

Credit cards offer more security and protection.  Keep a close eye on your statements for fraudulent charges and immediately report any unauthorized transactions to your credit card company.


3. Don’t fall for phishing or spam in emails, texts, or social media apps.

Never click embedded elements or links, or open attachments, in messages sent to you.  These links can hide their real destination.  Instead, open a new browser tab and type in the vendor site yourself (no cut/paste cheating).  Be suspicious of emails and texts from your bank, a retail vendor, or the police/FBI; these are favorite entities for spammers to impersonate.  Criminals will even title their messages with warnings of “fraud alert”, “order confirmation”, and “transaction validation” to get victims to open them and click on links.  Be careful!

4. Avoid online scams.

If the deal seems too good to be true, it probably is.  Beware of ads and links in emails/texts; you won’t know where they will take you until it is too late.  Use common sense, as incredibly inexpensive products may be counterfeit or non-existent altogether.

5. Only download mobile shopping applications from a trusted source and vendor.

Stick to the approved Apple and Android stores.   Avoid any application which asks for your credit card number to help you shop.

6. Beware of Ransomware.

Make sure you have a good, up-to-date anti-malware solution installed.  Back up your files, passwords, important documents, and treasured pictures to an offline drive.  I prefer convenient USB drives, which are very inexpensive (and also make great stocking-stuffer holiday gifts).


7. If you have to establish an account on a shopping site, create a unique password (don’t reuse) just for that site.

Use a password manager if needed, and back up that password file offline.
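If you want to generate such a password yourself, Python’s standard library makes it easy; a minimal sketch:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random, unique password; store it in your password manager."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g., 'q7R!xT2m$Kd9wZ@a' -- never reuse across sites
```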

And there you have it.  Keep yourself safe and secure for this year’s Cyber Monday bonanza. 

Have more tips or looking for additional insight?  Join me on Twitter to continue the conversation @Matt_Rosenquist


Safe shopping,

Matthew Rosenquist

Let’s say you have been tasked with architecting a Cassandra platform.  One component of the platform is (naturally) storage. During the course of designing your platform, one of life’s axioms will likely rear its unwelcome head: The Project Triangle and the “Pick any two” values from the following vertices: Fast, Good, and Cheap.  While that situation is normally associated with project management, I’ve found that it is also true when it comes to provisioning IT infrastructure.  Simply associate fast with performance, good with capacity, and cheap with cost, and you have some nice parallels.


Take the cost vertex (can it be cheap?): at one end of the spectrum you have HDDs, and at the other end an all-flash solution.  Splitting the difference would be a solution using a mix of HDDs and SSDs.


It follows that these storage characteristics directly impact each other if one is optimized to a higher degree than the others (optimizing all three at once is not possible):

  • Prioritize on cost, and you’ll likely get the needed capacity, but will the solution perform?
  • Prioritize on performance, and run the risk of too little capacity at a higher cost
  • Prioritize on capacity, but can costs be kept low with performance being adequate?


The purpose of this particular blog isn’t really to debate the merits of SSDs vs HDDs – when it comes to Cassandra, Al Tobey has written an excellent tuning guide that covers this. The question I want to pose, assuming SSDs will be part of your Cassandra implementation, is this:  Does the choice of SSD matter?


I like to believe that it does.  SSDs vary widely in cost, performance, and capacity, and also in what I will put under the umbrella of robustness:  durability (operating in a wide variety of conditions), consistent performance (does the drive perform the same out of the box as when near EOL?  Is performance the same when the drive is empty as when full?), endurance (will the drive media last under the workload it will be subjected to?), and data integrity (has the drive been shown to back up claims about preventing soft errors?).

There are several variables in the mix here, but what if your business model hinged upon making a sound architectural decision on this very topic?  Well, Network Redux has introduced a new service called Seastar, which offers managed Cassandra hosting.  You can read about Seastar’s own process for understanding Cassandra’s storage requirements and their conclusions here.  The TL;DR version if you’re in a rush: they selected the Intel® SSD Data Center S3710 Series, stating “Our criterion was optimal price performance while weighing capacity, latency, endurance, and manufacturer reputation as important factors in our decision.”


Being an Intel employee, I’m happy with their analysis and selection.  And being a human being, I will admit that with all things in life, YMMV, and I’d be interested in hearing stories about readers’ own considerations and experiences, not just with Cassandra, but your own infrastructure challenges and successes (and failures) over the years.

About a year ago, I was excited to see SGI announce its UV300 system at SC14. At that stage the system was a prototype, not yet productized, but what they demonstrated at their booth amazed me. The system included 64 Intel Solid State Drive Data Center P3700 Series NVMe drives, which had launched in June 2014: a cutting-edge single-image system with drives to match.

Beyond the unique features of the UV300 platform, which combines 32 sockets of Intel Xeon E7 with the NUMAlink interconnect, it is an amazing platform for seeing how far the performance of multiple NVMe SSDs in one system can scale. That is exactly what the company demonstrated at SC14: raw I/O performance scaling by the number of NVMe SSDs in the system.  With a NUMA-optimized NVMe driver, SGI achieved a record 30 million IOPS on 4K random reads (64 SSDs) and proved linear performance scalability.
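A quick sanity check on what linear scaling means here (the per-drive comparison is my own back-of-the-envelope, not SGI’s published analysis):

```python
# Linear-scaling check (cluster figures from the SC14 demo; per-drive
# interpretation is an assumption):
total_iops = 30_000_000
ssd_count = 64
per_drive = total_iops / ssd_count
print(f"{per_drive:,.0f} IOPS per SSD")  # ~468,750, i.e. each SSD running
# near its specified 4K random-read rate rather than being throttled by the system
```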


[Chart: NVMe IOPS scaling on the SGI system. SOURCE: https://communities.intel.com/community/itpeernetwork/blog/2014/11/15/sgi-has-built-a-revolutionary-system-for-nvme-storage-scaling-at-sc14]


The next logical step was to understand the limitations at the file-system level: determining its overhead against the maximum bandwidth, and the IOPS bottleneck on 4K random workloads. Obviously, there are some challenges here. Having a single file system across a number of NVMe SSDs requires a way to combine them into a single volume. This can be part of the file system’s functionality, LVM, or a separate software RAID built for the purpose. MD RAID, the generic Linux software RAID implementation, is an option here. In fact, Intel has implemented extensions to it in the “imsm” container options for SATA and recently introduced the same for NVMe.

From here I want to refer to SGI’s blog (http://blog.sgi.com/delivering-performance-of-modern-storage-hardware-to-applications) on a recent MD RAID study and XFS file-system modifications. They identified a bottleneck in the way a standard RAID 0 implementation submits I/O, which can be an issue when scaling massive NVMe configurations. The result is a proprietary SGI extension to XFS, part of eXFS, with extended support for MD RAID on NVMe SSDs. This allows users to continue scaling the I/O parallelism introduced in the NVMe specification.


[Chart: MD RAID and XFS scaling results. SOURCE: http://blog.sgi.com/delivering-performance-of-modern-storage-hardware-to-applications]


Hard to believe? Come to SGI’s booth at SC15 and talk to them about it.


Cyber threats have evolved over time to communicate, cooperate, and in some cases directly collaborate among themselves, giving them a distinct advantage over their security counterparts.  Hackers possess a culture which is comparatively open, mutually supportive, and largely opportunistic.  This has in part contributed to their ability to outpace their security minded adversaries.  It is an advantage the cybersecurity industry largely lacks and must learn to overcome.


Security product and service companies don’t like to share.  They are in the business of protecting their customers against a highly diverse, complex, and intelligent adversary.  Any information or insight they gain is inherently viewed as a competitive advantage over other vendors.  Sharing such knowledge with other security firms seems counterintuitive from a business perspective.  It is that mindset which has greatly limited cooperation and the pace of innovation.


Businesses and government entities are also unwilling to share how they have been attacked, exploited, or had to respond to such incidents.  It is viewed as a poor public relations choice and also opens the door to other hackers who may attempt a similar attack on a susceptible victim.  Attackers do love to know when something works and then simply duplicate or iterate. 


In essence, the security industry and the targets being attacked prefer to remain silent about threat intelligence, best known practices, active exploits, successful attacks, ongoing investigations, and the crises they are managing.  The data they do share tends to be sanitized, redacted, and stale, which greatly limits its value and applicability.   


Attackers are not limited by these compartmentalized practices.  They share code and methods, and readily offer advice.  This exchange has become so rich and valuable that services are now emerging to meet broadening demand.  A variety of activities are available for a price.  Dark and grey markets offer more than just illicit drugs: they enable the purchase or lease of knowledge, code, independent contractors, and supporting resources for shady ventures.  Vulnerability brokers act as middlemen to buy and sell weaknesses in software and protocols.  Some offer tantalizing bounties of up to a million dollars to entice researchers to deliver valuable 0-day exploits. 


Malware-as-a-Service operators will author custom malware, sell popular packages, or offer hands-off rental services where they run the malware on your behalf and point it at a target of your choosing.  Along the same lines, hacking services are essentially proficient penetration teams who will breach a specific target’s digital defenses, or provide explicit capabilities to bypass them, for a price.  Distributed Denial of Service (DDOS) packages and platforms can be rented, with prices varying based upon the duration and saturation of the traffic directed at the target. 


Looking for legitimate identities and credentials?  Identity hackers and brokers do all the hard data-breach work, sell the end results in nice packages, and offer bulk discounts.  Spam and phishing engines can be rented to generate and distribute mind-numbing amounts of emails, texts, and links to malicious sites that manipulate or infect visitors.  For those seeking a reputation, social accolades are for sale: positive reviews for sites, sellers, vendors, and businesses will be written and posted for your benefit.  Some professional for-hire reviewers, with many followers themselves, will write glowing customized testaments on whatever you want, for as little as a few dollars.  Social media ‘likes’, fake accounts, and bulk ‘followers’ are also available for a price.


Code repositories exist for hackers to share and collaborate on software.  Often different independent parties will download code, make incremental improvements, then re-upload it for others to use, repeating the process.  This creates rapid iteration of improved software with novel features and fewer bugs, and fosters continuous exploration of new ideas.  There are even malware quality-assurance services that will test your toxic software to make sure it will not be detected by any of the major anti-malware packages and that it will get past the code review protocols of various digital stores.


Human resources are also available.  There are call-centers for hire which can service fraudulent transactions, CAPTCHA verification services for fake account creation verification, mule recruiting for money laundering, digital currency handlers, and package-forwarding people for accepting fraudulent online purchases and then forwarding them to another destination. 



The world of cyber threats has morphed into a specialty economy.  Communication is the grease which allows the wheels to turn.  No longer does an attacker need to be an expert in all areas of hacking.  In fact, attackers no longer need a high degree of technical skill.  They can simply hire out specialists and orchestrate the pieces into a customized solution to victimize targets and cause havoc with a worldwide reach. 



Threats are evolving ever faster and the security industry must adapt to keep pace.  Teamwork among security professionals, against a common enemy, is no longer an option, but a necessity.  We are collectively better when we actively work together as a community against those who seek to undermine digital security.  Competition in the security industry must not impede providers from recognizing who the real enemy is: the cyber threats.


This is why initiatives like the Cyber Threat Alliance are so important.  The Cyber Threat Alliance is an organization co-founded by Fortinet, Intel Security, Palo Alto Networks, and Symantec.  (Full disclosure: I proudly work for Intel.)  The alliance is open to all security vendors serious about sharing relevant and valuable threat information.  Such partnerships across domains and providers are crucial.


Top security organizations with vast sensor and threat intelligence capabilities can paint a better picture and stand together in the fight against sophisticated cyber threats.  These leaders can share and collectively leverage data necessary to gain the insights for better predictions, more effective preventions, improved detection accuracy, and faster response procedures.  Cooperation is both a tactical and strategic security advantage.


Cybersecurity must evolve and learn from its adversaries.  Communication and collaboration are key to rapid innovation and maximizing knowledge.  We are stronger together than separately.




Twitter: @Matt_Rosenquist

Intel IT Network: Collection of My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist


The Nexterday North conference brought together great keynote speakers who talked about Technology Innovation, the Magic of Emotion, and Trust in Business to attendees from nearly 50 countries and a wide variety of industries, with a notable presence from major telecom providers who collectively serve over a billion customers.  These important themes resonated with the audience and set the tone for a conference covering a wide breadth of fresh topics and disruptive thinking.  Security was an underlying premise as speakers discussed the future of digital services and how to delight customers.  Security was expressed as foundational and fundamental, necessary for success.  The messages of innovation, emotion, and trust are all heavily intertwined as we look into the growing challenges and opportunities of digital technology.


Greg Williams, from WIRED magazine, provided a dizzying collage of new technologies and the rapid growth of the digital world.  Patrick Dixon, a noted futurist and speaker with boundless energy, talked about how the greatest determining factor of success for digital services is if they can evoke the right emotions in users.  Risto Siilasmaa, the chairman of Nokia Corp, showcased how trust is the single-most crucial aspect in leading people and companies.

I had the pleasure of presenting a Future of Cybersecurity ImpactTalk to the audience. The rapid rise in the sophistication of threats has improved their methods, leading to attacks that are growing in effectiveness and scope.  Cooperation among attackers is outpacing what any individual security provider can match.  With the relentless expansion of, integration of, and reliance on new technology, the cumulative long-term impacts of cyber threats are not easily measured or understood.  I did offer hope: collaboration among security organizations is emerging, and experts are beginning to comprehend the systemic problems that affect worldwide businesses, governments, and individuals.  The problem is much more significant than previously thought.  In parallel to the risks of rising cyber threats, the expectations of consumers and enterprises are also increasing, placing ever greater demands on security.  The infusion of technology in IoT, transportation, telecommunications, retail, and banking brings incredible change, opportunities, and ultimately risks.

Cybersecurity is fast becoming a recognized prerequisite for success in technology.  To compensate for the acceleration of risks, organizations must recognize how they are viewed by attackers, have proper security leadership, and infuse trust in their products and services.  The challenges and opportunities are similarly exciting, but without strategic insights and leadership, attackers will outpace defenders. 


Download The Future of Cyber Security presentation: http://www.slideshare.net/MatthewRosenquist/the-future-of-cyber-security-matthew-rosenquist




Twitter: @Matt_Rosenquist

Intel IT Network: Collection of My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

Cybersecurity changes rapidly.  Those with valuable insights can better prepare for the shifting risks and opportunities.  An Intel team, led by the McAfee Labs group, has released a whitepaper covering both the 2016 cybersecurity predictions and a five-year look-ahead.  Collectively, it paints a picture of a growing technology landscape and the attackers who are maneuvering for an unfair advantage at the expense of others. 


I am honored to have contributed to this year’s exercise, collaborating with a stellar group of experienced security experts.  Many of the predictions are logical extensions of current attacks, newsworthy events, or tied closely to the growth of technology. 


One prediction in particular may surprise the industry.  The growth of integrity attacks could be the unexpected shift that fuels significant change in perspectives, expectations, and controls.


Unlike denial-of-service attacks, which undermine the availability of entire systems, or data breaches, which steal away confidential data, integrity-focused attacks maliciously modify data or transactions. 


We have seen a number of cases where attackers with financial motivations undermine the integrity of data for their benefit.  These types of attacks can be very selective and discreet, making them extremely difficult to detect, prevent, and correct.  Perhaps most importantly, such maneuvers have been shown to generate a shockingly large amount of loss and victim angst.  


Banking infrastructure malware Carbanak, which was discovered in 2015, infected banks and selectively modified systems to create a small number of fraudulent transactions which fleeced hundreds of millions of dollars in a single coordinated campaign. 


In separate attacks, business victims have seen their email systems tampered with.  Fraudulent messages were crafted from executives’ accounts to accounts-payable departments, instructing that money transfers be made immediately to a third party.  These of course were not actually from the executives, but rather from attackers who were able to gain administrative access to the communication tools and use them to orchestrate funds being sent to entities they control.  


Crypto-based ransomware is another huge example, where select files on an infected system are encrypted by the attackers and held for ransom.  Consumers, businesses, and even government agencies have been victimized.  We talk more specifically about the prolific rise of ransomware in the 2016 predictions report.  The Cyber Threat Alliance, which includes Intel Security, recently published a detailed analysis showing how one such ransomware family, CryptoWall, is responsible for taking a staggering $325 million from victims.


Attacks designed to undermine the integrity of systems and data tend to create emotional distress in victims, as they perceive being specifically targeted in a very personal way.  It is their family pictures being held for ransom, their email addresses being forged, and select transactions from their company being tampered with.  From a security perspective, the current generation of available tools is not designed or optimized to protect from such attacks.  The resulting impacts may be enough to fundamentally change opinions and expectations of security.


Overall, we at Intel Security believe integrity-based attacks will continue to rise in 2016 and beyond, as they are proving lucrative for attackers and troublesome for defenders.


To protect technology, users, data, and digital services, we all must understand the challenges we will face in the future.  Download the free whitepaper and gain the insights of experts.  http://www.mcafee.com/us/resources/reports/rp-threats-predictions-2016.pdf




Twitter: @Matt_Rosenquist

Intel IT Network: Collection of My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist


Have you ever pulled into a gas station and noticed that shiny pump around the side…  The one with the racing fuel sign?  Can I even put that racing fuel into my car?  Will it make my car go faster?


That’s the question at hand about Intel Optane Technology in the context of Oracle.  Just why did @bkrunner want to show off this new computing fuel at Oracle Open World a couple weeks ago?  Will Optane make the Oracle Challenger II biplane fly as fast as an F/A-18?  Quite possibly…


Database experts know how painful a cache miss is. This is why most Oracle systems are built using as much DRAM as possible.  More data in the fastest tier means less time for the CPU to sit waiting for I/O, and more time for the CPU to, well, compute.  Any transaction that needs to go out to storage will make the CPU wait. A storage data access moves the access latency from the low 100s of nanoseconds into the 10s of milliseconds for an HDD, anywhere between 10,000 and 100,000 times slower!  That’s like comparing an F/A-18 to a garden snail!


In many environments, including the Oracle HW designs, people are adding in SSDs to augment the memory and boost the speed of the spinning rust (or HDD, aka the snail).  Oracle makes it easy for you with the Smart Flash Cache feature which allows you to extend your buffer pool many times over with NVMe based SSD devices. NVMe is the perfect match for how Oracle extends the buffer pool because of its extremely low latency profile.  Moving as much data as possible into this flash tier with NAND SSDs improves scalability for larger data sets, and improves cache miss performance. NAND SSDs move the access into the low 100s of microseconds domain.  This is only ~1000 times slower than memory, so that super-fast NAND is like the fastest Tiger Beetle.  Zoom! Not quite.  Still slow.


What if you could move the I/O wait below 10 microseconds?  The Optane SSD demo at OOW showed a read latency of 9 microseconds!  Awesome!  This is still quite a bit slower than a data access from main system memory, but now we are talking a factor closer to 60 times slower.  Now we’re moving at the pace of a roadrunner.  That’s pretty fast compared to the snail (HDD).  I would still prefer to be sitting in the cockpit of an F/A-18 rather than running at the pace of the fastest bird, but it beats the pants off the snail or even the Tiger Beetle!
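Putting the whole menagerie in one place, here are the ratios using assumed round-number latencies (illustrative figures, not measurements):

```python
# Representative access latencies in microseconds (assumed round numbers):
latencies_us = {
    "DRAM": 0.15,              # ~150 ns
    "Optane SSD (demo)": 9.0,  # the read latency shown at OOW
    "NAND NVMe SSD": 150.0,    # "low 100s of microseconds"
    "HDD": 10_000.0,           # ~10 ms
}
dram = latencies_us["DRAM"]
for tier, lat in latencies_us.items():
    print(f"{tier}: {lat / dram:,.0f}x slower than DRAM")
# DRAM: 1x, Optane: 60x, NAND: 1,000x, HDD: 66,667x
```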


The inside scoop, though, is that the NVMe interface is by far the largest part of that 9-microsecond latency in the Optane SSD demo.  @bkrunner also showed off a mechanical sample of the 3D XPoint memory inside a DIMM form factor. Just imagine what could happen if the latency drops a bunch more.  We may not be sitting in the cockpit of an F/A-18, but we may still get to hang with Sean Tucker in that Team Oracle biplane… Or, in this case, a better analogy would be a bright red Oracle Airbus A380-800, where we may not win the race, but we do get to bring 799 of our closest friends along for the wild ride!


Do you think Sean Tucker could do some cool tricks while flying an Airbus?  I’ll bet he can…  Buckle up Sean, because Optane is coming soon!

It is a million-dollar payday for the vulnerability researchers who found an iPhone 0-day hack.  Zerodium recently announced the sizeable payout to a team of hackers who successfully developed a technique targeting the Apple operating system.  This exploitable code weakness is reported to be able to undermine the security of any iPhone or iPad that visits a maliciously crafted website. 


With such monetary rewards reflecting true market value, this trend of high-profile research will continue.  It is a double-edged sword that both bolsters long-term product security and undermines it in the short term.  Vulnerability discoveries bring existing weaknesses to light and tend to motivate developers to invest more resources in improving the security of their products.  It is much less expensive and embarrassing to squash such bugs before products are released than after they are in the hands of customers.  On the darker side, such discoveries may allow exclusive access to the highest bidder for a period of time, until the vendor can figure out the problem and apply a suitable fix. 


The cybersecurity industry is still in its infancy, and vulnerability research is a hotly debated topic.  We have a lot to learn and experience before we reach a healthy state.  One thing is for sure: the economic impacts are growing.  Just ask those who will be cashing a million-dollar check, or those tasked with finding a way to protect vulnerable systems.  The costs, impacts, and opportunities of security are going up.  Consider: if such hacks are worth a million dollars today, what price will such desirable vulnerabilities command a year from now? 



Twitter: @Matt_Rosenquist

Intel IT Network: Collection of My Previous Posts

LinkedIn: http://linkedin.com/in/matthewrosenquist

On the weekend of October 24-25, the database and applications world that for decades has depended on Oracle software, its features, and its stability descended on San Francisco for Oracle OpenWorld 2015. Intel, a long-time Innovation sponsor, was at the show as we always are. The festivities got underway Sunday night with the always-enlightening Larry Ellison keynote. Intel's CEO Brian Krzanich also had a part in the opening keynotes. Brian extolled many of the benefits of the Intel-Oracle partnership in his keynote address, and he insisted on providing only one demo at this event. That demo relates to memory architecture and breakthrough new non-volatile memory technology from Intel, called 3D XPoint technology. In August at Intel's Developer Forum, Rob Crooke and Brian Krzanich demoed Intel's first 3D XPoint technology-based SSD under the Intel Optane SSD branding.


What we showed on October 25 was a live, working prototype of our drive, in an Oracle Linux-based Intel Xeon server, in an unfair competition against our best Intel SSD on the market today, the NVMe-based Intel DC P3700. The Optane SSD also runs on open-standards NVMe, with no special hooks or ladders. The system is an off-the-shelf Oracle Linux build that you could put together yourself. This is the first Linux-based demo in which our new 3D XPoint memory has ever been shown. Exciting stuff indeed, because more reliable, faster, and above all consistent drives at low queue depths are exactly what your applications need, especially in elastic cloud, big-data-set scenarios.


Below is one of the screenshots comparing the two technologies, similar to what was shown at Oracle OpenWorld on October 25.


Here is the on-demand video link to the presentation; the 3D XPoint presentation and demo begins at minute 23.

On Demand | Oracle OpenWorld 2015



It’s not uncommon for a company to test their own products before releasing them to the market. Intel IT has been doing this for years with positive results. What’s changed in recent years and is still exponentially growing is the scale of new products, new versions, new infrastructures, and new customer behaviors and expectations. All these factors now need to be incorporated into a test plan.


The Challenge!
In the past, when we had only one or two similar products that required testing, testing was easier because product quality testing could easily be done in a lab. Over time, the IT environment became more complex, and Intel started creating more varieties of products in order to meet customer needs. 

This exponential growth in product offering and ecosystem can no longer fit in a lab, especially when people’s behaviors and use of our products are no longer confined to just their office – nowadays people use diverse compute devices, products, and services in every aspect of their life, whether at home, on the road, or in the office.


Facing the Challenges
This “new era” of computing – including the evolution of the Internet of Things – got us thinking about how we could improve our test processes, and ultimately Intel’s product quality. How could we assure excellent product quality in this brave new world? How could we properly simulate connectivity for every type of behavior and environment? How could we help deliver great products faster to the market while reducing costs?

In-House Testing – A New Approach
As described in our recent white paper, we’ve recently taken a closer look at our current in-house testing process for connectivity products and realized a few things. We have strong processes and people who like to help with testing, but we don’t have as many environments and testers as we’d like to have and we rely a lot on human feedback, which can be time-consuming to evaluate.

So what can be done differently? We set a goal to improve our in-house testing that involved three steps:

  • First – develop a new process that would engage more participants and improve their productivity during testing.
  • Second – eliminate or reduce a lot of the manual processes involved with the actual testing itself: provisioning software, performing configurations, etc.
  • Third – data, data, and more data – get as much meaningful data as possible and reduce some of the reliance on human feedback.


The Rings Concept
We use the commonly known rings approach: you establish x number of rings (usually three). These represent your testing deployment process. You start off with a small set of people (Small Ring) for a certain period of time – if that test succeeds you move on to deploy to a larger audience (Medium Ring). Eventually, through the same process, you deploy to a massive audience (Large Ring), which in some cases could even be your entire company.

So What’s Different, You Ask?
Well – the way we applied the processes was very different from how we operated before. For one thing, we automated the data-gathering process. We enabled and initiated collection and automatic analysis of blue screen crash dumps (mini and full dumps). We also collected specific connectivity trace events and automatically correlated them with certain behaviors, while also enabling full visibility into some operational statistics that can support release decisions.

We also enabled a very light and almost fully automated deployment mechanism that allows control over ring associations and doesn’t require re-packaging between cycles.

These two systems enabled us to take the rings concept to a completely different level. Today, we are able to easily associate systems to rings and throttle up and down ring deployments and content with a few clicks. This allows us to collect a lot of valuable data that we share with the business groups to help improve Intel® products.
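To make the ring mechanics concrete, here is a minimal Python sketch of ring association and throttling; the ring sizes, naming, and selection logic are hypothetical, not our production tooling:

```python
import random

# Fraction of the fleet in each ring (hypothetical percentages).
RINGS = {"small": 0.01, "medium": 0.10, "large": 1.00}

def assign_ring(systems: list[str], ring: str, seed: int = 42) -> list[str]:
    """Deterministically sample which systems receive the build under test."""
    rng = random.Random(seed)
    count = max(1, int(len(systems) * RINGS[ring]))
    return rng.sample(systems, count)

fleet = [f"host-{i:04d}" for i in range(5000)]
print(len(assign_ring(fleet, "small")))   # 50 systems in the small ring
print(len(assign_ring(fleet, "medium")))  # 500 -- throttle up when telemetry looks good
```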

What’s Next?

We’re still evolving the architecture and the automations, and we keep identifying opportunities to grow in scale and content.

In subsequent blog posts I will discuss the in-depth details of the architecture and automations we put in place, and later on will include some details about the results and benefits (and side effects) of this evolution of in-house testing for wireless products.

You can find more details in the Delivering Strategic Business Value with Automated In-House Testing paper, and we encourage you to engage in the conversation by leaving a comment below. We look forward to hearing from you!

Chances are, you’ve already checked your email several times today. And you’ve probably sent 10 replies, or more. Email is ubiquitous in the business world—so much so that it is becoming a productivity sponge, soaking up employees’ time and energy. At Intel, it’s no different, and our CIO wanted to reduce email usage at Intel. The problem was that we had hundreds of unprocessed, unanalyzed email log files—we had no way to show the CIO what Intel’s email usage really was, let alone understand how to reduce it.


To solve this problem we created an email analytics solution in just two months. The solution is based entirely on the Apache Hadoop* ecosystem: Cloudera Distribution of Apache Hadoop plus Hive*, Pig*, Impala*, and Sqoop*. The solution includes front-end reporting with visualizations and dashboards that provide access to eight metrics. Using the email analytics solution, we took the first step in driving down email usage at Intel: establish a baseline and bring Intel’s email usage to light for the first time.


Aside from the technical details of structuring data from the email servers and integrating that data with the tools, several other aspects of the project were interesting.


Maintaining employee privacy was a high priority. Intel is strongly committed to Privacy by Design—a framework that takes privacy into account and builds in protections at each phase of a product or service development process. Therefore our first step, even before doing a proof of concept, was to work with Intel’s Human Resources and Legal departments to develop a Privacy Plan. For example, the plan stipulated that we could analyze only email header information; we could not look at email content or the subject line, and we could not disclose employee ID numbers. Header information includes the sender’s and receiver’s names, the date, the file size, the email server name, and a few other bits of data.


Intel is a global corporation, with offices all over the world. Privacy laws differ by region, so we had to take that into account during the project. For example, European privacy regulations are quite different from those in the United States. Because Europe accounts for only about 20 percent of Intel’s email traffic, we decided to exclude European email from our analysis, because it would have taken more time than it was worth to investigate the regulations.


Data aggregation was another way we protected privacy. We could not disclose data for any group of fewer than 100 employees, so we excluded small offices from the study.
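A minimal Python sketch of the kinds of rules the Privacy Plan imposed; the salting scheme and field handling are illustrative assumptions, not our actual pipeline:

```python
import hashlib
from collections import Counter

MIN_GROUP_SIZE = 100  # aggregation threshold from the Privacy Plan

def pseudonymize(employee_id: str, salt: str = "site-salt") -> str:
    """Replace employee IDs with a one-way hash before analysis (header data only)."""
    return hashlib.sha256((salt + employee_id).encode()).hexdigest()[:12]

def reportable(office_counts: Counter) -> dict:
    """Only disclose aggregates for offices with at least MIN_GROUP_SIZE employees."""
    return {office: n for office, n in office_counts.items() if n >= MIN_GROUP_SIZE}

print(pseudonymize("EMP12345"))  # stable pseudonym, not a raw employee ID
counts = Counter({"Santa Clara": 5200, "Tiny Office": 37})
print(reportable(counts))  # {'Santa Clara': 5200} -- small offices excluded
```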


From a purely logistical standpoint, the project required creative approaches to staffing. During the proof of concept, we had no funding, because the project was not on the list of budgeted projects. I pulled in a few volunteers (including myself). We thought, “Hey, this is a good thing for Intel, and it’s a great chance to learn new technology.” Our email messaging architect was in India, so scheduling meetings across geos and time zones was sometimes challenging.

Despite these technical and logistical hurdles, the results from our project had business value. We obtained funding for going to production, and email analytics jobs are now running every day. I have presented the project and its results at a few internal roadshows and on several internal venues, such as our Big Data forum and Analytics forum. It has generated quite a bit of interest, with application developers asking questions about privacy and the insights we gained.


Speaking of insights, we intend to expand the project. We want to connect email and social media, and further explore email analytics, such as studying which job roles use more email than others. We’re hoping that by further exploring the impact of email on business, we can co-locate infrastructure for better performance and also work with business groups and teams to help them reduce email usage.


I’d be interested in hearing from other IT professionals about the impact of email on business, email analytics, and other related topics. Share your expertise and experiences by leaving a comment below. Or, if you have a question about our email study and future plans, I’d be happy to answer it. Join the conversation!
