

In cybersecurity, hacking people is much easier than overcoming advanced technical defenses.  Attackers are refining their social engineering techniques, the practice of manipulating people into compromising a system, to deploy their malicious capabilities and do harm.  Social engineering continues to be a significant security problem for the industry, even in the face of improving security technologies. 

As Sun Tzu stated over two thousand years ago, “It is best to win without fighting”.  Why expend effort overcoming all the technical barriers when people are the easiest avenue to success?  Attackers are smart.  They tend to follow the path of least resistance in pursuit of their goals.  With ~80% of workers unable to detect the most common and frequently used phishing scams, attackers are winning when they target human behaviors.

Even the most serious investments in security technology can be undermined by poor human behavior.  Making the castle walls tall and thick is meaningless if the guards at the gate let everyone in.  This is exactly why attackers have historically maneuvered to manipulate victims into making bad decisions.  In the digital world, it can be as simple as luring an unsuspecting target into clicking a malicious link in an email or visiting an infected website, which initiates a chain of events that undermines security and unravels an entire network.  It is that easy.

Raj Samani, Intel Security EMEA CTO, and Charles McFarland have released a report, Hacking the Human Operating System, which outlines these challenges to the cybersecurity community.  They describe attackers’ hunting and farming techniques, discuss the social engineering attack lifecycle, and provide a number of defenses against these types of attacks.

Social engineering attacks are not going away anytime soon.  They are evolving to become more effective and represent a significant risk to the security of every person and organization connected to the Internet.  Security fundamentals include a combination of both technical and behavioral controls.  People are part of the battlefield and can be the greatest weakness or asset.  We all must make hacking humans a more difficult proposition for cyber attackers.


Twitter: @Matt_Rosenquist

IT Peer Network: My Previous Posts


Google has been criticized recently for its vulnerability disclosure policy.  It has taken an aggressive, some would argue haphazard, position of publicly releasing details on a rigid schedule, sometimes before software vendors have distributed patches to customers.  Software companies, including the likes of Microsoft, are given 90 days’ notice when Google security researchers find and report a weakness they discover.  This can result in publicizing a window of opportunity for attackers to develop hacks, exposing unprotected organizations and individuals. 


Software vendors are not happy.  Ninety days is not much time to understand the problem, develop a fix, test it thoroughly, and deploy it to customers.  For enterprises, patches are a disruption that requires resources, testing, and potentially downtime.  Coming from this world, I can attest that having a regular patch cycle helps tremendously with resource planning and with minimizing overall disruption.  But these are all tactical issues.


As one never lacking an opinion or afraid to fight against the current, let me say it simply: I LIKE what Google is doing! 


Yes, this is my opinion, and I know it is not shared by many, perhaps most, of my colleagues.  But here is the thing: the software industry must change to better adapt to cyber threats.  The status quo of pushing software products into the market as fast as possible with inadequate security, counting on future updates to close the holes, is short-sighted and increasingly less effective.  Patching is an aberration of good product development practices.  It has become a necessary evil to supplement, in many cases, the weak security design and testing of software.  Some patching and updating of software makes sense, but many of the numerous security patches pushed out every year could have been identified and resolved before the product was released, had the developer chosen that path.


The industry, being very competitive, has evolved to a point where products are rushed to market.  I’m not arguing whether this is good or bad, as it has elements of both, but that is the world we live in.  Security tends to be an afterthought.  Any software developer who takes the time to thoughtfully design, code, and thoroughly test their software to find the types of vulnerabilities being discovered would probably not have many products in the market and would not survive as a business.  The industry itself has evolved in such a way as to deprioritize security and instead rely heavily on post-release patches.  Good security design practices are being penalized in this paradigm.


Something must change.  Attackers are teaching us that this is not a sustainable model.  The technology industry must adapt and make products better.  Google is acting like a cattle prod: painful and disruptive, but driving the herd in a better direction. 


Google is driving change and absorbing the cost itself.  Google employs top-notch talent as part of Project Zero to find sophisticated zero-day vulnerabilities in other vendors’ products.  The program has a high degree of transparency and aligns with a noble objective that benefits us all.  This is not Google’s first foray into safer, more secure software.  In 2010 it started a highly successful bug bounty program, which has paid out over $4 million to hundreds of different researchers for finding vulnerabilities in its software.  Google may, in fact, be one of the few companies with enough clout, resources, and talent to change the software industry for the better. 


Google’s policy has a number of short-term drawbacks, but as a strategist, I am most interested in the long-term effects.  Google is funding a top-notch research team to identify critical vulnerabilities in software we all depend upon.  An unwavering position is needed to drive better software development and put necessary tension in the system so that developers respond quickly with fixes when vulnerabilities are discovered.

The reality is, a 90 day window is a major headache for software developers.  Make no mistake, this pain is intentional and by design!  


It will force software developers to do two things:

  1. Conduct better security quality assurance testing on products before they ship.  Otherwise they will not be able to handle the ‘vulnerability fix’ workload after release.
  2. Improve responsiveness for better sustaining management of products by listening to vulnerability reports and developing quality fixes rapidly.  Otherwise, their customers will be at greater risk of being victimized publicly.  Some vendors take YEARS to develop fixes, even after exploits are in the wild.


One element not being discussed by the community is the fact that attackers are also looking for vulnerabilities.  When they find them independently of Google, they do not publicize their findings, give developers time to shore up weaknesses, or show any mercy.  One of the roles of white-hat security researchers is to undermine zero-day exploits.  Google’s short deadlines may result in vulnerabilities already discovered by attackers being closed before catastrophic damage occurs. 


So Google, if you are reading: keep up the good work.  It benefits us all by encouraging the software industry to embrace better practices, prioritize security in its products, and enhance trust in technology.  I, for one, see your commitment to a largely ungrateful community.  Stay on the path. 


