
Intel Health & Life Sciences

31 Posts authored by: David Houlding

In my second blog focusing on Bring Your Own Device (BYOD) in EMEA, I’ll be taking a look at the positives and negatives of introducing a BYOD culture into a healthcare organisation. All too often we hear of blanket bans on clinicians and administrators using their personal devices at work, but with the right security protocols in place and enhanced training there is a huge opportunity for BYOD to help solve many of the challenges facing healthcare.


Much of the negativity surrounding BYOD stems from the impact of data breaches in EMEA on both patients (privacy) and healthcare organisations (business/financial). While I’d agree that the headline numbers outlined in my first blog are alarming, they do need to be considered in the context of the size of the wider national healthcare systems.


A great example I’ve seen of an organisation seeking to operate a more efficient health service through the implementation of BYOD is the Madrid Community Health Department in Spain. Intel and security expert Stack Overflow assessed several mobile operating systems with a view to supporting BYOD for physicians in hospitals within their organisation. I highly recommend you read more about how Madrid Community Health Department is managing mobile with Microsoft Windows-based tablets.



The Upside of BYOD

There’s no doubt that BYOD is a fantastic enabler in modern healthcare systems. But why? We’ll look at some best practice tips in a later blog but suffice to say here that much of the list below should be underpinned by a robust but flexible BYOD policy, an enhanced level of staff training, and a holistic and multi-layered approach to security.


1) Reduces Cost of IT

Perhaps the most obvious benefit to healthcare organisations is a reduction in the cost of purchasing IT equipment. Not only that, it’s likely that employees will take greater care of their own devices than they would of a corporate device, thus reducing wastage and replacement costs.


2) Upgrade and Update

Product refresh rates are likely to be more rapid for personal devices, enabling employees to take advantage of the latest technologies such as enhanced encryption and improved processing power. And with personal devices we also expect individuals to update software/apps more regularly, ensuring that the latest security updates are installed.


3) Knowledge & Understanding

Training employees on new devices or software can be costly and a significant drain on time, not to mention the difficulty of scheduling that time with busy clinicians and healthcare administrators. I believe that allowing employees to use their personal everyday device, with which they are familiar, reduces the need for device-level training. There may still be a requirement for app-level training, but that very much depends on the intuitiveness of the apps/services being used.


4) More Mobile Workforce

The holy grail of a modern healthcare organisation – a truly mobile workforce. My points above all lead to clinicians and administrators being equipped with the latest mobile technology to be able to work anytime and anywhere to deliver a fantastic patient experience.



The Downside of BYOD

As I’ve mentioned previously, much of the comment around BYOD is negative and very much driven by headline news of medical records lost or stolen, the ensuing privacy ramifications and significant fines for healthcare organisations following a data breach.


It would be remiss of me to ignore the flip-side of the BYOD story, but I would hasten to add that much of the risk associated with the list below can be mitigated with a multi-layered approach: one that combines multiple technical safeguards and applies them holistically, alongside administrative safeguards such as policy, training, audit and compliance, and physical safeguards such as locks and secure use, transport and storage.

1) Encourages a Laissez-Faire Approach to Security

We’ve all heard the phrase ‘familiarity breeds contempt’ and there’s a good argument to apply this to BYOD in healthcare. It’s all too easy for employees to use some of the same workarounds used in their personal life when it comes to handling sensitive health data on their personal device. The most obvious example is sharing via the multitude of wireless options available today.

2) Unauthorised sharing of information

Data held at rest on a personal device is at high risk of loss or theft, and consequently also at high risk of unauthorised access or breach. Consumers are increasingly adopting cloud services to store personal information, including photos and documents.


When a clinician or healthcare administrator is in a pressured working situation with their focus primarily on the care of the patient there is a temptation to use a workaround – the most obvious being the use of a familiar and personal cloud-based file sharing service to transmit data. In most cases this is a breach of BYOD and wider data protection policies, and increases risk to the confidentiality of sensitive healthcare data.

3) Loss of Devices

The loss of a personal mobile device can be distressing for the owner, but it’s likely that they’ll simply upgrade or purchase a new model. Loss of personal data is quickly forgotten, but loss of healthcare data on a personal device can have far-reaching and costly consequences, both for patients whose privacy is compromised and for the employing healthcare organisation. An effective BYOD policy should explicitly deal with loss of devices used by healthcare employees and their responsibilities in terms of securing such devices, responsible use, and timely reporting in the event of loss or theft.

4) Integration / Compatibility

I speak regularly with healthcare organisations and I know that IT managers see BYOD as a mixed blessing. On the one hand the cost-savings can be tremendous, but on the other they are often left having to integrate multiple devices and operating systems into the corporate IT environment. What I often see is a fragmented BYOD policy which excludes certain devices and operating systems, leaving some employees disgruntled and feeling left out. A side-effect is that it can lead to sharing of devices, which can compromise audit and compliance controls and also brings us back to point 2 above.


These are just some of the positives and negatives around implementing BYOD in a healthcare setting. I firmly sit on the positive side of the fence when it comes to BYOD, and here at Intel Security we have solutions to help you overcome the challenges in your organisation, such as Multi-Factor Authentication (MFA) and Solid State Drives (SSDs) with built-in encryption, which complement the administrative and physical safeguards you use in your holistic approach to managing risk.


Don’t forget to check out the great example from the Madrid Community Health Department to see how our work is having a positive impact on healthcare in Spain. We’d love to hear your own views on BYOD so do leave us a comment below or if you have a question I’d be happy to answer it.



David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Today I begin a series of blogs which take an in-depth look at the issues surrounding what is commonly known as ‘Bring Your Own Device’, with a focus on the Healthcare and Life Sciences sector in EMEA. Bring Your Own Device, or BYOD, is the catch-all phrase attributed to the use of personal technology devices in a corporate environment for business use.

I’ll look at the scale of BYOD in specific territories, get under the skin of exactly why this trend has taken off over the past few years and dig into the detail of the opportunities and costs of allowing a BYOD culture to develop within your healthcare setting. I’ll conclude the series with some practical advice on how advances in technology can help you safeguard your systems and data.


I’d also be interested to get your views (leave a comment below or tweet us via @intelhealth), whether you’re a clinician working at the sharp end of care delivery and benefiting from using a personal device at work, or an IT administrator tackling the often thorny and complex issue of implementing a BYOD policy in your organisation.


Cost of Data Breaches in EMEA

Providing context to the scale of BYOD across EMEA inevitably means looking at the cost of data breaches but I’d stress that not all of the consequences of allowing BYOD are negative, as I’ll explain in a later blog. A great point of reference though is the Ponemon Institute, which has produced detailed reports on the cost of data security breaches for many years.




[Table: 2014 cost of data breach by country, showing total cost (m), average per-capita cost, and the percentage of breaches caused by negligence, attacks, and system/business process failures, for EMEA countries including UAE/Saudi Arabia.]

*Source: 2014 Cost of Data Breach Study (country-specific), December 2014, Ponemon Institute


The table above shows the significant costs associated with data breaches across a number of sectors, including pharmaceuticals, energy and public (which includes health). BYOD sits under the term ‘Negligence Cause’ in the table above and for some countries in EMEA it accounts for a significant portion of overall breaches. The organisational costs are significant and reflect not only the consequential increase in investment to safeguard against security weaknesses but also fines levied by national and pan-regional governments.


I’ll drill down into specific examples of Bring Your Own Device in healthcare in more detail in the future but as a brief indicator we know, for example, that in England the National Health Service (NHS) suffered 7,255 personal data breaches over a three-year period. These breaches of healthcare information security include data being lost, stolen or inappropriately shared with third parties, and in the case of inappropriate sharing this often involves a workaround using a personal device.


Opportunities presented by Bring Your Own Device

The negative comments around BYOD and associated costs to healthcare organisations as a result of data breaches often mask what are some fantastic upsides. I’m keen to emphasise in this series that with the right security solutions, both at rest and in transit, and across the entire network-client-device continuum, there are significant advantages to healthcare organisations in allowing individuals to use personal devices at work.


I hope this first blog has piqued your interest in what is a hot topic within the health and life sciences sector across the EMEA region. If you’ve successfully implemented a BYOD policy in your healthcare organisation or you want to highlight why and how you are using your personal device to deliver better patient care we’d be grateful to hear from you.


It would be fantastic to share some great examples from EMEA to help our community learn together. If you want to be the first to know when the next blog in this series will be published then sign-up to our Health and Life Sciences Community.


David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

I recently spoke to the Apps Alliance, a non-profit global membership organization that supports developers as creators, innovators, and entrepreneurs, on the latest trends in healthcare security.


It was a fascinating 40 minutes and a great opportunity to take a look at security issues not just from the healthcare professional or patient perspective, but also from a developer’s point of view. In this podcast, we take a look at what's important to all three groups when it comes to privacy, security and risk around healthcare data.

Listen to the podcast here


We discussed:


  • Best practices for developers looking to secure healthcare data
  • Security challenges that stem from the flow of data from mobile healthcare devices
  • The relationship between usability and security


I recently wrote a blog looking at the perceived trade-off between usability and security in healthcare IT and how you can mitigate risks in your own organisation. We have solutions to help you overcome these challenges, many of which are outlined in our Healthcare Friendly Security whitepaper.


We'd love to get your feedback on the issues discussed in the podcast so please leave a comment below - we're happy to answer questions you may have too.


Thanks for listening.

David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

The unprecedented rate of advances in technology has brought the topic of device usability in the workplace to the fore. Usability used to be a 'nice to have' but, with experiences and expectations heightened by the fantastic usability of personal mobile devices, it has become a 'must-have'. The corporate healthcare IT environment is faced with a challenge.


Taming the BYOD culture

Either organisations invest in great corporate IT user experiences for employees or they'll be exposed to the dangers of the 'Bring Your Own Device' (BYOD) to work movement. And healthcare workers are amongst the first to look for workarounds such as BYOD when the usability of their IT has a negative impact on their workflow.


If organisations allow a BYOD culture to become established they face heightened security and privacy risks which can often result in data breaches. Since 2010, the Information Commissioner's Office (ICO) in the UK has fined organisations more than £6.7m for data protection breaches. Of this, the healthcare sector suffered fines of some £1.3m alone, which accounts for nearly 30% of the British public sector penalties.


These costs highlight the importance of avoiding data breaches, particularly as the UK's public sector health organisations move rapidly towards cloud-based electronic health records under the Personalised Health and Care 2020 framework. If data security is undermined by workarounds, it may well negate the predicted cost benefits of moving to electronic health records for both patient and provider.


The 2020 framework acknowledges that, "In part, some of the barriers to reaping those benefits are comparatively mundane: a lack of universal Wi-Fi access, a failure to provide computers or tablets to ward or community-based staff, and outmoded security procedures that, by frustrating health and care professionals, encourage inappropriate ‘workarounds.’”


Mitigating risk of loss or theft

Loss or theft of devices is another common cause of data breaches in healthcare. An audit of 19 UK health-related organisations by the ICO concluded that "a number of organisations visited did not have effective asset management in place for IT hardware and software; this raises the risk of the business not knowing what devices are in circulation and therefore not becoming aware if one is lost or stolen."


There are a number of options to mitigate risk in these circumstances. First, usability and security can be vastly enhanced using Multi-Factor Authentication (MFA), which, when combined with Single Sign On (SSO), reduces the overall number of device logins required. Second, replacing unencrypted conventional hard drives with encrypted SSDs (Solid State Drives) not only lowers the risk in the event of theft or loss but also improves data access performance. And that's a win-win result for all healthcare professionals.
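To make the MFA idea concrete, one common second factor is a time-based one-time password generated from a secret shared between the device and the authentication server. Below is a minimal Python sketch of the standard TOTP algorithm (RFC 6238, built on RFC 4226 HOTP); the 6-digit, 30-second defaults are the conventional ones, and the secret shown is the RFC test value rather than anything product-specific:

```python
import hashlib
import hmac
import struct


def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    return hotp(secret, unix_time // step)


# RFC test secret; at t=59s the 30-second time counter is 1.
print(totp(b"12345678901234567890", 59))  # -> "287082"
```

The point for a healthcare deployment is that this second factor, paired with SSO, raises assurance without adding a login prompt to every single application.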


Effective security is like a chain: it requires securing all points and removing or repairing the weak links. Intel Security Group's solutions have security covered from mobile devices, through networks, to back-end servers. We're already helping healthcare organisations across the globe to embrace the rapidly changing face of technology in the healthcare sector while managing risk and improving that all-important usability.


We've produced a whitepaper on Healthcare Friendly Security which will help you strike the balance between fantastic usability and industry-leading security in your organisation. Grab your free download today.



David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

As recently as 10 years ago, healthcare IT was mostly corporate provisioned, less diverse, and refreshed more slowly. Up to this point, usability was treated as a “nice to have” and a significantly lower priority than the features or functionality of solutions.


In this more homogeneous and slower changing environment there was, for the most part, one way to get the job done. Fast forward to today, where most healthcare IT environments are much more heterogeneous, with a myriad of devices (both corporate and personal BYOD), operating systems, apps, versions, and social media, and with wearables and the Internet of Things now growing rapidly. Furthermore, refresh rates are much faster, especially for personal/BYOD devices and apps. In today’s environment, usability is very much a “must have”: research shows that when it is absent, healthcare workers find workarounds, like using personal devices, and these workarounds drive non-compliance and additional security and privacy risk, and can often be the source of breaches.


Traditionally we have approached usability and security as a tug of war, or trade-off, where having more security meant less usability and vice versa.


For more information on Healthcare Friendly Security see this new whitepaper.


Unfortunately, breaches have reached alarming levels in both business impact and likelihood. The total average cost of a data breach in 2014 was US $3.5 million. This average is global and spans several industries, including healthcare. Looking more specifically at healthcare, the global average cost of a data breach per patient is US $359, the highest across all industries. With costs of this magnitude, avoiding breaches is of paramount importance for healthcare organizations. But how can we add security without compromising usability, and inadvertently driving workarounds that actually cause non-compliance and risk?


What is desperately needed is security that preserves or even improves usability, where risks are significantly mitigated without driving healthcare workers to use workarounds. On the surface this may seem impossible, yet several security safeguards available today do just that. Many breaches occur due to loss or theft of mobile devices. A very good safeguard to help mitigate this risk is the self-encrypting SSD (Solid State Drive). Replacing a conventional hard drive, unencrypted and at risk of causing a breach if lost or stolen, with an SSD plus encryption can often deliver better data access performance than the original unencrypted drive. Another example of a safeguard that improves usability and security is MFA (Multi-Factor Authentication) combined with SSO (Single Sign On), which improves both the usability and security of each login, as well as reducing the overall number of logins.


Intel Security Group is focused on creating innovative security safeguards, combining security software vertically integrated with security hardware, that improve usability and harden the overall solution, making it more resilient to increasingly sophisticated attacks such as cybercrime. With cloud, mobile, and health information exchange, security becomes like a chain: being effective requires securing all points and avoiding weak links. Intel Security Group solutions span right from mobile devices, through networks, to backend servers. This paves the way for healthcare to adopt, embrace, and realize the benefits of new technologies while managing risk and improving usability.


What questions about healthcare IT security do you have?


For more information on Healthcare Friendly Security see this new whitepaper.


David Houlding, MSc, CISSP, CIPP is a Healthcare Privacy and Security lead at Intel Corporation and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

With the increasing variety, volume, and velocity of sensitive patient data, healthcare organizations are increasingly challenged to comply with regulations and data protection laws, and to avoid breaches. The total average cost of a data breach reached US $5.9M in the United States (2014 Ponemon Cost of a Data Breach), representing an average of $316 per patient record. The prospect of random audits to enforce compliance with regulations, such as the OCR HIPAA privacy, security and breach notification audits, continues to loom large.


Understanding what sensitive data you have is an absolute prerequisite to securing it, and has never been more important. Only with an accurate understanding of what sensitive data is at rest, in use, and in transit can a healthcare organization successfully secure itself. If a healthcare data inventory misses some sensitive data, it can go unsecured and lead to a security incident such as a breach, or a finding of non-compliance with regulations or data protection laws in the event of an audit.


Ten years ago, healthcare environments were more homogeneous with fewer types of clients, mostly corporate provisioned and more uniform, and with a slower refresh rate. Software used by healthcare workers was also mostly corporate provisioned, leading to a more consistent, less diverse, and more slowly changing software IT environment. In this more homogeneous and slower changing IT environment an annual manual data inventory may have been sufficient where a security and privacy team worked with documentation, IT management tools, and healthcare workers to conduct inventories.


Today, most healthcare organizations are much more heterogeneous, with a mix of clients or endpoints: smartphones, tablets, laptops, wearables, and Internet of Things devices. Furthermore, healthcare networks today are a mix of personal, BYOD, and corporate provisioned devices, and have a faster refresh rate, especially for personal and BYOD devices such as smartphones, which are often upgraded within two years or less. Exacerbating this diversity is a myriad of operating systems, versions, apps, and online services, including social media, that are collecting, using, and storing new types of sensitive data, and moving it over the network in new ways. The bottom line is that healthcare environments face a major challenge in tracking all the sensitive data they have at rest, in use, and in transit. Given these challenges, a conventional annual data inventory is generally not sufficient.


Today, it is critical for healthcare organizations to understand what sensitive data they have on their networks in near real-time. Once a healthcare organization identifies new unprotected sensitive data on its network it can proactively initiate remediation, which can include:


  1. Deleting sensitive data held in an unsecured location,
  2. Encrypting sensitive data in place,
  3. Moving sensitive data from an unsecured location somewhere more secure, and
  4. Educating healthcare workers on preferred alternatives to avoid future non-compliance and privacy and security risks.


Data Loss Prevention (DLP) is a mature security safeguard that includes the ability to discover sensitive data at rest and in transit. With the rapidly increasing diversity of healthcare IT environments and the variety of sensitive data they collect, use, store, and move, the value proposition of DLP, in particular its ability to discover sensitive healthcare information, has never been greater. This provides a key safeguard to supplement other data inventory initiatives within a modern healthcare organization. Intel Security Group provides network and endpoint DLP solutions that include this discovery capability. Furthermore, these can be vertically integrated with Intel hardware-assisted security, including AES-NI for hardware-accelerated encryption (Data Loss Prevention Best Practices for Healthcare). An effective near real-time inventory of sensitive data, combined with a proactive approach to securing any unsecured sensitive data, enables healthcare organizations to embrace and realize the benefits of new technologies while keeping privacy, security, and non-compliance risks manageable.
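At its core, the discovery capability amounts to scanning content for patterns that indicate sensitive data. The Python sketch below is a deliberately simplified illustration, not a description of any particular DLP product; the pattern names and regular expressions are hypothetical examples, and real DLP engines add dictionaries, document fingerprinting, and contextual analysis:

```python
import re

# Hypothetical patterns for illustration only; production DLP detection
# is far richer than simple regular expressions.
PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "nhs_number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "medical_record": re.compile(r"\bMRN[:# ]\s*\d{6,10}\b", re.IGNORECASE),
}


def discover(text):
    """Return (label, matched_text) pairs for potentially sensitive data."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((label, match.group()))
    return findings


sample = "Discharge note for MRN: 84213311. Emergency contact SSN 123-45-6789."
for label, value in discover(sample):
    print(label, "->", value)
```

A real deployment would run this kind of discovery continuously across file shares, endpoints, and network traffic, with each finding feeding the remediation steps listed above.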


Does your healthcare organization have DLP, and if so do you have the processes in place to use it effectively and realize its full value for near real-time discovery and protection of sensitive data on your network?

Several research studies, most recently Curbing Healthcare Workarounds for Coworker Collaboration, show that healthcare workers are increasingly using workarounds, or procedures out of compliance with policy. Lack of usability, cumbersome security, and slow or overly restrictive IT departments all drive the use of smartphones/tablets and apps, texting, file transfer, USB sticks, personal e-mail, social media, etc., with sensitive information. The majority of healthcare workers are not malicious, but rather well intentioned, and motivated to improve the quality and reduce the cost of healthcare.


However, these healthcare workers are increasingly empowered with information power tools, including mobile devices, apps, social media, wearables, and the Internet of Things, mostly without the privacy and security savvy to enable their safer use. While these workarounds offer more usable and exciting alternatives, they also bring major privacy and security risks and non-compliance with policy. BYOD becoming mainstream, apps and devices gaining power, and the rapid growth of wearables and the Internet of Things further exacerbate these risks.


Some “black and white” workarounds, clearly out of compliance with policy, are today effectively detected and mitigated by MDM and DLP. For example, if policy forbids the use of USB sticks with certain endpoint devices and this usage is detected by safeguards on such a device, it is straightforward to prevent. However, compliance with policy is often much more difficult to establish in practice for many other user actions. For example, if a user is using a file transfer app to transfer a photo from their personal device at a healthcare facility, is this out of compliance? It typically depends on the content of that photo. Is it a photo of a patient, representing PHI and non-compliance, or a photo from last weekend’s hike, being shared with a co-worker, and representing acceptable use?


Unfortunately, classification of PHI is difficult for many media types, including images, audio, video, and often even free-form text. Further, many of these media types can be acquired directly on an endpoint such as a personal smartphone and exchanged over networks such as 3G or 4G that bypass a healthcare organization’s secure servers, challenging existing safeguards, including thin-client solutions that secure healthcare data on secure managed backend servers.


This “gray region” of the risk spectrum is growing rapidly with the increasing empowerment of healthcare workers. If you have a personal device and participate in your organization’s BYOD program, you most likely had to install some MDM software on your device to enable this. However, to illustrate the magnitude of residual risk even after installation of MDM, consider the range of risky actions the healthcare worker can still perform on the device, including taking photos, recording video/audio, texting, file transfer, personal e-mail, social media, and so forth. Many organizations use annual security awareness training to help mitigate this risk. However, this training is often ineffective, and the technology and risk landscape is fast-evolving.


Join us at the IAPP Privacy Academy and CSA Congress 2014 in San Jose, Calif., September 17-19, for a lively interactive session on Healthcare Workarounds: Managing Risk in the Age of User Empowerment, sharing 2014 HIMSS research covering the extent of healthcare workarounds, their motivations, types, and mitigations. We will then highlight practical strategies to enable new technology use while effectively mitigating risk and improving compliance with policy and regulations.


What you’ll take away:

  • Increasing empowerment of healthcare workers with technology
  • Growing source of risk from workarounds
  • Practical strategies to embrace and benefit from new technology while effectively mitigating risk


David Houlding, MSc, CISSP, CIPP is a senior privacy researcher with Intel Labs and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

In a previous blog, I highlighted the increasing empowerment of end users, whether healthcare workers or patients, with information power tools, delivering powerful new capabilities, but also new privacy and security risks. Health and fitness apps are rapidly proliferating across mainstream app ecosystems, driven by compelling benefits.


A sampling of the Google Play Store’s Health and Fitness category shows a wide spectrum of apps, ranging from calorie counters to workout managers, pregnancy assistants, health managers, mental health apps, dieting apps, and more. They are collecting an increasing variety and volume of sensitive personal information, often in near real-time, presenting a growing privacy challenge. Many users are concerned about privacy and security on some level. However, many are not sure where the specific privacy risks are, or what alternatives they have to continue to engage while also reducing risks. In this blog I explore specific privacy risks in the app space.


There are many ways one can reduce such privacy risks. Below I explore six of the most practical actions one can take to continue to engage in using these kinds of apps, while also reducing risks.


1. Finding Safer Alternative Apps
In mature mainstream app ecosystems, there are many choices of apps in various categories, including health and fitness. When choosing, paying careful attention to the number of users, app rating, reviews, privacy policy, and app permissions can help inform you of the associated privacy risk. Many app developers offer free baseline versions of apps, which often use advertising to monetize. These free advertising-supported apps often contain embedded ad network libraries that are privacy intrusive. Sometimes it is better to opt for the 99c version to avoid such intrusions. After you have made your choice and installed an app, periodically running an app privacy & security scanner helps detect apps that request dangerous permissions, have poor reputations, or are privacy intrusive.


2. Configuring Apps for Privacy
Many apps contain settings such as opt-outs that control your privacy cost in using them. They may also have passwords or encryption options. Once you have installed an app be sure to look through settings, be aware of what configuration controls you have, and set them in a way that enables you to use the benefits of the app you want, while avoiding any additional unnecessary privacy cost.


3. Disabling Apps You Still Want, But Use Only Occasionally
Beyond the types of personal info apps collect, privacy cost can also depend on the length of time your personal info is collected. In many cases we use an app only periodically, or rarely. However, many apps have background services that are continuously collecting your personal info. Mainstream operating systems, including Android, enable you to disable apps you use only occasionally and have them automatically re-activated the next time you use them. For example, to do this in Android, use the app manager to find the app you want to disable, then select it and “Force Stop” it. The app stays stopped even through a reboot of the device; only when you next run the app will it be automatically and seamlessly restarted.


4. Uninstalling Apps You No Longer Use
Most of us have had the experience of trying an app, not using it afterwards, and forgetting to uninstall it. Such apps can have background services that collect your personal info indefinitely, even though you are not actively using them. This not only has a privacy cost, but also consumes storage, CPU, battery, radio bandwidth and so forth. Periodically, perhaps monthly, reviewing the apps you have installed and uninstalling the ones you no longer use is highly recommended to reduce your privacy cost and free up device resources to ensure best performance.


5. Be Careful about the Type of Personal Info You Share, and Who You Share With
The privacy cost of using an app depends on what types of personal info you are sharing with it, and who gets access to that info as a result. Some of this sharing is automatic, enabled through the permissions you grant to the app at install time as discussed above. However, much of it is the personal info you explicitly opt to share while using the app. Be especially careful about sharing anything that can identify you, be used to contact you, be used to locate you, or that could be embarrassing or abused in the wrong hands. Any kind of financial or insurance information is also risky, as it is highly sought after by hackers and easily monetized. Be aware of the audience you are sharing with through the app. For example, when posting to social media there is often a choice of the particular forum or group you are sharing with on a post-by-post basis. Be sure the ones you are sharing with are the ones that have a need to know, and that you want to share with.


6. Pay Your Privacy Savvy Forward
Privacy is not an evenly distributed expertise. As you find value in using these practical approaches to reducing your privacy cost in using apps, pay it forward to your friends, family and colleagues to help them engage and have fun while avoiding privacy intrusions.


What other practical approaches are you using to protect your privacy and security while using apps?


David Houlding, MSc, CISSP, CIPP is a senior privacy researcher with Intel Labs and a frequent blog contributor.

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Healthcare workers are being empowered with more and more information power tools, including apps, smartphones, tablets, social media, wearables and the Internet of Things. These tools are fast evolving, with new software capabilities appearing daily and hardware refresh rates much shorter than those seen in the past with PCs.


With each of these information power tools come new privacy and security risks. While existing security safeguards help, they only partly mitigate risk; significant residual risk remains even after applying MDM or other safeguards, as evidenced by the range of risky actions users can still perform with these safeguards in place. Effective training is urgently needed to address this and help healthcare workers understand the privacy and security risks of their actions, and the alternatives available to them to engage while reducing risk.


2014 HIMSS Analytics global research on healthcare privacy and security reveals most organizations provide security awareness training annually (70 percent) or during new employee orientation (55 percent), with less than 40 percent providing such training on demand as needed.




Given new risks appearing daily in the fast-evolving information power tools landscape, this highlights a gap. Even in the best case, where a healthcare organization fully comprehends all privacy and security risks and training is completely up to date at the time of delivery, six months down the line the information power tools healthcare workers are using are very different, exposing completely new privacy and security risks.


To enable healthcare to embrace and realize the benefits of new technology, including information power tools, we need a near-realtime way of engaging healthcare workers with on-the-job privacy and security training that tracks the evolving risk landscape, fits into their workflow and context, highlights the privacy and security risks of their specific actions in teachable moments, and helps them understand safer alternatives that let them engage while reducing risk.


For example, a healthcare worker may take a picture of a patient with their smartphone and initiate sharing with coworkers using a file transfer app. While convenient, this can expose new risks to confidentiality, integrity, availability, and trans-border data flow. What if the healthcare organization's privacy and security team could reach that healthcare worker in the teachable moment of this use case, highlight the risks, and present viable alternatives for achieving their goals while reducing those risks?


This would effectively empower healthcare workers with the privacy and security savvy they need to counterbalance the new risks they are exposed to with the new information power tools at hand, and enable healthcare organizations to embrace and realize the benefits of new technologies for better patient care, while keeping the risks of privacy and security incidents such as breaches manageable.


What approach are you using for effective privacy and security training of your healthcare workers?



Healthcare workers are being empowered with more and more information power tools, from apps, to smartphones, tablets and other devices, to social media, and now wearables and the Internet of Things.


These tools deliver great benefits that can improve the quality of patient care and reduce the cost of healthcare. However, they also bring new risks of accidental breaches and other security and privacy incidents. 2014 HIMSS Analytics global research on healthcare security shows healthcare workers use workarounds (out of compliance with policy) either daily (32 percent) or sometimes (25 percent). For example, a workaround could be texting patient information to a healthcare co-worker, using a file sharing app with patient information, and so forth.


Any one of these could result in a breach, whose staggering cost averaged around US $5.85 million in the 2014 Cost of a Data Breach Study. The prevalence of workarounds and the impact of security incidents such as breaches highlight the alarming probability and impact of this type of privacy and security risk arising from healthcare worker actions. These risks and impacts are also set to increase as healthcare workers are further empowered. In most cases, healthcare workers are well-intentioned and try to do the right thing. However, they inadvertently add risk using new information power tools, often under time or cost-reduction pressure. Exacerbating this, the security and privacy awareness training provided by healthcare organizations is often limited in effectiveness, and even in the best case, where training is up to date and well delivered, the technology and risk landscape is fast evolving and significantly different even a few months later.


To date, much of the emphasis on responsibility for privacy and security has been placed on tool and service providers, enforced by regulators. This is analogous to safety regulators regulating the safety features of power tools used in workshops and for construction: even with the tool’s safety features, users know that they could inflict significant harm on themselves or others if they use the tools incorrectly.


In other words, they are responsible for using the power tools and their incorporated safety features in a way that delivers the benefits while keeping the risk of accidents minimal. What we are seeing in the information technology landscape is healthcare workers being empowered with information power tools such as apps, mobile devices, social media, wearables and the Internet of Things, with little or no concurrent empowerment with the privacy and security savvy to use these tools for benefit while minimizing the risk of security incidents such as breaches.


To enable healthcare to rapidly realize the benefits of new technologies while keeping privacy and security risks manageable, we must find better ways of effectively empowering healthcare workers with the privacy and security savvy they need to use these information power tools safely.


What privacy and security risks are you seeing with healthcare workers using information power tools? I'm also curious about your thoughts, strategies, and best practices for managing these risks.



A recent Reuters / Ipsos poll finds that 51 percent of Americans are spooked by Internet companies and their growing ambitions, “with a majority worried that Internet companies are encroaching too much upon their lives.”


Clearly, users are increasingly concerned about privacy, including healthcare workers and patients. This will likely be exacerbated going forward by new powerful apps and devices, health and wellness devices, medical devices, social media, wearables, and the Internet of Things.


However, many users don't know what to do about it, and feel the situation is hopeless, as seen in the widespread sentiment that “privacy is dead.” I assert that if privacy were dead, and all of our personal information were available to anyone who wanted it, including malicious individuals who would seek to abuse it, then we would be much worse off than we currently are.


In a prior blog, I discussed Viewing Health IT Privacy as a Purchase Decision. From this standpoint, privacy is far from dead, and we are not “privacy broke,” but rather many users are currently spending too much of their privacy in online engagements, given the benefits they are receiving, and there is a growing need to help users find more “privacy cost effective” solutions to meet their needs.


In many forms of online engagement, whether apps or social media, there is a natural privacy “give” required for the benefit or “get.” For example, if one wants to get live traffic information one must share one’s location history since this is a critical input used to calculate the live traffic information. Similarly, if a patient wants health and wellness advice he/she must be willing to share personal health and wellness information with apps and organizations that have the big data/analytics to collect, analyze and derive knowledge from raw health and wellness data and present it to the patient to help them make better choices.


However, in many online engagements there is an unnecessary privacy “give,” not required for the benefit the user is receiving. One example is a flashlight app with embedded ad network libraries tracking the user's location and other personal information: clearly not required for the flashlight function of providing light, especially considering that there are many functionally equivalent flashlight apps that do not demand this unnecessary privacy “give.”


In many cases, there are simple actions users can take to achieve their goals while reducing or minimizing unnecessary privacy “give.” These include changing the configuration settings of their apps and devices (for example, unchecking opt-outs), replacing privacy-intrusive apps with safer alternatives as in the flashlight example above, changing the type of information they share in specific online engagements, or, in the worst case, uninstalling privacy-intrusive apps.


In many cases, users are unaware of the privacy “give” in online engagements. To really help users with privacy, beyond raising the alarm, and actually improve their privacy posture, we need to first increase users' awareness of their unnecessary privacy “give” and then guide them to viable alternative actions that achieve their goals while significantly reducing or eliminating it. This is no easy task given the rapidly changing technology landscape, especially the exploding ecosystem of health and wellness apps and online services, but it is critical if we are to maintain users' trust and confidence in the privacy safety of new technology, and their willingness to adopt and use it.


Are you concerned about your privacy online? What solutions do you see to address these concerns?



Estimates of the number of IoT (Internet of Things) devices project 1.9 billion today, growing to 9 billion by 2018. Already, healthcare has made major strides into the Internet of Things, with a myriad of healthcare-specific Internet-connected devices, or “things,” for managing health and wellness through vital signs.


For example, multiple healthcare “things” can measure everything from patient activity to vital signs such as blood pressure, glucose levels and so forth. Connecting these “things” to the Internet enables the data to be analyzed, for example for diagnostics. This has the potential to radically transform healthcare, enabling better, faster diagnostics and personalized medicine.


Patient conditions can be detected proactively and early, personalized treatment provided, and patients allowed to return home sooner for recovery with post-treatment monitoring. Healthcare IoT is also poised to empower patients with their data, which historically has been locked inside healthcare organizations and difficult for patients to acquire. Clearly, the potential benefits of healthcare IoT are great.


Security of IoT

Concurrently, privacy and security incidents such as breaches have reached alarming levels globally, both in frequency and impact. Privacy concerns have also been exacerbated in recent years by concerns over surveillance and privacy intrusions from online service providers such as social media platforms. Realizing the benefits of healthcare IoT sans the privacy and security incidents, and doing so in a way that preserves and builds patient trust, requires a proactive approach where privacy and security is built in by healthcare IoT device and service providers.


Many healthcare IoT service providers today stream sensitive patient data from the devices, securely over the Internet, to repositories they maintain for secure storage. These repositories enable analytics on the patient data, empowering patients with new insights, knowledge, and enabling them to make better informed decisions on their health and wellness. However, in a sense, these repositories are silos, storing the data from the specific healthcare IoT device and enabling analytics just on that data. Unfortunately for the patient, this data is not automatically available for co-mingling with other data from other healthcare IoT devices provided by other organizations. The result is a limitation in the analytics that can be done and benefits that can be delivered back to the patient.


Privacy through separation

Interestingly, one of the unintended benefits of siloing patient data across separate secure clouds maintained by different healthcare IoT service providers is that privacy and security risk is reduced through separation. If one of the providers is breached, there is a limit to the variety and quantity of sensitive healthcare data at risk. While the industry is currently in the phase of building out the healthcare IoT, proliferating devices and silos, proactive attention to privacy and security demands that we think ahead to the inevitable next phase.


This is the phase where data from different healthcare IoT providers is brought together, enabling greatly increased benefits while also greatly increasing privacy and security risks. An intrusion into such an integrated repository of patient data could breach a much greater variety and quantity of sensitive data. Preventing cybercrime in healthcare requires a holistic approach in which a combination of administrative, physical, and technical safeguards mitigates privacy and security risks. With cybercriminals using increasingly sophisticated intrusion techniques, technical controls need to protect the whole stack, from the various layers of software right down to the hardware. With patients and healthcare workers increasingly empowered with more sensitive data and tools such as smart devices, apps, social media, wearables and IoT, we need to recognize that many breaches result from inadvertent user actions that, while well intentioned, subject sensitive data to greatly increased privacy and security risks.


In addition to securing the hardware and software, we need to secure the user, empowering them with new visibility into the privacy and security risks of their actions, as well as actionable alternatives that achieve their goals while reducing or eliminating those risks.


What privacy and security challenges and risks are you seeing from healthcare IoT, and how are you planning to address these?



Healthcare workers are increasingly empowered with apps, devices, social media, wearables and the Internet of Things. Concurrent with this trend is the widespread adoption of BYOD (Bring Your Own Device) in healthcare, enabling healthcare workers to use personal devices to complete work tasks. These tools enable great improvements in patient care, but also bring new privacy and security risks.


Research shows that when usability is lacking, security is too cumbersome, or the IT department is too slow or overly restrictive, healthcare workers can and do use workarounds (procedures out of compliance with policy) to complete their work. Examples of workarounds include using personal devices with apps or websites, personal email, USB sticks, texting and so forth. This may be exacerbated where healthcare workers are increasingly under time and cost-reduction pressure.


Some of this risk can be mitigated with safeguards such as MDM (Mobile Device Management) and DLP (Data Loss Prevention). In a sense, these tools mitigate “black and white” risks, where user actions are clearly out of compliance with privacy and security policy and incidents such as breaches can be detected and prevented. However, for many user actions, compliance is harder to determine. One example is a healthcare worker using a personal BYOD smartphone to post an image to social media. On one hand, it could be an image of a patient, representing clear non-compliance. On the other, it may just be a non-sensitive personal picture the user took last weekend and is sharing with friends. Another example is an SMS text between healthcare workers that could be a patient update introducing risk, or could be benign, just setting up a lunch date. Many other examples exist of user actions that may or may not be in compliance with policy.
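The "black and white" versus "grey region" distinction can be sketched as a simple outbound-message check of the kind a DLP tool might apply. The patterns below (SSN-like and medical-record-number-like strings) are illustrative assumptions; real DLP products use far richer detectors and context analysis:

```python
import re

# Illustrative DLP-style check: flags outbound text that *may* contain PHI.
# "block" = clear non-compliance; "review" = grey region needing human context.

PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # SSN-like pattern
    re.compile(r"\bMRN[:\s]*\d{6,}\b", re.I),    # medical-record-number-like
]

def classify_message(text):
    """Return 'block', 'review', or 'allow' for an outbound message."""
    if any(p.search(text) for p in PHI_PATTERNS):
        return "block"                  # clearly out of compliance with policy
    if re.search(r"\bpatient\b", text, re.I):
        return "review"                 # grey region: compliance depends on context
    return "allow"

print(classify_message("Patient in bed 4, MRN: 1234567, needs labs"))
print(classify_message("Can you check on the patient later?"))
print(classify_message("Lunch at noon?"))
```

Notice that the middle message lands in "review": no pattern proves a violation, yet the word "patient" means a human (or much smarter tooling) must judge the context, which is exactly the grey region described above.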


In a sense, this is a “grey region” of the healthcare enterprise privacy and security risk spectrum, where compliance really depends on the context, and is difficult to establish technically. Note that in this type of risk the healthcare worker is typically not malicious, and actions that inadvertently add risk are intended to improve patient care. Given technical difficulty in establishing (non)compliance, administrative controls such as policy, training and audit and compliance are often used to mitigate this type of risk.


Unfortunately, training is very often the Achilles' heel in this approach: it is limited in effectiveness and typically takes the form of a once-a-year “scroll to the bottom and click accept” exercise that is more a regulatory compliance checkbox than something that empowers healthcare workers with the knowledge and skills to make better choices that achieve their goals while minimizing risk.


Further empowerment of healthcare workers with new apps, devices, wearables and Internet of Things promises great new benefits to patient care, while also exacerbating this growing inadvertent “grey region” of risk. To enable healthcare to embrace new technologies while minimizing privacy and security risk we must find better ways of providing healthcare workers timely training and choices that enable them to navigate hidden potholes of risk associated with new technologies.


What strategies is your organization using to tackle this challenge?



To fully realize the benefits of personalized medicine while avoiding negative impacts such as breaches, we must minimize the associated privacy and security risks. Personal information used to support personalized medicine, including a patient's genetic data, is considered sensitive information and is regulated in the US by the Genetic Information Non-Discrimination Act (GINA) and the HIPAA Privacy Rule. These regulations prevent abuse of this information, for example discrimination in employment or health coverage based on genetic information, or breaches.


A best practice in identifying and mitigating such risks is to follow the sensitive information through its lifecycle, identifying and assessing risks and implementing safeguards to mitigate them at each stage. In previous blogs we discussed the collection, use, retention, and disclosure stages. In this blog I'll focus on the disposal stage. This last stage is often overlooked in privacy and security risk assessments and can be the source of security incidents such as breaches. Several examples of breaches resulting from improper disposal of protected health information can be seen in the HHS list of Breaches Affecting 500 or More Individuals, by searching on “disposal.”


More examples can be found globally, for example in Britain: Buy A Computer On eBay, Find Sensitive Health-Care Records!, where computers containing sensitive patient health information (that was not properly disposed of) were sold on eBay. As we can see from this last reference, the impacts of such breaches can easily run into several hundreds of thousands of US dollars. In fact, the impact can even run into millions of dollars, as reflected by the Ponemon 2013 Cost of a Data Breach Study, which found that breaches in the US cost an average of US $5.4 million.


To minimize these kinds of risks, a best practice is to securely dispose of patient information used for personalized medicine when it is no longer required for the purpose to which the patient has consented and is outside of any regulatory, legal, or policy-imposed mandatory retention periods. Disposal could also be explicitly requested by a patient. In this case, the healthcare organization should inform the patient of the benefits of retaining their information, for example to ensure the completeness of their longitudinal patient record. However, in the event that the patient record must be securely disposed of, the last thing a healthcare covered entity or data controller wants is to have a breach further exacerbated by the breach's scope including patient information they should no longer have.


To accomplish secure disposal, all of the sensitive data for a given patient throughout the personalized medicine process needs to be securely disposed of. It is helpful to review some of the key data records created in the personalized medicine process.


This starts with the blood or saliva samples taken from patients, then the raw genetic data produced by sequencing the DNA in those samples. A variance file is then produced by comparing the raw genetic data with baseline genetic data, highlighting specific variations of the patient's genetics from the norm. Lastly, a risk factors report is produced from the variance file, identifying the patient's propensities to specific traits such as diseases, and pharmacogenetics: the efficacy or toxicity of specific medicines for the patient based on their genetics. We also need to consider any personal information in backups, archives, or offsite storage, for example to support business continuity/disaster recovery.


Any information shared with third parties (known as Business Associates in the US, or data processors in Europe) should also be securely disposed of. Disposal methods range from incinerating samples, to shredding paper records, to secure wipe of storage media, physical destruction of hardware devices, or encrypting the data and securely disposing of the key. In the case of backups and archives it may not be practical to delete a specific record. However, if the patient record is disposed of in the online tier 1 storage, then as backups and archives reach end of life within a set time period, for example after six months, the deletion of the patient record will effectively propagate to those backups and archives as well.
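The last disposal method mentioned, encrypting data and securely disposing of the key, is often called crypto-shredding. A minimal sketch of the idea follows; for self-containment it uses a toy SHA-256 keystream cipher, whereas a real system would use a vetted cipher such as AES-GCM from a proper crypto library, and the record contents are invented for illustration:

```python
import hashlib
import secrets

# Crypto-shredding sketch: store only ciphertext; "disposal" = destroying the key.
# Toy keystream cipher for illustration only -- use AES-GCM or similar in practice.

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (encrypts and decrypts)."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

record = b"variance file: patient 42, hypothetical variant data"
key = secrets.token_bytes(32)
ciphertext = keystream_xor(key, record)      # what the archive actually stores

assert keystream_xor(key, ciphertext) == record  # recoverable while the key exists
key = None  # disposal: destroy the key; archived ciphertext is now unreadable
```

The appeal for backups and archives is that the key lives in one managed place, so destroying it effectively disposes of every copy of the ciphertext at once, without touching each tape or offsite replica.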


There are several places a patient's personal information can hide, making this job even tougher. One example is caches, for example in web applications, proxies, performance caches and so forth. Another is the patient's personal health information exchanged with other healthcare organizations through health information exchanges. Fortunately, once exchanged through such HIEs, the patient information retained by another healthcare organization is subject to that organization's regulatory compliance.


Unfortunately for the patient, this may mean they need to go to the various independent entities holding their information and explicitly request disposal if their goal is deletion of their record more broadly than from a single healthcare organization. As healthcare workers are increasingly empowered with more devices, apps, online services, wearables and the Internet of Things, the risk of sensitive patient personal information being retained or transmitted in places or ways that it should not be increases considerably. Examples today can be seen in Workarounds in Healthcare, a Risky Trend, driven by healthcare workers' use of workarounds. DLP (Data Loss Prevention) can be an effective tool for discovering such personal information at rest or in transit, enabling a healthcare organization to securely dispose of it or move it somewhere more secure as needed.


Last, but not least, one should keep a good audit log of such disposal activities, to enable effective audit, compliance, and implementation of policy, as well as to demonstrate due diligence should you ever need to in the event of a breach.
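One simple way to make such a disposal log useful as due-diligence evidence is to hash-chain the entries, so a removed entry is detectable. The sketch below illustrates the idea; the field names and record identifiers are assumptions for illustration, not any particular compliance format:

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a tamper-evident disposal audit log: each entry records the hash
# of the previous entry, so silently removing an entry breaks the chain.

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_disposal_entry(log, record_id, method):
    entry = {
        "record_id": record_id,
        "method": method,  # e.g. "secure wipe", "key destruction", "shredding"
        "at": datetime.now(timezone.utc).isoformat(),
        "prev": log[-1]["hash"] if log else GENESIS,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def chain_intact(log):
    """Verify each entry links to its predecessor's hash."""
    prev = GENESIS
    for e in log:
        if e["prev"] != prev:
            return False
        prev = e["hash"]
    return True

log = []
append_disposal_entry(log, "patient-42-variance-file", "key destruction")
append_disposal_entry(log, "patient-42-raw-reads", "secure wipe")
print(chain_intact(log))
```

Verifying edits to an entry's contents would additionally require recomputing each entry's hash; the chain check above detects missing or reordered entries, which is usually the first question an auditor asks of a disposal record.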


What kinds of challenges are you seeing with securely disposing of health information used for personalized medicine?

Recent privacy storms around government surveillance, big data and analytics, social media and so forth have led many media publications to proclaim “privacy is dead.” To cope with these trends, as well as wearables, drones, the Internet of Things (IoT) and other technologies just around the corner, we need to move beyond a view of privacy in absolutes. If we truly had no privacy, and all of our personal information were available to anyone who wanted it, then we would be in much worse shape from a privacy standpoint than we currently are.


Research studies have shown that users are increasingly empowered with mobile devices, apps, and social media, and new trends around wearables and IoT are sure to compound this. This empowerment has enabled users to be productive in ways we couldn't imagine a decade ago. However, it has also provided a lot of rope with which users can, mostly inadvertently, hurt themselves and others from a privacy standpoint. This is evident in studies such as Workarounds in Healthcare, a Risky Trend, which shows that when usability is lacking in solutions or security, or IT departments get in the way, healthcare workers find workarounds that get the job done, unfortunately also adding non-compliance issues and additional privacy and security risk. This trend is particularly acute in healthcare, where personal information can be very sensitive and is heavily regulated, for example by HIPAA, and where health and wellness apps working with such information are proliferating at an amazing pace.


To cope with this increasing empowerment of users, and the fact that user behavior is a major and growing source of privacy risk, users need to make better decisions about how they engage with technologies. Consumers make purchasing decisions every day, evaluating the value of the purchase against the cost and deciding whether to buy. Viewing decisions about technology engagement through this metaphor, the purchase is the potential engagement with the technology, the value is the benefit of the engagement, and the cost is the privacy we give up by engaging, which depends on the personal information shared as part of the engagement.


In many technology engagements today, users pay little to no attention to the “privacy cost,” as evidenced by studies showing little attention paid to the permissions granted to apps being installed on mobile devices. To address this, we need to improve technologies that show end users the “privacy cost” of their decisions. Further, effective privacy and security awareness training for users is much needed. We can learn from the gaming industry, where gamers, including young children, learn highly complex games “on the go” without ever reading a manual.


Technologies such as app permission watchers, ad network detectors, site advisors, and endpoint DLP have started to shine a light on “privacy cost” and risks, and thereby influence users to make better decisions about where and how they engage, including what apps they use, what websites they visit, and what actions they perform on their devices.


Much work remains to be done here to help users make better decisions about what technologies they want to engage with, and how they want to engage including how they will configure and use the technologies to both achieve their goals, while minimizing the privacy cost and risk.


What questions do you have?
