
Healthcare workers are increasingly empowered with apps, devices, social media, wearables and the Internet of Things. Concurrent with this trend is the widespread adoption of BYOD (Bring Your Own Device) in healthcare, enabling healthcare workers to use personal devices to complete work tasks. These tools enable great improvements in patient care, but also bring new privacy and security risks.

 

Research shows that when usability is lacking, security is too cumbersome, or the IT department is too slow or overly restrictive, healthcare workers can and do resort to workarounds, that is, procedures out of compliance with policy, to complete their work. Examples of workarounds include using personal devices with apps or websites, personal email, USB sticks, texting and so forth. This tendency may be exacerbated as healthcare workers come under increasing time and cost-reduction pressure.

 

Some of this risk can be mitigated with safeguards such as MDM (Mobile Device Management) and DLP (Data Loss Prevention). In a sense, these tools mitigate “black and white” risks, where user actions are clearly out of compliance with privacy and security policy, and can detect and prevent incidents such as breaches. However, for many user actions compliance is harder to determine. One example is a healthcare worker using a personal BYOD smartphone to post an image to social media. On one hand, the image could be of a patient and represent clear non-compliance. On the other hand, it may just be a non-sensitive personal picture taken last weekend that the user is sharing with friends. Another example is an SMS text between healthcare workers that could be a patient update introducing risk, or could be benign and simply setting up a lunch date. Many other examples exist of actions users can take that may or may not be in compliance with policy.
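
To make the distinction concrete, the sketch below shows how a simple content-matching rule, roughly the kind of pattern a DLP tool might apply to outbound messages, handles these cases. It is purely illustrative Python with made-up patterns and messages, not a description of any particular product: the clear-cut violation is easy to flag automatically, while the “grey region” messages look no different from benign chatter.

    import re

    # Hypothetical patterns for obvious protected health information (PHI);
    # these are illustrative only and far simpler than real DLP rule sets.
    PHI_PATTERNS = [
        re.compile(r"\bMRN[:\s]*\d{6,}\b", re.IGNORECASE),  # medical record number
        re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # SSN-style identifier
        re.compile(r"\b(diagnosis|lab result|discharge)\b", re.IGNORECASE),
    ]

    def flag_message(text: str) -> bool:
        """Return True if the message matches an obvious PHI pattern."""
        return any(p.search(text) for p in PHI_PATTERNS)

    # A clear, "black and white" violation is easy to catch automatically...
    print(flag_message("Patient Smith, MRN: 1234567, discharge pending"))  # True

    # ...but grey-region texts look identical to benign chatter, so pattern
    # matching alone cannot establish whether they are compliant.
    print(flag_message("Can you check on the patient in room 12?"))  # False
    print(flag_message("Lunch at noon?"))                            # False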

 

In a sense, this is a “grey region” of the healthcare enterprise privacy and security risk spectrum, where compliance depends on context and is difficult to establish technically. Note that with this type of risk the healthcare worker is typically not malicious; actions that inadvertently add risk are intended to improve patient care. Given the technical difficulty of establishing (non)compliance, administrative controls such as policy, training, and audit and compliance programs are often used to mitigate this type of risk.

 

Unfortunately, training is very often the Achilles' heel of this approach. It is limited in effectiveness and typically takes the form of a once-a-year “scroll to the bottom and click accept” exercise, more of a regulatory compliance checkbox than something that empowers healthcare workers with the knowledge and skills to make better choices that achieve their goals while minimizing risk.

 

Further empowerment of healthcare workers with new apps, devices, wearables and the Internet of Things promises great new benefits to patient care, while also exacerbating this growing inadvertent “grey region” of risk. To enable healthcare to embrace new technologies while minimizing privacy and security risk, we must find better ways of providing healthcare workers with timely training and choices that enable them to navigate the hidden potholes of risk associated with new technologies.

 

What strategies is your organization using to tackle this challenge?

 

David Houlding, MSc, CISSP, CIPP, is a senior privacy researcher with Intel Labs and a frequent blog contributor.

 

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts
