
Intel Healthcare IT


A recent Reuters / Ipsos poll finds that 51 percent of Americans are spooked by Internet companies and their growing ambitions, “with a majority worried that Internet companies are encroaching too much upon their lives.”

 

Clearly, users are increasingly concerned about privacy, including healthcare workers and patients. This concern will likely be exacerbated by powerful new apps, health and wellness devices, medical devices, social media, wearables, and the Internet of Things.

 

However, many users don’t know what to do about it and feel the situation is hopeless, as can be seen in the widespread sentiment that “privacy is dead.” I assert that if privacy were dead, and all of our personal information were available to anyone who wanted it, including malicious individuals who would seek to abuse it, we would be much worse off than we currently are.

 

In a prior blog, I discussed Viewing Health IT Privacy as a Purchase Decision. From this standpoint, privacy is far from dead, and we are not “privacy broke,” but rather many users are currently spending too much of their privacy in online engagements, given the benefits they are receiving, and there is a growing need to help users find more “privacy cost effective” solutions to meet their needs.

 

In many forms of online engagement, whether apps or social media, there is a natural privacy “give” required for the benefit, or “get.” For example, to get live traffic information one must share one’s location history, since this is a critical input used to calculate the live traffic information. Similarly, a patient who wants health and wellness advice must be willing to share personal health and wellness information with apps and organizations that have the big data and analytics capabilities to collect, analyze, and derive knowledge from raw health and wellness data and present it back to the patient to support better choices.

 

However, in many online engagements there is an unnecessary privacy “give,” not required for the benefit the user is receiving. One example is a flashlight app with embedded ad network libraries tracking the user’s location and other personal information, which is clearly not required for the flashlight app to provide light, especially considering that many functionally equivalent flashlight apps do not require this unnecessary privacy “give.”

 

In many cases, there are simple actions users can take to achieve their goals while reducing or minimizing unnecessary privacy “give.” These include changing the configuration settings of their apps and devices (for example, opting out of data sharing), replacing privacy-intrusive apps with safer alternatives such as in the flashlight example above, changing the type of information they share in specific online engagements, or, in the worst case, uninstalling privacy-intrusive apps.
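As a rough illustration of how awareness tooling can surface an unnecessary privacy “give,” here is a minimal Python sketch that parses an Android app manifest and flags permissions a single-purpose flashlight app has no functional need for. The permission list and manifest path are illustrative assumptions, not a complete policy.

```python
import xml.etree.ElementTree as ET

# Permissions a single-purpose flashlight app has no functional need for
# (an illustrative, not exhaustive, list).
SUSPECT_PERMISSIONS = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.READ_CONTACTS",
    "android.permission.READ_SMS",
}

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def flag_unnecessary_permissions(manifest_path):
    """Return the suspect permissions requested in an AndroidManifest.xml."""
    tree = ET.parse(manifest_path)
    requested = {
        elem.get(ANDROID_NS + "name")
        for elem in tree.getroot().iter("uses-permission")
    }
    return sorted(requested & SUSPECT_PERMISSIONS)

if __name__ == "__main__":
    # Hypothetical manifest extracted from a flashlight app package.
    for perm in flag_unnecessary_permissions("AndroidManifest.xml"):
        print("Unnecessary privacy 'give':", perm)
```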

 

In many cases, users are unaware of the privacy “give” in online engagements. To really help users with privacy, beyond raising the alarm, and to help them actually improve their privacy posture, we need to first increase users’ awareness of their unnecessary privacy “give,” and then guide them to viable alternative actions that achieve their goals while significantly reducing or eliminating it. This is no easy task given the rapidly changing technology landscape, especially the exploding ecosystem of health and wellness apps and online services, but it is critical if we are to maintain users’ trust and confidence in the privacy safety of new technology, and their willingness to adopt and use it.

 

Are you concerned about your privacy online? What solutions do you see to address these concerns?

 

David Houlding, MSc, CISSP, CIPP is a senior privacy researcher with Intel Labs and a frequent blog contributor.

 

Find him on LinkedIn

Keep up with him on Twitter (@davidhoulding)

Check out his previous posts

Estimates project 1.9 billion IoT (Internet of Things) devices today, growing to 9 billion by 2018. Already, healthcare has made major strides into the Internet of Things with a myriad of healthcare-specific Internet-connected devices, or “things,” for managing health and wellness through vital signs.

 

For example, healthcare “things” can measure everything from patient activity to vital signs such as blood pressure and glucose levels. Connecting these “things” to the Internet enables the data to be analyzed, for example for diagnostics. This has the potential to radically transform healthcare, enabling better, faster diagnostics and personalized medicine.

 

Patient conditions can be detected proactively and early, personalized treatment provided, and patients allowed to return home for recovery faster with post treatment monitoring. Healthcare IoT is also poised to empower patients with their data, which historically has been locked inside healthcare organizations and difficult for patients to acquire. Clearly, potential benefits of healthcare IoT are great.

 

Security of IoT

Concurrently, privacy and security incidents such as breaches have reached alarming levels globally, both in frequency and impact. Privacy concerns have also been exacerbated in recent years by concerns over surveillance and privacy intrusions from online service providers such as social media platforms. Realizing the benefits of healthcare IoT sans the privacy and security incidents, and doing so in a way that preserves and builds patient trust, requires a proactive approach where privacy and security is built in by healthcare IoT device and service providers.

 

Many healthcare IoT service providers today stream sensitive patient data from the devices, securely over the Internet, to repositories they maintain for secure storage. These repositories enable analytics on the patient data, empowering patients with new insights, knowledge, and enabling them to make better informed decisions on their health and wellness. However, in a sense, these repositories are silos, storing the data from the specific healthcare IoT device and enabling analytics just on that data. Unfortunately for the patient, this data is not automatically available for co-mingling with other data from other healthcare IoT devices provided by other organizations. The result is a limitation in the analytics that can be done and benefits that can be delivered back to the patient.
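As a minimal sketch of this streaming pattern, the following Python snippet shows a device posting one vital-sign reading over TLS to a provider repository. The endpoint, token, and payload shape are hypothetical.

```python
import json
import urllib.request

def post_reading(endpoint, token, reading):
    """Send one vital-sign reading to the provider's repository over TLS."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(reading).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # device credential
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # HTTPS encrypts data in transit
        return resp.status

if __name__ == "__main__":
    # Endpoint, token, and payload shape are illustrative assumptions.
    status = post_reading(
        "https://iot.example-health.com/v1/readings",
        "device-token-123",
        {"device_id": "bp-cuff-42", "systolic": 118, "diastolic": 76},
    )
    print("Upload status:", status)
```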

 

Privacy through separation

Interestingly, one of the unintended benefits of siloing patient data across separate secure clouds maintained by different healthcare IoT service providers is that privacy and security risk is reduced through separation. If one of the providers is breached, there is a limit to the variety and quantity of sensitive healthcare data at risk. While the industry is currently in the phase of building out the healthcare IoT, proliferating devices and silos, proactive attention to privacy and security demands that we think ahead to the inevitable next phase.

 

This is where data from different healthcare IoT providers is brought together, further enabling greatly increased benefits, while also greatly increasing privacy and security risks. An intrusion of such an integrated repository of patient data could breach a much greater variety and quantity of sensitive data. Preventing cybercrime in healthcare requires a holistic approach where a combination of administrative, physical, and technical safeguards are used to mitigate privacy and security risks. With cybercriminals using increasingly sophisticated techniques for intrusions, technical controls need to protect the whole stack, from various layers of software right down to the hardware level. With patients and healthcare workers being increasingly empowered with more sensitive data, and tools such as smart devices, apps, social media, wearables and IoT, we need to recognize that many breaches occur from inadvertent user actions that while well intentioned, subject sensitive data to greatly increased privacy and security risks.

 

In addition to securing the hardware and software, we need to secure the user, also empowering them with new visibility into privacy and security risks of their actions, as well as actionable alternatives available to them that both achieve their goals while reducing or eliminating risks.

 

What privacy and security challenges and risks are you seeing from healthcare IoT, and how are you planning to address these?

 


Healthcare workers are increasingly empowered with apps, devices, social media, wearables and the Internet of Things. Concurrent with this trend is the widespread adoption of BYOD (Bring Your Own Device) in healthcare, enabling healthcare workers to use personal devices to complete work tasks. These tools enable great improvements in patient care, but also bring new privacy and security risks.

 

Research shows that when usability is lacking, security too cumbersome, or the IT department too slow or overly restrictive in healthcare organizations, healthcare workers can and do use workarounds, or procedures out of compliance with policy, to complete their work. Examples of workarounds include using personal devices with apps or websites, personal email, USB sticks, texting and so forth. This may be exacerbated where healthcare workers are increasingly under time and cost reduction pressure.

 

Some of this risk can be mitigated with safeguards such as MDM (Mobile Device Management) and DLP (Data Loss Prevention). In a sense, these tools mitigate “black and white” risks, where user actions are clearly out of compliance with privacy and security policy, and can detect and prevent incidents such as breaches. However, with many user actions, compliance is harder to determine. An example is a healthcare worker using a personal BYOD smartphone to post an image to social media. On one hand, this could be an image of a patient and represent a clear non-compliance. On the other hand, it may just be a non-sensitive personal picture the user took last weekend that they are sharing with friends. Another example is an SMS text between healthcare workers that could be a patient update introducing risk, or could be benign, just setting up a lunch date. Many other examples exist of actions users can take that may or may not be in compliance with policy.

 

In a sense, this is a “grey region” of the healthcare enterprise privacy and security risk spectrum, where compliance really depends on the context, and is difficult to establish technically. Note that in this type of risk the healthcare worker is typically not malicious, and actions that inadvertently add risk are intended to improve patient care. Given technical difficulty in establishing (non)compliance, administrative controls such as policy, training and audit and compliance are often used to mitigate this type of risk.
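One technical aid in this grey region is content inspection that estimates, rather than decides, whether an outbound message contains PHI, flagging it for review instead of blocking outright. A minimal sketch, with purely illustrative patterns (real DLP engines use far richer dictionaries, fingerprints, and context):

```python
import re

# Illustrative indicators of possible PHI in outbound text.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def phi_indicators(text):
    """Return which PHI-like patterns appear in an outbound message."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

if __name__ == "__main__":
    benign = "Lunch at noon on Friday?"
    risky = "Pt update: MRN 12345678, DOB 04/12/1960, BP trending up."
    print(phi_indicators(benign))  # [] -> likely benign
    print(phi_indicators(risky))   # ['mrn', 'dob'] -> hold for review
```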

 

Unfortunately, training is very often the Achilles' heel in this approach. It is very limited in effectiveness, typically taking the form of a “once a year, scroll to the bottom and click accept” exercise that is more of a regulatory compliance checkbox activity than something that empowers healthcare workers with the right knowledge and skills to make better choices that both achieve their goals and minimize risk.

 

Further empowerment of healthcare workers with new apps, devices, wearables and Internet of Things promises great new benefits to patient care, while also exacerbating this growing inadvertent “grey region” of risk. To enable healthcare to embrace new technologies while minimizing privacy and security risk we must find better ways of providing healthcare workers timely training and choices that enable them to navigate hidden potholes of risk associated with new technologies.

 

What strategies is your organization using to tackle this challenge?

 


To fully realize the benefits of personalized medicine while avoiding negative impacts such as breaches, we must minimize the associated privacy and security risks. Personal information used to support personalized medicine, including a patient's genetic data, is considered sensitive information and is regulated in the US by the Genetic Information Non-Discrimination Act (GINA) and the HIPAA Privacy Rule. These regulations guard against abuse of this information, for example discrimination in employment or health coverage based on genetic information, as well as breaches.

 

A best practice in identifying and mitigating such risks is to follow the sensitive information through its lifecycle, identifying and assessing risks, and implementing safeguards at each stage. In previous blogs we discussed the collection, use, retention, and disclosure stages. In this blog I’ll focus on the disposal stage. This last stage is often overlooked in privacy and security risk assessments, and can be the source of security incidents such as breaches. Several examples of breaches resulting from improper disposal of protected health information can be found on the HHS list of Breaches Affecting 500 or More Individuals by searching on “disposal.”

 

More examples can be found globally, for example in Britain: Buy A Computer On eBay, Find Sensitive Health-Care Records!, where computers containing sensitive patient health information (that was not properly disposed of) were sold on eBay. As we can see from this last reference, impacts of such breaches can easily run into several hundreds of thousands of US dollars. In fact, the impact can even run into millions of dollars, as reflected by the Ponemon 2013 Cost of a Data Breach Study, which found that breaches in the US cost on average US $5.4 million.

 

To minimize these kinds of risks, a best practice is to securely dispose of patient information used for personalized medicine when it is no longer required for the purpose to which the patient has consented, and is outside of any regulatory, legal, or policy-imposed mandatory retention periods. Disposal could also be explicitly requested by a patient. In this case the healthcare organization should inform the patient of the benefits of retaining their information, for example to ensure the completeness of their longitudinal patient record. However, in the event that the patient record must be securely disposed of, the last thing a healthcare covered entity or data controller wants is to have a breach further exacerbated by its scope including patient information they should no longer have.

 

To accomplish secure disposal, all of the sensitive data for a given patient, throughout the personalized medicine process, needs to be securely disposed of. It is helpful to review some of the key data records created in the personalized medicine process.

 

This starts with blood or saliva samples taken from patients, and then the raw genetic data produced from sequencing the DNA in these samples. A variance file is then produced by comparing the raw genetic data with baseline genetic data, highlighting specific variations of the patient's genetics from the norm. Lastly, a risk factors report is produced from the variance file that identifies patient propensities to specific traits such as diseases, and pharmacogenetics, or the efficacy or toxicity of specific medicines for the patient based on their genetics. We also need to consider any personal information in backups, archives, or offsite storage, for example to support business continuity and disaster recovery.

 

Any information shared with third parties, known as Business Associates in the US or data processors in Europe, should also be securely disposed of. Disposal methods range from incinerating samples, to shredding paper records, to secure wiping of storage media, physical destruction of hardware devices, or encrypting the data and securely disposing of the key. In the case of backups and archives it may not be practical to delete a specific record. However, if the patient record is disposed of in the online tier 1 storage, then as backups and archives reach end of life within a set time period, for example after 6 months, the deletion effectively propagates to those backups and archives as well.
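The “encrypt and securely dispose of the key” approach, sometimes called crypto-shredding, is especially useful for backups and archives. A minimal Python sketch, assuming a hypothetical per-patient key store:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical key store: one symmetric key per patient record.
key_store = {}

def encrypt_record(patient_id, plaintext):
    key = Fernet.generate_key()
    key_store[patient_id] = key
    return Fernet(key).encrypt(plaintext)

def dispose_record(patient_id):
    """Crypto-shred: deleting the key renders all copies of the
    ciphertext (including backups and archives) unreadable."""
    del key_store[patient_id]

if __name__ == "__main__":
    blob = encrypt_record("patient-001", b"variance file contents")
    dispose_record("patient-001")
    # Ciphertext may still exist on backup media, but without the key
    # it is effectively disposed of.
    print("Key destroyed; ciphertext bytes remaining:", len(blob))
```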

 

There are several places a patient's personal information can hide, making this job even tougher. One example is caches, for example in web applications, proxies, and performance caches. Another is the patient's personal health information exchanged with other healthcare organizations through health information exchanges. Fortunately, once exchanged through such HIEs, the patient information retained by another healthcare organization is subject to that organization's regulatory compliance.

 

Unfortunately for the patient, this may mean that they need to go to the various independent entities holding their information and explicitly request disposal if their goal is deletion of their record more broadly than from a single healthcare organization. As healthcare workers are increasingly empowered with more devices, apps, online services, wearables, and the Internet of Things, the risk of sensitive patient personal information being retained or transmitted in places or ways that it should not be increases considerably. Examples today can be seen in Workarounds in Healthcare, a Risky Trend, driven by healthcare workers' use of workarounds. DLP (Data Loss Prevention) can be an effective tool in discovering such personal information at rest or in transit, enabling a healthcare organization to securely dispose of it or move it somewhere more secure as needed.

 

Last, but not least, one should keep a good audit log of such disposal activities, to enable effective audit, compliance, and implementation of policy, as well as to demonstrate due diligence in the event of a breach.
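As a minimal sketch of such a disposal audit log, the following Python snippet appends hash-chained entries so that tampering is detectable at audit time; the event fields are illustrative.

```python
import hashlib
import json
import time

def append_disposal_event(log_path, event):
    """Append a disposal event, chaining each entry to the previous
    entry's hash so tampering is detectable at audit time."""
    prev_hash = "0" * 64
    try:
        with open(log_path) as f:
            for line in f:
                prev_hash = json.loads(line)["hash"]
    except FileNotFoundError:
        pass  # first entry in a new log
    entry = {"ts": time.time(), "event": event, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Illustrative usage: record that a patient's record was securely wiped.
append_disposal_event("disposal_audit.log",
                      {"patient": "token-8f3a", "method": "secure-wipe"})
```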

 

What kinds of challenges are you seeing with securely disposing of health information used for personalized medicine?

Recent privacy storms around government surveillance, big data and analytics, social media, and so forth have led many media publications to proclaim “privacy is dead.” To cope with these trends, as well as wearables, drones, the Internet of Things (IoT), and other technologies just around the corner, we need to move beyond a view of privacy in absolutes. If we truly had no privacy, and all of our personal information were available to anyone who wanted it, then we would be in much worse shape from a privacy standpoint than we currently are.

 

Research studies have shown that users are increasingly empowered with mobile devices, apps, and social media, and new trends around wearables and IoT are sure to compound this. This empowerment has enabled users to be productive in ways we couldn’t imagine a decade ago. However, it has also provided a lot of rope with which users can, mostly inadvertently, hurt themselves and others from a privacy standpoint. This is evident in studies such as Workarounds in Healthcare, a Risky Trend, which shows that when usability is lacking in solutions or security, or IT departments get in the way of healthcare workers, they find alternative workarounds that get the job done, unfortunately also adding non-compliance issues and additional privacy and security risk. This trend is particularly acute in healthcare, where personal information can be very sensitive and is heavily regulated, for example by HIPAA, and where healthcare and wellness apps working with such information are proliferating at an amazing pace.

 

To cope with this increasing empowerment of users, and the fact that user behavior is a major and growing source of privacy risk, users need to make better decisions regarding how they engage with technologies. Consumers make purchasing decisions every day, where they evaluate the value of the purchase against the cost and decide whether to buy. Viewing decisions on whether to engage with technologies through this metaphor, we can think of the purchase as the potential engagement in technology, the value as the benefit of the engagement, and the cost as the privacy we give up by engaging, which depends on the personal information that will be shared as part of the engagement.

 

In many technology engagements today users pay little to no attention to the “privacy cost” as evidenced by studies that show little attention to permissions granted to apps being installed on mobile devices. To address this we need to improve technologies that show end users the “privacy cost” of their decisions. Further, effective privacy and security awareness training for users is much needed. We can learn from the gaming industry where gamers, including young children, learn highly complex games “on the go” without ever reading a manual.

 

Technologies such as app permission watchers, ad network detectors, site advisors, and endpoint DLP have started to shine a light on “privacy cost” and risks, and thereby influence users to make better decisions regarding where and how they engage, including what apps they use, what websites they visit, and what actions they perform on their devices.

 

Much work remains to be done here to help users make better decisions about what technologies they want to engage with, and how they want to engage including how they will configure and use the technologies to both achieve their goals, while minimizing the privacy cost and risk.

 

What questions do you have?

Many of the benefits of personalized medicine depend on sharing genetic and other healthcare information. For example, deriving meaning out of healthcare data, and in particular genetic data, requires sharing sensitive information for research, often conducted by third-party organizations separate from the covered entity or other organization that originally collects the genetic data. Collaborative care for patients, involving a primary care physician as well as multiple specialists, requires sharing sensitive healthcare information. Healthcare organizations may also be motivated to derive revenue from massive databases of such information by de-identifying or anonymizing it and then sharing it, in compliance with applicable healthcare regulations and data protection laws such as the HIPAA Privacy Rule.

 

Healthcare breaches have reached alarming levels, both in frequency, as evidenced by the HHS Breaches Affecting 500 or More Individuals, and in business impact, as evidenced by the Ponemon 2013 research on the Cost of a Data Breach, which shows an average total cost per breach event of $5.4 million in the U.S. in 2012. Many of these breaches occur with healthcare data in transit, or where healthcare data is shared with third parties, often known as Data Processors in the EU or Business Associates in the U.S. These business impacts have “naturally selected” a proactive approach (in contrast to a “wait and see” approach) as the only practical approach to privacy and security for safely sharing sensitive healthcare data.

 

Best practices in a proactive approach include holistic security, which involves applying administrative, physical, and technical safeguards, as well as multi-layered security, also known as defense in depth, where multiple security controls are applied together in layers to progressively minimize risk. Administrative controls in such an approach include a privacy notice that makes patients fully aware of the benefits and risks, including specifically what sensitive healthcare information is collected and how it will be used, retained, shared, and disposed of. This enables patients to make informed choices such as opt-in or opt-out, and to provide their consent. Another key risk mitigation in this approach is the minimization of sensitive information based on the type(s) of processing stated in the privacy notice. Minimization means the healthcare organization collecting the sensitive healthcare information for personalized medicine reduces the Personally Identifiable Information (PII) in this information before sharing it with a third party. For example, if a research use case doesn’t require any PII, the healthcare organization should fully de-identify the genetic and healthcare information, removing PII, before sharing it with that third party.
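A minimal Python sketch of this minimization idea follows; the identifier list is an illustrative subset of the HIPAA Safe Harbor categories, not the full set of 18.

```python
# A subset of the HIPAA Safe Harbor identifier categories, for illustration.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "mrn", "date_of_birth", "full_zip",
}

def minimize_for_research(record, allowed_fields):
    """Share only fields the stated research purpose requires,
    dropping direct identifiers regardless."""
    return {
        k: v for k, v in record.items()
        if k in allowed_fields and k not in DIRECT_IDENTIFIERS
    }

record = {
    "name": "Jane Doe", "mrn": "12345678", "full_zip": "97124",
    "variant_calls": ["rs429358:CT"], "age_band": "60-69",
}
# This hypothetical study needs genotype and coarse demographics only;
# "name" is dropped even though it was requested.
print(minimize_for_research(record, {"variant_calls", "age_band", "name"}))
# -> {'variant_calls': ['rs429358:CT'], 'age_band': '60-69'}
```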

 

HIPAA, for example, provides guidance on the specific PII elements that must be removed for Safe Harbor de-identification. Such de-identified information has low, although not zero, risk of re-identification of the patient. Further, some use cases require either full or partial PII. Therefore it is highly recommended to supplement de-identification with other safeguards, including tokenization, where any residual PII is stored separately in a secure database with access controls ensuring only authorized access. Encryption is a key safeguard to protect the confidentiality of information being shared or in transit. Hardware-assisted security, such as encryption acceleration, enables encryption of large genetic data sets in transit with high performance.
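Tokenization can be sketched as follows; the in-memory “vault” stands in for a secure, access-controlled database and is purely illustrative.

```python
import uuid

# Hypothetical secure vault: the only place token -> PII mappings live,
# protected by access controls in a real deployment.
pii_vault = {}

def tokenize(record, pii_fields):
    """Replace residual PII with an opaque token; the de-identified
    record can be shared while the vault stays inside the covered entity."""
    token = str(uuid.uuid4())
    pii_vault[token] = {f: record.pop(f) for f in pii_fields if f in record}
    record["token"] = token
    return record

def detokenize(token):
    """Authorized re-association of PII, e.g., for phenotype research."""
    return pii_vault[token]

shared = tokenize({"zip3": "971", "variants": ["rs7412:CC"],
                   "name": "Jane Doe"}, pii_fields=["name"])
print(shared)                       # safe to share: token instead of name
print(detokenize(shared["token"]))  # access-controlled lookup
```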

 

Last, but not least, appropriate access controls should be used to ensure only authorized access to sensitive healthcare information that is shared, with appropriate auditing to ensure compliance with policies. The healthcare organization collecting the sensitive healthcare information should also vet the privacy and security of the target organization(s) for sharing, to ensure adequate protection. Contractual controls are needed between the healthcare organization sourcing the sensitive healthcare information and the target third party with which information will be shared. Key elements of these contractual controls include service level agreements (SLAs), business associate agreements (BAAs) and security incident response plans (SIRPs), in particular to outline procedures for timely response and collaboration in the event of security incidents such as breaches.

 

What kinds of strategies are you using to safely share sensitive data for personalized medicine?

Personalized medicine promises compelling benefits in improving the quality and reducing the cost of healthcare. Personalized medicine is enabled by powerful new types of sensitive data including genetic information about patients. To ensure these benefits are realized quickly, effectively and smoothly it is desirable to avoid security incidents such as breaches. In prior blogs I discussed how to manage privacy and security risks, and securely collect and use data for personalized medicine. In this blog I focus on how to retain data for personalized medicine.

 

When looking at retention it is useful to consider the types and characteristics of the data used in personalized medicine. The data powering personalized medicine ranges from the original blood or saliva samples used to obtain genetic information for a patient, to the raw genomic data for a human, which is approximately 3.2 GB in size, as well as various other types of derived data. One of the key steps in deriving meaning out of the raw genomic data involves comparing it to baseline genomic data to derive a variance file that is much smaller in size, highlighting only the interesting variations in the genomic data of the specific patient. The data points in the variance file are referred to as SNPs (single nucleotide polymorphisms). Lastly, a risk factors report can be produced from this variance file, highlighting the patient's propensity to various traits such as diseases. This report may also highlight pharmacogenomics, specifying the efficacy or toxicity of various drugs for the patient. The risk factors report is often included in the patient's EHR.
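As a toy illustration of the variance-file idea, the following Python sketch records only the positions where a patient sequence differs from a baseline; real variant calling works on aligned sequencing reads, not raw strings.

```python
def variance_file(baseline, patient):
    """Record only positions where the patient sequence differs from
    the baseline -- a toy version of variant calling."""
    return [
        {"pos": i, "ref": b, "alt": p}
        for i, (b, p) in enumerate(zip(baseline, patient))
        if b != p
    ]

baseline = "ACGTACGTACGT"
patient  = "ACGTTCGTACAT"
print(variance_file(baseline, patient))
# -> [{'pos': 4, 'ref': 'A', 'alt': 'T'}, {'pos': 10, 'ref': 'G', 'alt': 'A'}]
```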

 

Genetic data are considered PHI and subject to federal regulations such as HIPAA and the HITECH Act, as well as state-level breach notification regulations such as CA SB 1386, and are thus subject to privacy, security, and breach notification rules. The 2013 Cost of a Data Breach Study estimates the average total cost of a data breach in the US in 2012 at $5.4M, clearly a major business impact. Avoiding such incidents requires a proactive approach to privacy and security.

 

Location of data retained has a direct impact on regulations and data protection laws that apply. This includes not only the primary backend servers, but also Business Continuity / Disaster Recovery sites, backup sites and any business associates or data processors that may also retain sensitive data. Recent studies and incidents point to the risk of BYOC (Bring Your Own Cloud). To ensure sensitive data for personalized medicine stays in the cloud where it is supposed to be, under the control of the healthcare organization with effective privacy and security controls, it is necessary to ensure solutions are usable, security is not cumbersome, and IT within the healthcare organization is responsive and not overly restrictive.

 

De-identification is a key safeguard often applied to enable research and mitigate risk of security incidents such as breaches. Various methods exist for de-identification. This can involve removing specific elements of PII, such as in the HIPAA Safe Harbor method. Alternatively a risk based method such as the HIPAA Statistical Method may be used. De-identified data often has some small risk of re-identification, and research has shown that it is possible to re-identify patients using de-identified genetic information. Further, some types of research require some elements of PII, for example phenotype research may require zip code. A practical approach to effectively mitigating risk of sensitive data retained for personalized medicine requires a holistic approach where administrative, physical and technical controls are applied in combination, together with a multi-layered approach where for example de-identification is combined with tokenization, access controls, encryption and so forth.

 

To ensure solutions are usable, security must not be cumbersome; otherwise, research shows that non-compliance, BYOC, and other risks can increase. Hardware-assisted security such as encryption acceleration enables technical security controls to be implemented with improved performance, robustness to increasingly sophisticated malware, improved usability, and reduced cost. Performance testing shows that such an approach can be very effective in enabling sensitive data to be retained in a highly secure manner with minimal performance and usability impact.

 

What kinds of strategies are you using to protect sensitive data for personalized medicine?

In my last blog, How to Securely Collect Data for Personalized Medicine, I discussed risks and safeguards for how to collect data for personalized medicine. The next step in the information lifecycle after collection is use, and I’ll focus on privacy and security concerns, risks and solutions in the use of sensitive data for personalized medicine.

 

During the collection phase a blood / saliva sample is typically acquired from the patient. Sample(s) are then sequenced to create the raw genome sequence data.

 

The raw genome sequence data for the patient is then compared to a baseline raw genome data set to create a variance file, a data set with points of interest where the patient's raw genome deviates in interesting ways from the baseline. The raw genomic sequence data set can be very large, often more than 3 GB in size, and genomic databases can contain tens or hundreds of thousands of raw genomic data sets. Maintaining security with such large data sets requires special attention to performance. Examples include hardware-accelerated encryption, for example with Intel® Advanced Encryption Standard – New Instructions (AES-NI). Such hardware acceleration can be used in the high performance encryption of databases such as InterSystems Caché.
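As a minimal sketch of authenticated encryption for such data, the following Python snippet uses AES-256-GCM from the cryptography package; its OpenSSL backend typically uses AES-NI automatically on CPUs that support it.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def encrypt_genomic_blob(data):
    """AES-256-GCM encryption; the underlying OpenSSL implementation
    generally uses AES-NI automatically where the CPU supports it."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce, unique per encryption
    ciphertext = AESGCM(key).encrypt(nonce, data, None)
    return key, nonce, ciphertext

key, nonce, ct = encrypt_genomic_blob(b"chr1\t12345\tA\tT\n" * 1000)
print("Ciphertext bytes:", len(ct))
# Integrity comes free with GCM: AESGCM(key).decrypt(nonce, ct, None)
# raises InvalidTag if the ciphertext was modified.
```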

 

The variance file may then be annotated to attach meaning to the points of interest where they have been correlated with known conditions or traits, perhaps an increased propensity for a specific disease, or for pharmacogenomics where a specific point of interest in the variance file is associated with increased efficacy or toxicity of a given medicine.

 

Lastly, a risk factors report is produced from the annotated variance file and may be used by the healthcare professional to deliver personalized medicine.

The risk factors report may then be attached to the electronic health record (EHR) for the patient.

 

Clearly there are several data sets involved in the use of sensitive data in personalized medicine, from the raw genomic sequence data, to the variance file, risk factors report, and patient EHR, and these need to be protected for confidentiality, integrity, and availability.

 

Healthcare organizations using genetic information must constrain their use of this data to usage(s) specified in the privacy notice given to the patient prior to the patient granting consent to use their genetic data.

 

On the regulatory front, the Genetic Information Non-discrimination Act (GINA) prohibits the use of genetic information from any of these data sets by group health plans and health insurers for the purpose of denying coverage to a healthy individual or charging that patient higher premiums based solely on a genetic predisposition to developing a disease in the future. Genetic information is also considered Protected Health Information (PHI) and an organization using genetic information may be subject to the Health Insurance Portability and Accountability Act (HIPAA).

 

For healthcare organizations using genetic information in the United States, the Health Information Technology for Economic and Clinical Health (HITECH) Act requires organizations subject to HIPAA to report data breaches affecting 500 or more individuals to Health and Human Services (HHS) and the media, in addition to notifying the affected individuals. Many states also have breach notification laws, for example California SB 1386, requiring notification of affected individuals in the event of a breach of their sensitive information, which would include PHI such as genetic information that could be associated with them (i.e., that was not de-identified).

 

Recently, the HIPAA Omnibus Rule became effective, including further changes to when healthcare organizations must report breaches, together with new requirements for Business Associates to comply with HIPAA Security and HITECH Act breach notification rules, holding them directly accountable for doing so. Business associates may include data processors that use genetic information in providing services to healthcare organizations. Disclaimer: this is publicly available information and not a legal summary or advice about regulations.

 

Personalized medicine use of sensitive data may also involve sensitive Intellectual Property (IP), especially in algorithms and knowledge bases used to analyze and assign meaning to genomic data. This IP must also be protected.

 

What types of privacy and security challenges and solutions do you see with the use of sensitive data for personalized medicine?

In my last blog, I discussed the rationale for applying privacy and security best practices to enable the benefits of personalized medicine while minimizing risks of breaches and other types of security incidents. One of these best practices involves walking through each step of the information lifecycle, from collection, to use, retention, disclosure and disposal. In this blog I take a look at the collection stage of the information lifecycle.

 

Collecting information for personalized medicine requires informed patient consent. Patients must be informed about the benefits, risks, who will have access to their data, how their data will be processed, and choices they have regarding their personal healthcare information. This includes both physical samples collected, such as saliva and blood samples, as well as the raw genome sequence data.

 

Research is needed to further the science of analyzing and deriving meaning from genetic information, and this research needs genetic data. Patients are typically presented with a choice of whether to participate in this type of research, and whether they want to authorize sharing of their genetic data, most often in de-identified form, with such researchers. Choices presented to the patient are typically either opt-in or opt-out.

 

Opt-in means the patient's data will not be shared with researchers by default, unless they explicitly opt into sharing their data. Alternatively, the patient may be presented with an opt-out choice, where the default is for their data to be shared with researchers unless they explicitly opt out. These basic “all or nothing” opt-in and opt-out choices are often overly simplistic and don’t give the patient much control over their data. More sophisticated consent and choice mechanisms are required in the future for the patient to have greater control over who should have access to their data, for what purposes, and how they can get access and participate.
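A toy Python sketch of the difference between these two defaults; the consent model here is an illustrative simplification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResearchConsent:
    """Toy consent record: under opt-in the default is no sharing;
    under opt-out the default is sharing, until the patient decides."""
    model: str                              # "opt-in" or "opt-out"
    explicit_choice: Optional[bool] = None  # patient's recorded decision, if any

    def may_share(self) -> bool:
        if self.explicit_choice is not None:
            return self.explicit_choice    # the patient's choice always wins
        return self.model == "opt-out"     # otherwise the default follows the model

print(ResearchConsent("opt-in").may_share())                          # False
print(ResearchConsent("opt-out").may_share())                         # True
print(ResearchConsent("opt-out", explicit_choice=False).may_share())  # False
```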

 

Some types of genetic research require more than fully de-identified data. An example is phenotype research, which requires information about the patient's environment, for example their zip code. This is location information about the patient and therefore Personally Identifiable Information (PII) which, when associated with healthcare information such as genetic information, can cause the combination to be classified as Protected Health Information (PHI) under HIPAA and subject to legal and regulatory requirements, for example breach notification in the event of loss or theft.

 

For this reason, tokenization is often used in the collection of genetic information. From the time physical saliva or blood samples are taken, they are often bar-coded to associate them with the patient, in contrast to labeling the samples with elements of the patient's PII such as name or date of birth. Tokenization may also be used later to enable authorized access to limited PII, in addition to de-identified genetic data, in order to support more sophisticated research such as phenotype research.

 

Encryption may be used to protect the confidentiality of collected sensitive data at rest and in transit, including elements of PII stored in secure databases. Genetic data can take the form of very large data sets. For example, a single raw genome sequence can be several hundred GB or larger in size. Encrypting a volume of data such as this while maintaining performance requires hardware acceleration, such as Intel® AES-NI (Advanced Encryption Standard – New Instructions).

 

What types of privacy and security challenges and solutions do you see with the collection of data for personalized medicine?

Personalized medicine, or tailoring medicine to individuals based on genetic and other information, promises major benefits to improve the quality of healthcare. This key trend is also sure to accelerate in the next few years to a major change driver as DNA sequencing becomes more affordable and algorithms to derive meaning from this data become more powerful. Many new types of sensitive data and intellectual property are used through the personalized medicine information lifecycle from collection, to use, retention, disclosure and disposal.

 

HIPAA, HITECH Act, GINA, and state level regulations such as CA SB 1386 regarding healthcare / genetic information and breach notification present a complex legal and regulatory compliance landscape. Privacy and security concerns about regulatory compliance, breaches and theft of IP abound, and often impede realization of the full benefits of personalized medicine. Advancing the science of personalized medicine requires vast databases of sensitive healthcare and genetic information, and access for research.

 

De-identification, for example based on the HIPAA 18 identifiers commonly found in protected health information, is often applied to enable research and help mitigate privacy and security concerns and risks. However, there have been several successful high profile re-identification attempts that have correlated de-identified data with the correct patients.

 

Clearly, even with de-identification, there is residual risk. Compounding this, genetic information is far from fully understood, and the genetic “dark regions” we don’t yet fully understand may well hold information that increases re-identification risks.

 

In my next few blogs, I’ll apply best practices in healthcare privacy and security, taking an objective approach to assess risks and applying safeguards in a multi-layered approach to effectively reduce residual risk to acceptable levels. I’ll look at the various types of sensitive data used through the personalized medicine information lifecycle, from collection, to use, retention, disclosure, and disposal, assessing risks to the confidentiality, integrity, and availability of the data.

 

I’ll also look at recent healthcare security research underscoring the importance of usability of solutions and security, how a lack of usability can adversely impact compliance and risk, and practical strategies to implement strong and usable security. Hardware based security is enabling stronger and more usable security controls that can be used as part of a holistic multi-layered approach to effectively mitigate risks in personalized medicine, enabling benefits to be fully realized sans privacy and security incidents such as breaches.

 

What approach are you using to manage privacy and security risks and enable personalized medicine in your organization?

Healthcare IT is moving away from the top-down, “command and control” model of 10 years ago, when IT provisioned all devices and the mobile device environment was more homogeneous, strongly managed, and secured, toward a much more diverse, heterogeneous environment including BYOD, often with less manageability and security. In this new, rapidly changing environment, a strong and effective detection and response capability becomes much more important. We can compare this environment and security model to an immune system: when a pathogen appears it is detected by the body, and an immune response starts to eliminate the pathogen and put out antibodies to prevent a future recurrence.

 

In this analogy a pathogen in healthcare IT security could be a new type of malware or phishing attack, or some risky healthcare worker action such as attempting to copy unencrypted patient records onto a USB key, or attempting on impulse a post of sensitive healthcare data to social media. SIEM, DLP and global threat intelligence capabilities are just a few great examples of security detection controls. An effective immune response in healthcare IT security needs to be holistic and multi-layered in the sense of incorporating several administrative, physical and technical controls complementing each other for effective risk mitigation. Administrative controls may include updates to policy, risk assessments, effective training, audit and compliance, and security incident management controls. Physical controls may include locks and other physical access and tamper proofing controls for data, assets and facilities. Technical controls may include anti-malware, IPS, whitelisting, encryption, anti-theft and many others.
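As a toy example of a detection rule in this “immune response” spirit, the following Python sketch watches an event stream and alerts on unencrypted PHI copied to removable media; the event schema is illustrative, not any particular SIEM's.

```python
# Toy detection rule: alert on unencrypted PHI copied to removable media.
# Event shapes are illustrative assumptions, not a real SIEM schema.

def detect(events):
    alerts = []
    for e in events:
        if (e["action"] == "file_copy"
                and e["destination"] == "usb"
                and e.get("classification") == "phi"
                and not e.get("encrypted", False)):
            alerts.append(f"ALERT: unencrypted PHI to USB by {e['user']}")
    return alerts

events = [
    {"action": "file_copy", "destination": "usb", "user": "nurse7",
     "classification": "phi", "encrypted": False},
    {"action": "file_copy", "destination": "usb", "user": "admin2",
     "classification": "public", "encrypted": False},
]
print(detect(events))  # one alert, for nurse7
```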

 

Of this mix of safeguards, and with key healthcare trends such as BYOD, social media, mobile healthcare, and others increasingly empowering healthcare workers with more tools and options to get their work done, the human factor and effective training are becoming incredibly important. Recent HIMSS research shows that if solutions or security lack usability, healthcare workers use these tools and options to get their job done via workarounds that add non-compliance issues and additional risk.

 

Compounding this challenge, recent HHS OCR audit findings show that many healthcare organizations lack effective training. To be effective, training must move beyond the “once a year, scroll to the bottom and click accept” model to a much more continuous, bite-sized, gamified, engaging form, and enable the healthcare worker to apply and solidify their knowledge as part of their daily job. Penetration testing needs to include the human factor to help detect vulnerabilities in end user behavior that can then be remedied. Some innovators, such as Wombat Security Technologies, have emerged with capabilities in this area. Security safeguards such as DLP also offer special value in helping educate healthcare workers on the job in “teachable moments”: at the point where a worker attempts an action that is out of compliance with policy, the DLP control can inform them and educate them on safer alternatives.

 

What kinds of trends and risks, and detection and response safeguards, are you seeing in your healthcare organization?

When security technologies are introduced together with usability improvements in healthcare solutions they have a much greater chance of being approved and winning acceptance by healthcare workers. This is in contrast to introducing security technologies into healthcare organizations without usability improvements which at best have no usability impact, and may in fact have negative usability impact.

 

In my last blog, Improving Healthcare Solution Usability with Single Sign-On, I described how too many layers of login is one of the most cumbersome usability challenges that compels healthcare workers to do risky workarounds out of compliance with privacy and security policy. Single Sign-On (SSO) can greatly reduce the number of sets of credentials, as well as the number of actual logins required by healthcare workers during their day, providing major usability benefits. When such a solution is combined with more usable forms of multi-factor authentication, such as wireless proximity cards (RFID, NFC, or other), it can greatly improve both security and usability. In this type of solution, once the healthcare worker has logged into a device, they can start multiple apps within their session without having to re-authenticate to each app. As more healthcare apps are integrated with such an SSO solution, the number of separate credentials needed by the healthcare worker can be reduced, eventually to a single set of credentials required to log into the SSO solution.

 

Many SSO solutions also enable healthcare organizations to implement policy where the first login of the day requires two factors, perhaps the proximity card and a password, but thereafter, as long as the clinician authenticates at another point in the network with their proximity card within a configurable amount of time defined by policy, e.g., 2 hours, the proximity card alone is sufficient to authenticate and no password is required. This effectively enables the clinician to move between devices throughout the day with a simple tap of their proximity card.
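A minimal Python sketch of such a re-authentication policy, assuming an illustrative 2-hour grace window rather than any vendor's actual SSO engine:

```python
import time

GRACE_WINDOW_SECONDS = 2 * 60 * 60  # e.g., 2 hours, set by policy

def required_factors(last_full_auth_ts, now=None):
    """Decide whether a proximity-card tap alone is enough, or whether
    the clinician must also enter a password (a sketch of the policy
    described above)."""
    now = now or time.time()
    if now - last_full_auth_ts <= GRACE_WINDOW_SECONDS:
        return ["proximity_card"]           # tap-and-go within the window
    return ["proximity_card", "password"]   # full two-factor login again

first_login = time.time() - 90 * 60   # full login 90 minutes ago
print(required_factors(first_login))  # ['proximity_card']
stale_login = time.time() - 3 * 60 * 60
print(required_factors(stale_login))  # ['proximity_card', 'password']
```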

 

SSO may also provide patient context sharing, where different healthcare apps running in the same session track the same patient automatically, so a clinician who searches and finds a patient in the Electronic Health Record (EHR) system can then switch over to a Picture Archiving and Communication System (PACS) that has already automatically found the same patient, freeing the clinician from having to search for the patient again in each application. Such patient context capability may be based on the Clinical Context Object Workgroup (CCOW) standard. This is clearly another major usability benefit, and it also mitigates the risk of a clinician accidentally looking at different patients across different apps.
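As a toy illustration of the patient context sharing idea (not the actual CCOW protocol), here is a minimal publish/subscribe sketch in Python where apps in one session follow the same selected patient:

```python
class PatientContext:
    """Toy context manager in the spirit of CCOW patient context
    sharing: apps in one session follow the same selected patient."""
    def __init__(self):
        self._subscribers = []
        self.patient_id = None

    def subscribe(self, app_callback):
        self._subscribers.append(app_callback)

    def select_patient(self, patient_id):
        self.patient_id = patient_id
        for notify in self._subscribers:
            notify(patient_id)  # every subscribed app switches automatically

ctx = PatientContext()
ctx.subscribe(lambda p: print(f"EHR now showing patient {p}"))
ctx.subscribe(lambda p: print(f"PACS now showing patient {p}"))
ctx.select_patient("token-4471")  # one search, both apps follow
```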

 

Just as important as easy login is minimizing risk of a live session being hijacked once the authenticated healthcare worker moves away from the device with the open live session. This can be done by setting an inactivity timeout to a low number of minutes, which in practice is workable from a usability standpoint since a simple tap of the wireless proximity card gets the healthcare worker back into their session. In the future technologies such as facial recognition may also enable the device to detect when the healthcare worker moves away, closing the session automatically and further reducing the window of opportunity for session hijacking.

 

Biometrics holds promise in further freeing the healthcare worker from having to carry a wireless proximity card. This is especially compelling in healthcare, where not having to touch anything can be a significant improvement since healthcare workers need to keep sterile hands. To achieve this improvement, biometrics need to be both highly reliable and resilient to spoofing. For example, viable facial recognition would need negligibly low false accept and false reject rates, and would have to be able to detect whether a face in front of a device was a picture or a real person. Several strategies are emerging for this, including multiple cameras able to detect depth, and facial recognition strategies that require some motion, such as blinking, to ensure the subject is not a static picture.

The reality in healthcare is that many healthcare workers, such as doctors working in multiple healthcare organizations, need separate credentials for each organization, and in the worst case a separate proximity card for each facility. As more healthcare organizations implement biometrics, this has the potential to reduce the number of tokens such as proximity cards required by a given healthcare worker. Furthermore, strategic initiatives such as the National Strategy for Trusted Identities in Cyberspace (NSTIC) have the potential to separate Identity Providers from Service Providers, where healthcare workers have one set of credentials to authenticate with the Identity Provider and could then access multiple Service Providers, such as healthcare organizations, without having to be issued a separate set of credentials from each one.

 

Another technology that holds major promise is virtualization with a “follow me session,” where a healthcare worker who has logged into a given device to start a secure session, started healthcare apps within that session, and located a given patient medical record may then move to another device, log in, and get access to the same session without having to start the apps and search for that patient again. This becomes particularly compelling as the number and types of devices healthcare workers use increase and their use cases require them to move between devices seamlessly. This capability can also be especially beneficial where healthcare workers must use many shared workstations throughout their day and switching of devices is frequent, even within a given patient encounter.

Along with this type of compute model one can do centralized patching and management, leading to major security, manageability, and operational efficiency benefits. Where virtualized healthcare clients running on mobile devices can securely store limited healthcare data locally, for example just the records for the patients a healthcare worker will see that day, they enable healthcare workers to be productive even in areas lacking network coverage or performance, such as rural areas or patient homes. This improved availability is particularly important as healthcare becomes more decentralized.

 

What kinds of solutions that combine usability and security improvements are you seeing in your healthcare organization?

 

The benefits of analytics in healthcare are compelling, and big data is fueling this with increasing quantity and quality of patient data with potential to enable major improvements in evidence based medicine, ultimately enabling greatly improved quality of care.

 

Combining this with cloud computing enables healthcare to rapidly realize benefits with less initial capital investment, more of a pay-as-you-go financial approach, and much greater agility, amongst other benefits. However, privacy and security are major concerns and an impediment to many healthcare organizations realizing these benefits. Further, legal and regulatory compliance challenges abound, from national to state level regulations, and across verticals and different types of data.

 

I had the privilege of moderating and participating in a workshop panel filmed at HIMSS 2013 in New Orleans with a group of leading experts:

 

Nicole Martinez, Director of Nursing Informatics, Robert Wood Johnson University

Brian Balow, Partner, Dickinson Wright PLLC

Dr Khaled el Emam, CEO, Privacy Analytics

Kim Singletary, Director of Technical Solutions Marketing, McAfee

 

See highlights of our workshop panel in the video accompanying the original post.

 

We discuss frontline healthcare workers' real experiences with analytics: the compelling benefits, common challenges, and practical solutions encountered in implementation. We also discuss the regulatory and legal landscape, and practical strategies for compliance.

 

A multi-layered approach to security emerges in our discussion as a best practice to mitigate risk, and we discuss several key security safeguards including risk based de-identification, tokenization, encryption and various administrative security controls including policy, effective training, audit and compliance, and contracts and plans with Business Associates.

 

We also discuss results from a recent HIMSS global research survey of frontline healthcare workers, highlighting challenges with IT department responsiveness and flexibility, the usability of solutions and security, and how usability is much more than a “nice to have”, having real impacts on compliance and risk where healthcare workers are compelled to use workarounds.

 

Based on this research we pose and discuss the pertinent question: “If we are going to secure our data in the cloud, which cloud is the data in?” We discuss how this research shows that the use of workarounds by healthcare workers can drive sensitive healthcare data into “side clouds” outside of the control of the healthcare organization, where it is at increased risk of confidentiality / breach, integrity, and potential trans-border data flow issues.

 

Last but not least, we discuss how usable hardware-based security solutions can enable strong and usable security that avoids compelling healthcare workers to use workarounds, thereby improving compliance and reducing risk, and ultimately helping ensure sensitive healthcare data stays in the clouds where it is supposed to be, within the control and effective security of the healthcare organization.

 

What kinds of benefits, risks and practical solutions are you seeing with healthcare analytics in the cloud?

In a 2013 HIMSS global security survey of 674 frontline healthcare workers (Workarounds in Healthcare, a Risky Trend), too many layers of login was cited by 36 percent as a key driver compelling the use of risky workarounds, which are out of compliance with policy, to get their jobs done. An example of a workaround could be a file transfer app on a personal device used to transfer sensitive healthcare data unencrypted.

 

Single Sign-On (SSO) is a natural solution to this, reducing the total number of logins required for healthcare workers to do their job “the right way,” in compliance with policy, and avoiding compelling them to resort to risky workarounds. However, as more healthcare systems are integrated behind a single sign-on solution, the risk, and specifically the business impact, of a compromised set of credentials increases. For this reason single sign-on is often combined with stronger multi-factor authentication.

 

A key take-away from the HIMSS survey is that usability is more than a “nice to have,” directly impacting non-compliance and risk. BYOD, social media, apps, and other trends are empowering healthcare workers with more tools than ever before, and this research shows that if IT departments, solutions, or security get in the way, healthcare workers can and do use workarounds to get their job done.

 

Usability issues with multi-factor authentication, and specifically with separate hardware tokens, are well known. People lose them, break them, and don’t like them (especially if they need several of them), and separate hardware tokens are often associated with increased TCO (Total Cost of Ownership) due to support and provisioning costs. Intel® Identity Protection Technology provides a strong two-factor authentication solution without a separate hardware token, thereby avoiding the usability, support, and TCO issues of separate hardware tokens.

 

The “what you have” in this case is the Intel® IPT capable mobile device, which is provisioned by the healthcare worker as a secure terminal for accessing healthcare solutions and sensitive patient information. Here’s how this works: in the event that the healthcare worker’s username/password credentials are compromised, and an impersonator tries to use these stolen or lost credentials to access the healthcare solution, the login will fail and they will be blocked, since they don’t have the Intel® IPT capable mobile device that was previously provisioned as a secure terminal.
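As a generic illustration of this device-as-second-factor idea (not Intel® IPT's actual protocol, which involves hardware-protected credentials), here is a minimal Python sketch where a login succeeds only from a previously provisioned device:

```python
# Generic illustration of the "what you have" check: a login succeeds
# only from a device whose ID was previously provisioned for that user.
# The provisioning store and device IDs are hypothetical.

provisioned_devices = {"dr_smith": {"device-abc123"}}

def login(user, password_ok, device_id):
    if not password_ok:
        return "denied: bad credentials"
    if device_id not in provisioned_devices.get(user, set()):
        return "denied: unprovisioned device"  # a stolen password alone fails
    return "granted"

print(login("dr_smith", True, "device-abc123"))    # granted
print(login("dr_smith", True, "attacker-device"))  # denied: unprovisioned device
```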

 

Combining SSO with Intel® IPT combines both the usability benefits of a reduced number of logins, as well as the usability benefits of a multi-factor solution that does not require a separate hardware token, for a stronger and more usable healthcare security solution.

 

What issues are you seeing with too many layers of login in your healthcare organization, and are you looking at single sign on solutions with multi-factor authentication?

“Evernote says security has been breached by hackers.” “Dropbox password breach highlights cloud security weaknesses.” These recent headlines are just two in a long list of examples of popular apps being compromised, putting sensitive data stored in their respective clouds at risk.

 

In an earlier blog, What cloud is your healthcare data in?, I explored the impacts of healthcare workers using apps with sensitive healthcare data, and the often undesirable side effect of moving the sensitive data into “side clouds” that are relatively insecure and add significant privacy and security risk.

 

A recent HIMSS global security survey of 674 frontline healthcare workers (Workarounds in Healthcare, a Risky Trend, HIMSS Media, March 2013) shows that when solutions are unusable, security is cumbersome, or IT departments are too slow or too restrictive in enabling new technologies, healthcare workers use workarounds. The survey revealed that this happens every day (22%) or sometimes (30%).

 

Personal apps for file transfer, note sharing, communications, or other purposes were identified by 20 percent of healthcare workers as key tools for workarounds. When sensitive healthcare data is used in workarounds, this adds risk from a confidentiality/breach standpoint, as well as from an integrity (completeness/accuracy) standpoint, since the patient record often does not get updated with data moving in these workaround “side channels.”

 

To mitigate this risk we need a multi-pronged strategy including improving the usability of healthcare solutions and security to avoid compelling healthcare workers to use workarounds. IT departments in healthcare organizations need to be responsive and avoid being overly restrictive in enabling new technologies, or face being bypassed by healthcare workers in their use of workarounds. Administrative controls need to be bolstered, including policy, risk assessment (and proactively addressing deficiencies) and effective security training.

 

What kinds of apps are your healthcare workers using, and where do you see the risks?
