
Intel Health & Life Sciences


By Steve Leibforth, Strategic Relationship Manager, Intel Corporation

 

How sustainable is your health IT environment? With all the demands you’re putting on your healthcare databases, is your infrastructure as reliable and affordable as it needs to be so you can stay ahead of the rising demand for services?

 

In Louisiana, IT leaders at one of the health systems we’ve been working with ran the numbers. Then, they migrated their InterSystems Caché database from their previous RISC platforms onto Dell servers based on the Intel® Xeon® processor E7. They tell us they couldn’t be happier—and they’re expecting the move to help them reduce TCO for their Epic EHR and Caché environment by more than 40 percent.

 

“Using Intel® and Dell hardware with Linux and VMware, you can provide a level of reliability that’s better than or equal to anything out there,” says Gregory Blanchard, executive director of IT at Shreveport-based University Health (UH) System. “You can do it more easily and at much lower cost. It’s going to make your life a lot easier. The benefits are so clear-cut, I would question how you could make the decision any differently.”

 


 

We recently completed a case study describing UH’s decision to migrate its Caché infrastructure. We talked with UH’s IT leaders about their previous pain points, the benefits they’re seeing from the move, and any advice they can share with their health IT peers. If your health system is focused on improving services while controlling costs, I think you’ll find it well worth a read. You’ll also learn about the Dell, Red Hat, Intel, and VMware for Epic (DRIVE) Center of Excellence—a great resource for UH and other organizations that want a smooth migration for their Epic and Caché deployments.  

 

Infographic: University Health cuts Epic EHR TCO

                                                              

UH is a great reminder that health IT innovation doesn’t just happen at the Cleveland Clinics and Kaiser Permanentes of the world. Louisiana has some of the saddest health statistics in the nation, and the leaders at UH know they need to think big if they’re going to change that picture. As a major medical resource for northwest Louisiana and the teaching hospital for the Louisiana State University Shreveport School of Medicine, UH is on the forefront of the state’s efforts to improve the health of its citizens. Its new infrastructure—with Intel Inside®—gives UH a scalable, affordable, and sustainable foundation. I’ll be excited to watch their progress.

                                                                  

Read the case study and tell me what you think.

Read a whitepaper about scaling Epic workloads with the Intel® Xeon® processor E7 v3.

Join and participate in the Intel Health and Life Sciences Community

Follow us on Twitter: @IntelHealth, @IntelITCenter

By Justin Barnes and Mason Beard

 

The transition to value-based care is not an easy one. Organizations will face numerous challenges on their journey towards population health management.

 

We believe there are five key elements and best practices to consider when transitioning from volume to value-based care:

1.  Managing multiple quality programs

2.  Supporting both employed and affiliated physicians, and effectively managing your network and referrals

3.  Managing organizational risk and utilization patterns

4.  Implementing care management programs

5.  Ensuring success with value-based reimbursement

 

When considering the best way to proactively and concurrently manage multiple quality programs, such as pay-for-performance, accountable care and/or patient-centered medical home initiatives, you must rally your organization around a wide variety of outcomes-based programs. This requires a solution that supports quality program automation. Your platform must aggregate data from disparate sources, analyze that data through the lens of a program’s specific measures, and effectively enable the actions required to make improvements. Although this is a highly technical and complicated process, when done well it enables care teams to use real-time dashboards to monitor progress and identify focus areas for improving outcomes.
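To make the quality-measure side of this concrete, here is a minimal sketch of the kind of numerator/denominator calculation a dashboard might surface. It is purely illustrative: the record fields, the HbA1c threshold and the measure itself are assumptions for the example, not any specific program’s definition.

```python
# Illustrative sketch only: one quality measure computed from aggregated records.
# Field names and the HbA1c threshold are hypothetical, not a specific program's spec.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PatientRecord:
    patient_id: str
    has_diabetes: bool
    last_hba1c: Optional[float]  # most recent lab value, None if never tested

def hba1c_control_rate(records: List[PatientRecord], threshold: float = 9.0) -> dict:
    """Share of diabetic patients whose most recent HbA1c is below the threshold."""
    denominator = [r for r in records if r.has_diabetes]
    numerator = [r for r in denominator
                 if r.last_hba1c is not None and r.last_hba1c < threshold]
    care_gaps = [r.patient_id for r in denominator if r not in numerator]
    rate = 100.0 * len(numerator) / len(denominator) if denominator else 0.0
    return {"measure": "HbA1c < 9.0", "rate_pct": round(rate, 1), "care_gaps": care_gaps}

print(hba1c_control_rate([
    PatientRecord("p1", True, 7.2),    # in control -> numerator
    PatientRecord("p2", True, None),   # never tested -> care gap
    PatientRecord("p3", False, None),  # not in the measure population
]))
```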

 

In order to provide support to both employed and affiliated physicians, and effectively manage your network and referrals, an organization must demonstrate its value to healthcare providers. Organizations that do this successfully are best positioned to engage and align with their healthcare providers. This means providing community-wide solutions for value-based care delivery. This must include technology and innovation, transformation services and support, care coordination processes, referral management, and savvy representation with employers and payers based on experience and accurate insight into population health management as well as risk.

 

To effectively manage organizational risk and utilization patterns, it is imperative to optimize episodic and longitudinal risk, which requires applying vetted algorithms to your patient populations using a high-quality data set. To understand differences in risk and utilization patterns, you need to aggregate and normalize data from various clinical and administrative sources, and then ensure that the data quality is as high as possible. You must own your data and processes to be successful. And importantly, do not rely entirely on data received from payers.

 

It is also important to consider the implementation of care management programs to improve individual patient outcomes. More and more organizations are creating care management initiatives for improving outcomes during transitions of care and for complicated, chronically ill patients. These initiatives can be very effective.  It is important to leverage technology, innovation and processes across the continuum of care, while encompassing both primary and specialty care providers and care teams in the workflows. Accurate insight into your risk helps define your areas of focus. A scheduled, trended outcomes report can effectively identify what’s working and where areas of improvement remain.

 

Finally, your organization can ensure success with value-based reimbursement when the transition is navigated correctly. The shift to value-based reimbursement is a critical and complicated transformation—oftentimes a reinvention—of an organization. Ultimately, it boils down to leadership, experience, technology and commitment. The key to success is working with team members, consultants and vendor partners who understand the myriad details and programs, and who thrive in a culture of communication, collaboration, execution and accountability.

 

Whether it’s PCMH or PCMH-N, PQRS or GPRO, CIN or ACO, PFP or DSRIP, TCM or CCM, HEDIS or NQF, ACGs or HCCs, care management or provider engagement, governance or network tiering, or payer or employer contracting, you can find partners with the right experience to match your organization’s unique needs. Because much is at stake, it is necessary to ensure that you partner with the very best to help navigate your transition to value-based care.

 

Justin Barnes is a corporate, board and policy advisor who regularly appears in journals, magazines and broadcast media outlets relating to national leadership of healthcare and health IT. Barnes is also host of the weekly syndicated radio show, “This Just In.”

 

Mason Beard is Co-Founder and Chief Product Officer for Wellcentive. Wellcentive delivers population health solutions that enable healthcare organizations to focus on high quality care, while maximizing revenue and transforming to support value-based models.

 

The Internet of Things (IoT) is one of those subjects that tends to attract a lot of future-gazing about what may be possible in five, 10 or even 20 years’ time, but we’re very fortunate in the healthcare sector to be able to point to real examples where IoT is having a positive impact for both patient and provider today.


IoT across Healthcare

It’s estimated that IoT in healthcare could be worth some $117 billion by 2020, and while that number may seem incomprehensibly large, it is worth remembering that IoT touches so many areas of healthcare, from sensors and devices for recording and analysis through to the secure cloud and networks needed to transmit and store voluminous data.


When the UK Government published their ‘The Internet of Things: making the most of the Second Digital Revolution’ report, healthcare was one of the most talked-about areas, with IoT making a significant impact in helping to ‘shift healthcare from cure to prevention, and give people greater control over decisions affecting their wellbeing.’


Meaningful Use Today

Here at Intel in the UK we’re working with a fantastic company in the Internet of Things space that is having a real and meaningful impact for patient and provider. MimoCare’s mission is ‘to support independent living for the elderly and vulnerable’ using pioneering sensor-powered systems. And with an ageing population across Europe and the associated rise in healthcare costs, MimoCare is already helping to ‘shift healthcare from cure to prevention’ today.


I think it’s important to highlight that MimoCare’s work focuses on measuring the patient’s environment, rather than the patient. For example, sensors can be placed to record frequency of bathroom visits and a sudden variation from the normal pattern may indicate a urinary infection or dehydration.

 

Medication Box



The phrase ‘changing lives’ is sometimes overused, but when you read feedback from an elderly patient benefiting from MimoCare’s work, I think you’d agree it is more than appropriate. MimoCare talked me through a fantastic example of an 89-year-old man who is the primary carer for his 86-year-old wife and is benefiting greatly from IoT in healthcare. The gentleman has a pacemaker fitted and so is required to take warfarin, but with his primary focus on caring for his wife there is a risk that he may miss taking his own medication.

 

Using MimoCare sensors on the patient’s pill box enables close family to be alerted by SMS if medication is missed. The advantage to the patient is that both the sensors in the home and, importantly, the alert triggers are unobtrusive, meaning that the patient remains free from anxiety. If medication is missed, a gentle reminder via a phone call from a family member is all that is needed to ensure the patient takes it. And for the healthcare provider, the cost of providing care for the patient is significantly reduced too.
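To illustrate the mechanism in the simplest possible terms, here is a small sketch of the missed-dose check and alert. It is only an illustration of the idea; the event feed, schedule and SMS hand-off are hypothetical stand-ins, not MimoCare’s actual system.

```python
# A minimal sketch of the missed-dose alerting idea described above.
# The sensor feed and notify_family() are hypothetical stand-ins, not MimoCare's API.
from datetime import datetime, timedelta
from typing import List

def dose_missed(open_events: List[datetime], scheduled: datetime,
                grace: timedelta = timedelta(hours=1)) -> bool:
    """True if no pill-box opening was sensed within the grace period after the dose time."""
    return not any(scheduled <= t <= scheduled + grace for t in open_events)

def notify_family(message: str) -> None:
    # Placeholder: in a real deployment this would hand off to an SMS gateway.
    print("SMS to family:", message)

events = [datetime(2015, 6, 1, 8, 10)]      # pill box opened at 08:10
schedule = datetime(2015, 6, 1, 18, 0)      # evening dose due at 18:00
if dose_missed(events, schedule):
    notify_family("Evening medication not yet taken; a gentle reminder call may help.")
```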


The elderly male patient said, “I really like the medication box as it feels like something for me. It's nice to know someone is keeping an eye out to help remind me to take my medication daily and on time.  In fact last time I visited the surgery they were able to reduce my warfarin and I'm sure that's because I'm now taking it regularly.” Read more on how MimoCare is using sensors in the home to help the elderly stay independent and out of hospital.


Big Data, Big Possibilities

I’m really excited about the possibilities of building up an archive of patient behaviour in their own home that will enable cloud analytics to produce probability curves to predict usual and unusual behaviour. It’s a fantastic example of how the more data we have, the more accurate we can be in predicting unusual behaviour and triggering alerts to patients, family and carers. And that can only be a positive when it comes to helping elderly patients stay out of hospital (and thus significantly reducing the cost of hospital admissions).
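As a toy illustration of what ‘usual versus unusual’ could look like at its simplest, the sketch below compares today’s count of a sensed event with the patient’s own archived baseline. Real cloud analytics would be far richer; the data and threshold here are assumptions for the example.

```python
# Toy illustration of flagging an unusual day against a patient's own archived baseline.
# The counts and threshold are invented; this is not a clinical algorithm.
from statistics import mean, stdev
from typing import List

def is_unusual(history: List[int], today: int, z_threshold: float = 2.5) -> bool:
    """Compare today's count of a sensed event (e.g. bathroom visits) with the baseline."""
    if len(history) < 14:              # want at least a couple of weeks of history
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > z_threshold

baseline = [5, 6, 5, 4, 6, 5, 5, 6, 4, 5, 6, 5, 5, 6]  # daily visits over two weeks
print(is_unusual(baseline, today=11))                   # True -> alert family or carers
```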


Intel has played a pivotal role in porting both software and hardware to improve the performance of the IoT gateway, and has also provided, through Wind River Linux, enhanced data and network security, including down-the-wire device management for software updates and configuration changes.


Sensing the Future

But where will the Internet of Things take healthcare in the next 5-10 years? What I can say is that sensors will become smaller, more cost-effective and more power-efficient, meaning they can be attached to a multitude of locations around the home. Combining this sensor data with that recorded by future wearable technology will give clinicians a 360-degree view of a patient at home, which will truly enable the focus to shift from cure to prevention.


I asked MimoCare’s Gerry Hodgson for his thoughts on the future too and he told me, “IoT and big data analytics will revolutionise the way care and support services are integrated. Today we have silos of information which hold vital information for coordinating emergency services, designing care plans, scheduling transport and providing family and community support networks. The projected growth in the elderly population means that it is imperative we find new ways of connecting local communities, families and healthcare professionals and integrating services.”


“Our cascade 3-D big data analytics provides a secure and globally scalable ecosystem that will totally revolutionise the way services are coordinated.  End to end, IoT sensors stream valuable data to powerful server platforms such as Hadoop which today provides an insight into what would otherwise be unobtainable.”

“I'm very excited about the future where sensors and analytics change the way we coordinate and deliver services on a huge scale.”

 

by Jeff Zavaleta, MD, chief medical officer, Graphium Health, and Daniel Dura, chief technology officer, Graphium Health

 

Discussions around security in the healthcare IT space usually center on external threats to our healthcare IT infrastructure. Sure, this is a big area of concern and one that should not be taken lightly. Software needs to use encryption properly, it needs to protect against and monitor for known threats, and it needs to implement best practices in infrastructure architecture as we design cloud-based systems that are more accessible via the public internet.

 

But while these are definitely critical items, some of the biggest threats are not technical. Many times we have to deal with threats inside our organizations. This may include ensuring that we are screening and monitoring employees for nefarious behavior, but the more likely situation is when good, law-abiding and well-intentioned employees put data at risk. Employees often have easy access to large swaths of PHI, which is critical to performing their jobs appropriately. This access is not inherently bad, but if the software is not designed to take it into account, it can actually encourage a user to do things that lead to inadvertent data disclosure and put patient data at risk.

 

So the question is, is our software designed to promote good security? Here are a few of the most important techniques and guidelines that we use when designing software that help promote good security practices:

 

Understand your user, and understand their workflow

 

Good software considers how a user is going to access data and how they are going to move within the system. Provide users with the quickest and most efficient path for them to get to the data they need. Also understand how users use their mobile devices in your environment. Create clear and concise use cases that define how the mobile application will accomplish specific goals. When software is not designed to the specific workflow of a user, that user will usually figure out a workaround which sometimes involves putting patient data at risk.  By providing the user a better overall experience you aren’t only protecting the data in the system, but you are likely to increase their satisfaction with it.

 

Ensure that users only see the data they absolutely need

 

This will only happen when you understand your user and their processes. Just showing information because you have it is not a good practice, yet it is common in healthcare software. Curate the data and survey users to understand exactly how they use your system. Provide justification for every field, and don’t be afraid to be conservative in what you provide. You can always give users a way to customize what they see so that it helps them in their specific job, but err on the side of less. Also, on mobile devices we have limited real estate to display data, so by removing unneeded data you ensure a great user experience targeted to the mobile use case.
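One simple way to express this principle in code is field-level whitelisting per role, sketched below with hypothetical roles and field names (not any particular EHR’s data model).

```python
# A sketch of field-level whitelisting per role; role and field names are hypothetical.
ROLE_FIELDS = {
    "scheduler": {"patient_name", "appointment_time"},
    "nurse": {"patient_name", "appointment_time", "allergies", "active_meds"},
}

def project_for_role(record: dict, role: str) -> dict:
    """Return only the fields this role needs; everything else stays on the server."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

chart = {"patient_name": "J. Doe", "appointment_time": "09:30",
         "allergies": ["penicillin"], "active_meds": ["warfarin"], "ssn": "xxx-xx-xxxx"}
print(project_for_role(chart, "scheduler"))   # only name and time reach the scheduler's screen
```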

 

Limit the need to export data

 

While this depends on the software system, in many EHRs we find it too easy to export data. This is usually a release valve of sorts to enable unimplemented functionality. Understand why users are exporting data out of your system and provide that functionality if it is prudent. Any time a user exports data out of a system, it is more likely to end up in the wrong hands.

 

Use safe password practices, but explain them to the user

 

Passwords are hard, but we can make managing passwords easier for the user. Make the path to resetting a password easy and explain in clear, concise terms what the password requirements are. If users have to attempt a password reset multiple times because they don’t understand your precise rules, they are more likely to reuse a common password from other systems, or to change their passwords less often. Use real-time updates in the UI to show how they are complying with the rules as they enter a new password, and provide clear feedback. Also, use two-factor authentication and PINs appropriately on mobile devices. And if you are on iPhones or iPads, make use of features such as Touch ID for biometric authentication. Not only will it make the software more secure, but your users will appreciate it.
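As a small illustration of rule-by-rule feedback, the sketch below evaluates a candidate password against every rule so a UI can render a live checklist rather than a vague error. The rules themselves are examples, not a recommended policy.

```python
# Minimal sketch of rule-by-rule password feedback; the rules are examples, not a policy.
import re
from typing import List, Tuple

RULES = [
    ("at least 12 characters", lambda p: len(p) >= 12),
    ("contains an upper-case letter", lambda p: re.search(r"[A-Z]", p) is not None),
    ("contains a digit", lambda p: re.search(r"\d", p) is not None),
    ("contains a symbol", lambda p: re.search(r"[^A-Za-z0-9]", p) is not None),
]

def password_feedback(candidate: str) -> List[Tuple[str, bool]]:
    """Evaluate every rule so the UI can render a live checklist instead of a vague error."""
    return [(label, check(candidate)) for label, check in RULES]

for label, ok in password_feedback("Graphium2015"):
    print(("PASS " if ok else "FAIL ") + label)
```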

 

Be careful with alerts and other notifications

 

Our mobile devices are wonderful at surfacing valuable information to us through system notifications and alerts, but not all of them are appropriate vehicles for sharing PHI. Avoid using patient data in notifications that can show up on device lock screens or in other places on the device that can be seen without entering appropriate authentication credentials (for example, before a PIN is entered, or outside of your software). If you are using notifications, use them to provide calls-to-action that let users know the software may need their attention, or to provide cues about something they may have already seen in your app.
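A PHI-free notification can be as simple as a generic call-to-action plus an opaque reference that the app resolves only after the user authenticates. The payload below is a hypothetical sketch of that pattern.

```python
# Sketch of a PHI-free push payload: a call-to-action and an opaque reference,
# with patient details fetched only after the user authenticates in the app.
def build_notification(case_ref: str) -> dict:
    return {
        "title": "A case needs your attention",
        "body": "Open the app to review the latest update.",   # no name, MRN, or diagnosis
        "data": {"case_ref": case_ref},                         # opaque ID resolved post-login
    }

print(build_notification("c7f3a9"))
```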

 

Some of these guidelines may be obvious, but they have to be constantly evaluated and improved, not only as our technologies evolve but as new devices become available for us to use. When software is designed properly, not only is it more secure, but it becomes an application that is a joy to use and may actually save lives.

 

What questions do you have?

by Chiara Garattini & Han Pham

 

In the first of our Future Health, Future Cities blog series, we posed questions around the future of health in the urban environment. Today we look at some of the projects undertaken by students on the physical computing module on the Innovation Design and Engineering Masters programme run in conjunction between Imperial College and the Royal College of Arts (RCA).

 

Health and Safety in the Workplace

The first group of projects relates to the important issue of health and safety in the workplace.


Figure 1. Circadian Glasses


Christina Petersen’s ‘circadian glasses’ considered the dangers of habitual strains and stressors at work, particularly for individuals in careers with prolonged evening hours or excessive time in light-poor conditions, which may have a cumulative effect on health over time. Although modern technologies allow for the convenience of working at will regardless of external environmental factors, what is the effect on the body’s natural systems? In particular, how does artificial lighting affect the circadian rhythm?

 

Her prototyped glasses use two LED screens that can adjust the type of light to help users better adjust their circadian rhythms and sleep patterns. The concept also suggests a potentially valuable intersection of personal wearable and personal energy usage (lighting) in the future workplace. Unlike sunglasses, the glasses are also a personal, portable source of light – an interesting concept in workplace sustainability, given the majority of energy expenditure is in heating/cooling systems and lighting.

 

While there is room to make the user context and motivation more plausible, the prototype literally helps shed light on meaningful, and specific, design interventions for vulnerable populations such as nurses or night shift workers for personal and workplace sustainability over time.

 


Figure 2. Smart Workplace Urinal


As we often see in our work, a city’s hubs for healthcare resources and information are often informally ubiquitous and present within the community before one reaches the hospital. Jon Rasche’s smart urinal was created to decrease the queue and waiting time at the doctor’s office before you even arrive, by enabling more personal, preventative care via lab testing at the workplace.

 

The ‘Smart Urinal’ created an integrated service with a urinal-based sensor and a display unit, QR codes, and a mobile application (Figure 2). The system also considered concerns around patient privacy by intentionally preventing private patient information from entering the cloud. Instead, each of the possible results links to a QR code leading to a static web page with the urinalysis information.

 

While the system might be perceived as too public for comfort, it connects to the technological trend toward more personalised and accessible testing (Scanadu’s iPhone-ready urinalysis strip is a good example). It also raises the consideration of how to design for the connected ecosystem of responsibility, accountability and care: how can different environments influence, impact and support an individual’s wellbeing? How can personalised, connected care be anticipatory, preventative and immediate, yet private?

 

Pollutants Awareness

The dynamic life of a city often means it’s in a state of constant use and regeneration – but many of the resulting pollutants are invisible to the naked eye. How do we know when the microscopic accumulation of pollutants will be physically harmful? How can we make the invisible visible in a way that better engages us with our environment?

 


Figure 3. Air Pollution Disc


Maria Noh’s ‘Air Pollution Disc’ (Figure 3) considers how we can design for information to be more physical, visible and intuitive by creating a mechanical, physical filter on our immediate environment, driven by local air quality data and using polarised lenses.

 

It’s a very simple mechanism with an elegant design that ties to some of our earlier cities research into perceptual bias around air quality, substituting physical feedback for numeric data (e.g., although pollutants may not always be visible, we equate pollution with visual cues). Noh suggested two use scenarios: one was to affix the device to a window of a home to understand pollution at potential destinations, such as the school; another was to influence driver behaviour by providing feedback on the relationship between driving style and pollution.

 

While there are some future nuances and challenges to either case, the immediacy of the visualisation for both adults and children may make it interesting to see the Air Pollution Disc as a play-based, large-scale urban installation that physicalizes the hidden environment of the city.

 


Figure 4. Ghost 7.0


The pollutants category also includes the prototype for ‘Ghost 7.0’ by student Andre McQueen, a smart clothing system that addresses how weather and air quality affect health. The idea tries to tackle breathing problems, e.g. due to allergies, associated with weather changes. The device (Figure 4), embedded in the running clothing, is designed to communicate with satellites to receive updates on weather conditions and signal warnings under certain circumstances.

 

When a significant meteorological change is signalled, the fabric would change colour and release negative ions (meant to help breathing under certain conditions). The student also investigated oxidisation to fight pollutants, but could not overcome the problem of releasing small amounts of CO2.

 

What we found interesting in this project was the idea that a wearable device would do something to help against poor air quality, rather than just passively detecting the problem. Too many devices currently are focusing on the latter task, leaving the user wondering about the actionability of the information they receive.

 


Figure 5. Dumpster diving 'smart glove'


The last selected project for this section, by student Yuri Klebanov, looks at dumpster diving. Yuri built a system to make dumpster diving safer (a ‘smart glove’ that reacts to chemicals) and more effective (a real-time monitoring system that uploads snapshots of what is thrown away to a website for users to monitor).

 

While the latter idea is interesting, it presents several challenges (e.g. privacy around taking pictures of people throwing things away); what we liked most about the project was the ‘smart glove’ idea. The solution was to boil fabric gloves with cabbage, making them capable of changing colour when in contact with acids, liquids, fats and so on (Figure 5). This frugal technology solution made us reflect on how smart ‘smart’ really is. Technology overkill is not always the best solution to a problem, and something simple is always preferable to something more complex that provides the same (or only marginally better) results.

 

In the third and final blog of our Future Cities, Future Health blog series we will look at the final theme around Mapping Cities (Creatively) which will showcase creative ideas of allocating healthcare resources and using sound to produce insights into complex health data.

 

Read Part I

Read Part III

 

 

*Concepts described are for investigational research only.

**Other names and brands may be claimed as the property of others.

Intel has sponsored a physical computing module on the topics of ‘Future Health, Future Cities’ as part of the first year in the Innovation Design and Engineering Master programme in conjunction between Imperial College and the Royal College of Arts (RCA). This module, coordinated by Dominic Southgate at Imperial College, was intended to be an investigation into the future of health in urban environments and a projection of how technology might support this multi-faceted theme.

 

The Intel team (John Somoza, Chiara Garattini and Duncan Wilson) suggested themes that the 40 students (10 allocated for each theme) had to work on individually over the course of four weeks:

 

1.  Food, Water, Air

The human body can only live for three weeks without food, three days without water, and three minutes without air. These ingredients are vital for our survival and key to our good health – how can we optimise each of them within our cities?

 

Food has an obvious connection to healthy living. But what about the more subtle relationships? How can food be analysed/customised/regulated to help with specific disorders or conditions? Meanwhile, how can technology help us in water catchment and distribution in the city or manage quality? Can we reuse water better?

 

Likewise, an invisible and yet vast component of cities is its air, which is key to human well-being. While air is currently rated by proxy metrics in many ways, what is air quality and pollution through a human lens? How can we re-think the air we breathe?


2.  Systems of Systems

A city is made of many systems that are inextricably related and dependent on each other. One important system is healthcare. How can we re-imagine a city-based healthcare service? For example, hospitals are currently hubs for providing health care when needed, yet they often may not be the first or best place we seek care when unwell. Can we reimagine what a hospital of the future would look like? What would a healthcare worker of the future look like and what equipment would they use?

 

Although we currently use tools such as healthy city indexes that rate cities as healthy or unhealthy, how could we measure a healthy city in a way that reflects its complexity? Measuring the world in a new way at some point becomes experiencing the world in a new way -- what tools do we need and what are the implications?

 

Ultimately, if cities are systems of systems, then we are the nodes in those systems: how do we understand the impact of our individual accumulative actions on the larger systems? How can we see small, seemingly un-impactful actions, in their incremental, community wide scaling? How can we entangle (or disentangle) personal and collective responsibilities?

 

3.  Measuring and Mapping

There are various ways to measure a sustainable city, but none is perfect (e.g. carbon credits). What is the next thing for measuring a sustainable city? What would be the tools to do so? How local do we want our measures to be?

 

Our cities have different levels of language and communication embedded in their fabric (symbols, maps, and meanings). Some of these are more evident and readable than others, marking danger, places, and opportunities. One class of such signals relates to health. What kind of message does our city communicate in order to tell us about health? What symbols does it use and how do they originate and change through time?

 

4.  Cities of Data

Much of the current quantified-self movement is centred on metrics collected by individuals and shared with a relatively close, like-minded community. What would a ‘quantified-selfless’ citizen look like within the context of a city-wide community? How would people share data to improve their lives and that of other people? How could this impact on the environment and systems in which they live? How could the city augment and integrate people’s self-generated data and support you in an effort of being ‘healthy’ (e.g. personalised health cityscapes)? At the same time, individual and communities’ interests are sometimes harmonic and sometimes competing. How would citizens and cities of the future face this tension?

 

Commentary on selected projects

The underlying idea behind these themes was to stimulate design, engineering and innovation students to think about the complex relationship between connected cities and connected health. Because the task is wide and complex, we decided to start by pushing them to consider some broad issues, e.g.: how can a city’s health infrastructure become more dynamic? How can we help cities reconsider the balance between formal/informal resourcing to meet demand? What are the triggers to help communities understand/engage with environmental/health data?

 

The aim was to encourage the upcoming generation of technology innovators to think of health and cities as vital to their work.

 

The IDE programme was ideal for the task. Imagined originally for engineers who wanted to become more familiar with design, it has now transformed into a multidisciplinary programme that attracts innovative students from disciplines as varied as design, business, fashion and archaeology. This shows a resurgence of the relevance of engineering among students, possibly stimulated by the accessibility and ubiquity of tools for development (e.g. mobile apps) as well as the desire to find solutions to pressing contemporary problems (e.g. aging population trends).

 

Students were able to explore different points of interest in Intel’s ‘Future Health, Future Cities’ physical computing module, each an interesting starting point into the challenges of designing for complex, living systems such as a city.

 

We will share eight of the projects in our next two blogs, chosen not for their overall quality (which was assessed by their module coordinators) but for how their collective narrative under three emergent sub-themes helps highlight connections to some of the ongoing challenges and questions we face in our daily work.

 

Read Part II

Read Part III

 

 

*Concepts described are for investigational research only.

**Other names and brands may be claimed as the property of others.

by Chiara Garattini & Han Pham

 

Read Part I

Read Part II

 

In the third and final edition of our Future Cities, Future Health blog series, we will look at the final theme around Mapping Cities (Creatively) which showcases the creative ideas of allocating healthcare resources and using sound to produce insights into complex health data as part of the physical computing module on the Innovation Design and Engineering Masters programme run in conjunction between Imperial College and the Royal College of Arts (RCA).


Mapping cities (creatively)

In considering how to allocate resources, we also need to understand where resources are most needed, and how this changes dynamically within a city.

 


Figure 1. Ambulance Density Tracker


Antoni Pakowski asked how to distribute ambulances within a city to shorten response times for critical cases, and suggested this could be supported by anonymous tracking of people via their mobile phones. The expected service window of ambulance arrival in critical care cases is 8 minutes. However, in London, only around 40 percent of calls meet that target. This may be due to ambulances being tied to a static base station. How can the location of the ambulance change as people density changes across a city?

 

The ambulance density tracker (Figure 1) combined a mobile router and a hacked Pirate Box to anonymously retrieve the IPs of phones actively seeking Wi-Fi, creating a portable system to track the density of transient crowds. The prototype was designed to rely upon only one point of data within a certain region, requiring less processing than an embedded phone app. He also created a scaled-down model of the prototype, to suggest a future small device that could potentially be affixed to static and moving infrastructure, such as taxis, within the city.
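For readers curious what the density-proxy idea looks like in code, here is a rough sketch that hashes device identifiers immediately and counts unique devices per time window. It assumes a separate capture tool supplies (timestamp, identifier) pairs; it is not the student’s implementation.

```python
# Illustrative sketch of a crowd-density proxy: count unique (hashed) devices seen probing
# for Wi-Fi in each time window. Capture of (timestamp, identifier) pairs is assumed to be
# done elsewhere; identifiers are hashed immediately so nothing personal is stored.
import hashlib
from collections import defaultdict
from datetime import datetime

def anonymise(identifier: str, salt: str = "rotate-this-daily") -> str:
    return hashlib.sha256((salt + identifier).encode()).hexdigest()[:16]

def density_by_window(probes, window_minutes: int = 10) -> dict:
    """probes: iterable of (datetime, identifier). Returns unique-device counts per bucket."""
    buckets = defaultdict(set)
    for ts, ident in probes:
        bucket = ts.replace(minute=(ts.minute // window_minutes) * window_minutes,
                            second=0, microsecond=0)
        buckets[bucket].add(anonymise(ident))
    return {b: len(devices) for b, devices in sorted(buckets.items())}

sample = [(datetime(2015, 6, 1, 12, 3), "aa:bb:cc:00:11:22"),
          (datetime(2015, 6, 1, 12, 7), "aa:bb:cc:00:11:22"),
          (datetime(2015, 6, 1, 12, 8), "de:ad:be:ef:00:01")]
print(density_by_window(sample))   # one 12:00 bucket with 2 unique devices
```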

 

Although the original use case needs additional design work to be clearer, the prototype itself as a lightweight, anonymous device that allows for a portable proxy of transient crowd density may be useful as a complementary technology for other design projects geared toward designing for impromptu and ad hoc health resources within a city based on audience shifts.

 


Figure 2. 'Citybeat'


The second project in this category is called ‘Citybeat’, by student Philippe Hohlfeld (Figure 2). Philippe wanted to explore the sound of a city and not only create ‘sound’ maps of the city, but also capture its ‘heartbeat’ by exploring ‘sonified’ feedback from it. His thinking originated from three distinct scientific endeavours: a) turning preliminary Higgs boson data from the ATLAS experiment at CERN into a symphony to celebrate the connectedness of different scientific fields; b) turning solar flares into music at the University of Michigan to produce new scientific insights; and c) a blind scientist at NASA turning the gravitational fields of distant stars into sound to determine how they interact.

 

The project looked specifically at the Quality of Life Index (safety, security, general health, culture, transportation, etc.) and tried to attribute sounds to its different elements so as to create a ‘tune’ for each city. Sonification is good for finding trends and for comparing two entities. What we liked most about the project, though, was the idea of using sound rather than visual tools to produce insights into complex data.


Personal data from wearables, for example, is generally presented in visual dashboards. Even though these are meant to make data easier to consume, they do not always succeed. Sound could be quicker than visual displays at expressing, for example, rapid or slow progress (e.g. upbeat) or regress (e.g. downbeat). In the current landscape of information overload, exploring sound as an alternative way of summarizing data is something we found very interesting.
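As a toy example of the sonification idea, the sketch below maps hypothetical Quality of Life sub-scores to pitches and writes them out as a short sequence of tones; it only illustrates the mapping, not the Citybeat project itself.

```python
# Toy sonification sketch: map Quality of Life sub-scores (0-100, invented values) to
# pitches and write a short sequence of tones. Illustrative only.
import math
import struct
import wave

RATE = 44100

def tone(freq_hz: float, seconds: float = 0.4, volume: float = 0.4) -> bytes:
    n = int(RATE * seconds)
    return b"".join(struct.pack("<h", int(volume * 32767 *
                    math.sin(2 * math.pi * freq_hz * i / RATE))) for i in range(n))

def city_tune(scores: dict, filename: str = "citybeat.wav") -> None:
    """Higher scores map to higher pitches; listeners compare cities by ear."""
    with wave.open(filename, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        for name, score in scores.items():
            w.writeframes(tone(220 + 4 * score))   # 220 Hz base, +4 Hz per point

city_tune({"safety": 72, "health": 65, "transport": 58, "culture": 81})
```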


Figure 3. 'Bee gate'

Finally, the last selected project in this list is also one of the most unusual. Student James Batstone wanted to think about how bees interact with polluted environments and how they could be used as part of reclamation or decontamination programmes. He imagined a city (or territory) abandoned due to pollution, with bees used to collect pollen for analysis to establish whether the territory was ready to be reclaimed for human habitation.

He built a prototype with ‘bee gates’ that would allow pollen to be captured harmlessly from individual insects as they return to the hive (Figure 3). He also theorised about complementing this with automated software that uses cameras to track and analyse the bees’ dance to establish provenance. What we liked about this project is the imaginative idea of using bees to monitor air and land quality, by analysing vegetation through their pollen as well as radiation and pollutants in honey, to create maps of land quality levels. Using natural resources and naturally occurring events to complement what technology can do (and vice versa) is the way to achieve sustainable solutions in the long term.

 

Final thoughts

As part of our work at Intel, we collaborate with the world’s top universities to look at the future of cities with an eye toward the intersection of technology, environment, and social sustainability. In our groups one can find entrepreneurs, designers, hacktivists, engineers, data artists, architects and more.

 

We seek to support the same diversity of inspiration in today’s students, the future technology innovators, by tapping into how to connect creativity to technology for more vibrant, connected cities and communities. In many ways, working with first-year master’s students offers a refreshing perspective on how to open these questions with a beginner’s mind-set, and on how to embrace simplicity in the face of rising information: just because our digital traces and data footprint will keep increasing, our time to juggle what that means won’t.

 

Physical computing is coming into play in new ways, more often. It will not be enough to get lost in a screen – the interface of tomorrow will be everywhere, and interactions will leap off screens into the real world. ‘Future Health, Future Cities’ suggested how to consider the role of physical computing in helping create more sustainable services by, for example, making transparent what services are needed and where, by exploring how to communicate new urban information streams simply and well, and, last but not least, by reflecting on how to deliver resources where they will be most needed in a constantly changing city.

 

 

*Concepts described are for investigational research only.

**Other names and brands may be claimed as the property of others.

When National Coordinator Karen DeSalvo said, at HIMSS15 in Chicago, that the nation needed "true interoperability, not just exchange,” I imagined the 45,000 or so attendees crying out in unison, “Just exchange?”

 

Measured only since George Bush made EMRs a priority in his 2004 State of the Union address, it has taken our country 11 difficult years to get to a point where 75-80 percent of U.S. hospitals are deploying electronic health records and exchanging a limited set of data points and documents. Especially for smaller hospitals, “just exchange” represents a herculean effort in terms of acquisition, deployment, training and implementation.

 

But adoption of EHRs, meaningful use of the technology, and peer-to-peer exchange of data were never defined as the endpoints of this revolution. They are the foundation on which the next chapter – interoperability — will grow.

 

I asked a former colleague of mine, Joyce Sensmeier, HIMSS Vice President, Informatics, how she distinguished exchange from interoperability.

 

“I think of exchange as getting data from one place to another, as sharing data. Ask a question, get an answer,” she said. “Interoperability implies a many-to-many relationship. It also includes the semantics – what do the data mean? It provides the perspective of context, and to me, it’s going towards integration.”

 

There are other definitions as well. One CIO I correspond with told me he relies on the IEEE definition, which is interesting because ONC uses it too: “the ability of two or more systems or components to exchange information and to use the information that has been exchanged.” And as I was writing this blog, John Halamka beat me to the punch on his blog with a post titled So what is interoperability anyway?

 

His answer: it’s got to be more than “the kind of summaries we’re exchanging today which are often lengthy, missing clinical narrative and hard to incorporate/reconcile with existing records.”

 

Sensmeier notes that interoperability, like exchange, is an evolutionary step on the path to a learning health system. I like that metaphor. As with biological evolution, healthcare IT adaptation is shaped by an ever-changing environment. We shouldn’t expect that every step creates a straight line of progress — some solutions will be better than others. The system as a whole learns, adopts the better practices, and moves forward.

 

(This, incidentally, is why Meaningful Use has proven unpopular among some providers. Although it emerged from excellent thinking in both the public and private sectors, its implementation has taken a command-and-control approach that rewarded — or punished — providers for following processes rather than creating positive change.)

 

Moving from exchange to interoperability, DeSalvo said at HIMSS15, will require “standardized standards,” greater clarity on data security and privacy, and incentives for “interoperability and the appropriate use of health information.”

 

I’ve seen two recent reports that suggest the effort will be worthwhile, even if the payoff is not immediate. A study from Niam Yaraghi of the Brookings Institution found that after more than a decade of work, “we are on the verge of realizing returns on investments on health IT.” And analysts with Accenture report that healthcare systems saved $6 billion in avoided expense in 2014 thanks to the “increasing ubiquity of health IT.” Accenture expects that number to increase to $10 billion this year and $18 billion in 2016.

 

Sensmeier says she expects that reaching “true interoperability” will occur faster than “just exchange” did.

 

“It won’t happen overnight,” she warns, “but it won’t take as long, either.” With evolving agreement on standards, the infrastructure of HIEs and new payment models, the path to interoperability should be smoother than the one that got us to where we are today.

 

What questions do you have?

Would you use a medical diagnostic that is missing 63 percent of its input data, and simultaneously has a 30 percent false positive rate? I wouldn’t.

 

That is, I wouldn’t if I could avoid it. Sadly, I can’t. These are the stats for the structured problem list in the typical enterprise electronic health record (EHR). Two studies have found that at least 63 percent* of the key clinical information about patients can only be found in text and scanned documents, which means that it is not readily accessible to clinical decision support.

 

At the same time, consider heart failure. One study found that a shocking 30 percent** of patients with heart failure in their problem lists do not actually have heart failure!

 

On Wednesday, May 27, I had the pleasure of co-presenting on this topic at the Health Technology Forum with Vishnu Vyas. Vishnu is the Director of Research and Development at Apixio, a Big Data analytics company for healthcare. Our goal was to introduce the concept of The True State of the Patient as a computable model of the patient for optimizing care delivery, outcomes and revenue.

 

Our message was that Big Data analytics can dramatically improve, and eventually eliminate, these problems.

 

First, the bad news. I’ve already mentioned a source of error: improperly attributed diagnoses in the structured problem list. I’ve also mentioned a data gap: missing diagnoses in the structured problem list. But there are other critical gaps. For example, typical patients spend only 0.08 percent*** of their waking hours in a clinical setting, so we are missing observations of the remaining 99.92 percent*** of their lives. For a patient with diabetes or cardiovascular disease, the inability to measure activity over the course of the entire day makes it very difficult to proactively manage these chronic conditions. For Parkinson’s patients, whose symptoms such as tremor and sleep disturbance are intermittent and highly variable, it is nearly impossible to assess the impact of therapy with such meager sampling. In both cases, to fill in these gaps is to move closer to knowing the True State of the Patient.

 

Another gap is genomic data. The literature abounds with amazing examples of genomic markers that can inform care decisions for patients with a variety of conditions, including but not limited to cancer; however, these markers are not routinely measured and applied in clinical practice. We would argue that genomic data is often an important, but overlooked, component of the True State of the Patient.

 

Ok, so we have errors and we’re missing data. How does Big Data analytics help?

 

Let’s look at false positives for heart failure in the structured problem list. This is an important example, because heart failure patients are very sick, and they are subject to Medicare 30-day readmission rules, so the correct assessment of heart failure has real consequences for patients and health systems.

 

It is possible to build a classifier (a kind of machine learning model) that can look at all of the information in a patient’s chart and make a determination about how likely it is that the patient actually has heart failure. This is very interesting, because the computer is not diagnosing heart failure, it is reliably identifying what the chart of a true heart failure patient looks like. Machine learning, a typical tool in the Big Data arsenal, allows us to dramatically reduce the impact of errors in the problem list.
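For readers who want a feel for what such a classifier looks like, here is a deliberately tiny sketch using off-the-shelf tools. The charts, labels and features are invented for illustration; a production model would be trained on thousands of adjudicated charts with far richer inputs.

```python
# Minimal sketch of the chart-level classifier idea: learn what the chart of a confirmed
# heart-failure patient "looks like" from labelled examples. Toy data, illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

charts = [
    "ejection fraction 30% on echo, furosemide started, lower extremity edema",
    "admitted for copd exacerbation, no evidence of volume overload",
    "bnp 1450, orthopnea, carvedilol and lisinopril continued",
    "knee replacement follow-up, healing well, no cardiac complaints",
]
confirmed_hf = [1, 0, 1, 0]   # labels from manual chart review

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(charts, confirmed_hf)

new_chart = "worsening dyspnea, echo shows reduced ejection fraction, started on furosemide"
print(model.predict_proba([new_chart])[0][1])   # probability the chart reflects true heart failure
```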

 

What about gaps? The same trick can be used for filling in gaps in the structured data, and is especially effective when text mining and natural language processing are used to find key clinical information in the unstructured data. Progress notes, nursing notes, consult letters and hospital discharge summaries are fertile sources of information for this kind of analysis. Big Data analytics leads to fewer gaps in the structured data.
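At its very simplest, that gap-filling can start with pattern matching over the note text, as in the sketch below. Real clinical NLP handles negation, abbreviations and section context far more carefully; this only illustrates the idea.

```python
# Sketch of the simplest form of gap-filling from free text: flag notes that mention a
# condition without a nearby negation. Illustrative only, not a production NLP pipeline.
import re

CONDITION = re.compile(r"\b(heart failure|chf|cardiomyopathy)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(no|denies|without|negative for)\b[^.]{0,40}$", re.IGNORECASE)

def suggests_condition(note: str) -> bool:
    for match in CONDITION.finditer(note):
        preceding = note[:match.start()]
        if not NEGATION.search(preceding):
            return True          # unnegated mention -> candidate for the problem list
    return False

print(suggests_condition("Patient denies chest pain; no heart failure symptoms."))  # False
print(suggests_condition("Discharge summary: chronic CHF, continue furosemide."))   # True
```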

 

What about gaps caused by the 99.92 percent of the time that the patient spends outside the clinic? This is where Big Data’s Internet of Things (IOT) comes in. In healthcare, IOT means remote patient monitoring, which can include anything from activity monitoring with a smart watch, to weight, blood pressure and glucose monitoring with FDA-approved medical devices. There are a number of very interesting studies happening right now in which patient monitoring is filling in the gaps in the clinical record.

 

Finally, what about the genome? Everything about the human genome is Big Data analytics, from the raw data for more than two billion base pairs, to the huge datasets and patient cohorts that are needed to correlate observed genomic variation with clinical measurements and outcomes. Inclusion of the human genome in the True State of the Patient will lead to some of the largest and most interesting Big Data advances in healthcare over the next 10 years. 

 

Once we have the True State of the Patient in a computable form, it is possible to improve every area of healthcare. Quality reporting, precision medicine, “practice-based evidence,” proactive wellness and disease management, revenue optimization and care delivery optimization all flow from accurate models of healthcare data. There are few businesses that can survive without accurate, reliable business data to drive modeling and optimization. The changes we are seeing in healthcare today are demanding that healthcare have the same level of transparency into the True State of the Patient.

 

Big Data analytics will be a critical part of this transformation in healthcare. What questions do you have?

 

* Studies performed by Apixio (private communication) and Harvard: http://www.nejm.org/doi/full/10.1056/NEJMp1106313#t=article
** Apixio study (private communication)
***http://www.cdc.gov/nchs/fastats/physician-visits.htm
(4.08 visits/American/year)
http://mail.fmdrl.org/Fullpdf/July01/ss5.pdf
(all visits < 80 minutes time with physician- this is a strong upper limit)
NB: 0.08% assumes 16 waking hours in a day, 80 minutes per visit, 4.08 visits/year/patient

Read Part 1 of this two-part blog series

 

What does the next 17 years hold in store for us? I believe we are at the dawn of the era of petascale genomics. In 2015, we can manipulate gigascale genomic datasets without much difficulty. Deriving insight from data at this scale is pervasive today, and the IT infrastructure needed isn’t much more advanced than a small Linux cluster, or an easily affordable amount of time on a large cloud provider’s infrastructure. Manipulation of terascale datasets, however, is still not easy. It is possible, to be sure, and researchers are busy attempting to derive insight from genomic data at these scales. But it is definitely not easy, and again the reason is the IT infrastructure.

 

Terascale datasets do not fit neatly into easily affordable computational architectures in 2015 – one needs advanced techniques to split up the data for analysis (e.g., Hadoop-style workflows) or advanced systems well beyond the average Linux HPC cluster (e.g., the SGI UV server). Indeed, the skilled IT observer would say that these techniques and systems were invented for data analysis at terascale. But true petascale genomics research? No, we’re not there yet. We can certainly create data at petascale, and storage infrastructure for holding petabytes of data is fairly common (a petabyte stored on hard drives easily fits into half a rack in 2015), but that is not petascale analysis. Being adept at analyzing and deriving scientific insight from petascale genomic datasets requires IT architectures that have not yet been produced (although theoretical designs abound, including future generations of systems from SGI!).

 

We are headed in this direction. NGS technologies are only getting more affordable. If there’s anything the past 17 years have taught us, it is that once data can be generated at some massive scale, it will be.

 

Perhaps “consumer” genomics will be the driver. The costs of DNA sequencing will be low enough that individuals with no scientific or HPC background will want to sequence their own DNA for healthcare reasons. Perhaps the desire for control over one’s genomic data will become pervasive (giving a whole new meaning to “personalized medicine”) versus having that information be controlled by healthcare providers or (gasp!) insurance companies. Once you have millions of individuals capturing their own genomes on digital media we will have petascale genomics analysis.

 

Imagine the insights we can gain from manipulation of data at these scales. Genomic analysis of not one human genome, but millions of genomes, and perhaps also tracking genomic information through time. Why not? If the cost of DNA sequencing is not a barrier why not sequence individuals or even whole populations through time? That’ll give new meaning to “genome-wide association studies”, that’s for sure. Whatever the reason and whatever the timeline, the destination is not in doubt – we will one day manipulate petascale genomics datasets and we will derive new scientific insight simply because of the scale and pace of the research. And it will be advanced IT architectures from companies like SGI and Intel that will make this possible.

 

Here’s to the next 17 years. I’ll see you in 2032 and we’ll talk about how primitive your 50,000-core cluster and your 100PB filesystems are then.

 

What questions do you have?

 


James Reaney is Senior Director, Research Markets for Silicon Graphics International (SGI).

 

Patient engagement and analytics were trending topics at HIMSS15. I now wonder how the essence of those conversations will change going forward.


In this video, I share insight into how the discussions around analytics and patient engagement need to shift toward improving the quality of care and reducing costs. I also look at how the growing volume of personal health data coming from wearables and genomic research will help drive truly customized care into becoming a reality.

 

Watch the short video and let us know what questions you have about the future of analytics and customized care, and where you think they’re headed.

Seventeen years. That’s how long it has taken us to move from the dawn of automated DNA sequencing to the data tsunami that defines next-generation sequencing (NGS) and genomic analysis today. I’m remembering, with some fondness, the year 1998, which I’ll consider the year the life sciences got serious about automated DNA sequencing, and about sequencing the human genome in particular; the year the train left the station and genomics research went from the benchtop to prime mover of high-performance computing (HPC) architectures and never looked back.

 

1998 was the year Perkin Elmer formed PE Biosystems, an amalgam of Applied Biosystems, PerSeptive Biosystems, Tropix, and PE Informatics, among other acquisitions. That was the year PE decided they could sequence the human genome before the academics could – that is, by competing against their own customers, and they would do it by brute force application of automated sequencing technologies. That was the year Celera Genomics was born and Craig Venter became a household name. At least if you lived in a household where molecular biology was a common dinnertime subject.

 

Remember Zip Drives?

In 1998, PE partnered with Hitachi to produce the ABI “PRISM” 3700, and hundreds of these machines were sold worldwide, kick-starting the age of genomics. PE Biosystems revenues that year were nearly a billion dollars. The 3700 was such a revolutionary product that it purportedly could produce as much DNA data in a single day as the typical academic lab could produce in a whole year. And yet, from an IT perspective, the 3700 was quite primitive. The computational engine driving the instrument was a Mac Centris, later upgraded to a Quadra, then finally to a Dell running Windows NT. There was no provision for data collection other than local storage, which, if you wanted any portability, meant the then-ubiquitous Iomega Zip Drive. You remember those? Those little purplish-blue boxes that sat on top of your computer and gave you a whopping 100 megabytes of portable storage. The pictures on my phone would easily fill several Zip disks today.

 

Networking the 3700 was no mean feat either. We had networking in 1998, of course; gigabit Ethernet and most wireless networking technologies were still just an idea, but 100 megabit (100Base-TX) connections were common enough, and just about anyone in an academic research setting had at least a 10 megabit (10Base-T) connection available. The problem was the 3700, and specifically the little Dell PC that was paired with the instrument and responsible for all the data collection and subsequent transfer of data to some computational facility (Beowulf-style Linux HPC clusters were just becoming commonplace in 1998 as well). As shipped from PE at that time, there was zero provision for networking and zero provision for data management beyond the local hard drive and/or the Zip Drive.

 

It seems laughable today but PE did not consider storage and networking, i.e., the collection and transmission of NGS data, a strategic platform element. I guess it didn’t matter since they were making a BILLION DOLLARS selling 3700s and all those reagents, even if a local hard drive and sneakernet were your only realistic data management options. Maybe they just didn’t have the proper expertise at that time.  After all, PE was in the business of selling laboratory instruments, not computers, storage, or networking infrastructure.

 

Changing Times

How times have changed. NGS workflows today practically demand HPC-style computational and data management architectures. The capillary electrophoresis sequencing technology in the 3700 was long ago superseded by newer and more advanced sequencing technologies, dramatically increasing the data output of these instruments while simultaneously lowering costs. It is not uncommon today for DNA sequencing centers to output many terabytes of sequencing data every day from each machine, and there can be dozens of machines all running concurrently. Being a major NGS center now also means being adept at collecting, storing, transmitting, managing, and ultimately archiving petascale amounts of data. That's seven orders of magnitude removed from the Zip Drive. And if you are also in the business of genomics analysis, you need expertise in computational systems capable of handling data and data rates at these scales as well.
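
That last comparison checks out with back-of-the-envelope arithmetic: a petabyte is 10^15 bytes, while a 100-megabyte Zip disk holds roughly 10^8, a ratio of 10^7. A quick, purely illustrative sanity check in Python:

import math

ZIP_DISK_BYTES = 100 * 10**6   # one Iomega Zip disk: 100 megabytes
PETABYTE_BYTES = 10**15        # petascale storage: roughly one petabyte

ratio = PETABYTE_BYTES / ZIP_DISK_BYTES
print(f"ratio: {ratio:.0e}")                            # 1e+07
print(f"orders of magnitude: {math.log10(ratio):.0f}")  # 7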

 

Today, this means either massively scalable cloud-based genomics platforms or the more traditional, even larger-scale HPC architectures that dominate large research computing centers worldwide. We are far, far beyond the days of any single Mac Quadra or Dell server. Maybe if PE had been paying closer attention to the IT side of the NGS equation, they would still be making billions of dollars today.

 

In Part II of this blog, I’ll look at what’s in store for the next 17 years in genomics. Watch for the post next week.

 


James Reaney is Senior Director, Research Markets for Silicon Graphics International (SGI).

Telehealth is often touted as a potential cure for much of what ails healthcare today. At Indiana’s Franciscan Visiting Nurse Service (FVNS), a division of Franciscan Alliance, the technology is proving that it really is all that. Since implementing a telehealth program in 2013, FVNS has seen noteworthy improvements in both readmission rates and efficiency.


I recently sat down with Fred Cantor, Manager of Telehealth and Patient Health Coaching at Franciscan, to talk about challenges and opportunities. A former paramedic, emergency room nurse and nursing supervisor, Fred transitioned to his current role in 2015. His interest in technology made involvement in the telehealth program a natural fit.


At any one time, Fred’s staff of three critical care-trained monitoring nurses, three installation technicians and one scheduler is providing care for approximately 1,000 patients. Many live in rural areas with no cell coverage – often up to 90 minutes away from FVNS headquarters in Indianapolis.


Patients who choose to participate in the telehealth program receive tablet computers that run Honeywell LifeStream Manager* remote patient monitoring software. In 30-40 minute training sessions, FVNS equipment installers teach patients to measure their own blood pressure, oxygen, weight and pulse rate. The data is automatically transmitted to LifeStream and, from there, flows seamlessly into Franciscan's Allscripts* electronic health record (EHR). Using individual diagnoses and data trends recorded during the first three days of program participation, staff set specific limits for each patient's data. If transmitted data exceeds these pre-set limits, a monitoring nurse contacts the patient and performs a thorough assessment by phone. When further assistance is needed, the nurse may request a home visit by a field clinician or further orders from the patient's doctor. These interventions can reduce the need for in-person visits requiring long-distance travel.
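
The monitoring step described above amounts to a per-patient rule check: limits are set from each patient's diagnosis and early readings, and any transmitted value outside those limits triggers a nurse follow-up. Here is a minimal sketch of that logic, using hypothetical data structures rather than the actual LifeStream or Allscripts interfaces:

from dataclasses import dataclass
from typing import List

@dataclass
class VitalLimits:
    """Per-patient limits, set by staff from the diagnosis and early data trends."""
    metric: str      # e.g. "systolic_bp", "spo2", "weight_lbs", "pulse"
    low: float
    high: float

@dataclass
class Reading:
    patient_id: str
    metric: str
    value: float

def out_of_limits(reading: Reading, limits: List[VitalLimits]) -> bool:
    """True when a transmitted reading falls outside that patient's pre-set limits."""
    for lim in limits:
        if lim.metric == reading.metric:
            return not (lim.low <= reading.value <= lim.high)
    return False  # no limit configured for this metric

# Illustrative use: a weight reading that exceeds a CHF patient's configured range
limits = [VitalLimits("weight_lbs", low=180.0, high=188.0),
          VitalLimits("spo2", low=92.0, high=100.0)]
reading = Reading(patient_id="demo-001", metric="weight_lbs", value=191.5)

if out_of_limits(reading, limits):
    # In the program described above, this is the point at which a monitoring
    # nurse would phone the patient and assess before escalating further.
    print(f"Alert: {reading.metric}={reading.value} is outside the configured limits")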


FVNS’ telehealth program also provides patient education via LifeStream. For example, a chronic heart failure (CHF) patient experiencing swelling in the lower extremities might receive content on diet changes that could be helpful.


Since the program was implemented, overall readmission rates have been well below national averages. In 2014, the CHF readmission rate was 4.4%, compared to a national average of 23%. The COPD rate was 5.47%, compared to a national average of 17.6%, and the CAD/CABG/AMI rate was 2.96%, compared to a national average of 18.3%.


Despite positive feedback, medical staff resistance remains the biggest hurdle to telehealth adoption. Convincing providers, and even some field staff, that patients can collect reliable data with proper training has proven to be a challenge. The telehealth team is making a concerted effort to engage with patients and staff to encourage increased participation.


After evaluating what type of device would best meet the program’s needs, Franciscan decided on powerful, lightweight tablets. The touch screen devices with video capabilities are easily customizable and can facilitate continued program growth and improvement.


In the evolving FVNS telehealth program, Fred Cantor sees a significant growth opportunity. With knowledge gained from providing the service free to their own patients, FVNS could offer a private-pay package version of the program to hospital systems and accountable care organizations (ACOs).


Is telehealth a panacea? No. Should it be a central component of any plan to reduce readmission rates and improve workflow? Just ask the patients and healthcare professionals at Franciscan VNS.

 

Healthcare systems are coping with an unprecedented level of change. They’re managing a new regulatory environment, a more complex healthcare ecosystem, and an ever-increasing demand for services—all while facing intense cost pressures.

 

These trends are having a dramatic impact on EMR systems and healthcare databases, which have to maintain responsiveness even as they handle more concurrent users, more data, more diverse workflows, and a wider range of application functionality.

 

As Intel prepared to introduce the Intel® Xeon® processor E7 v3 family, we worked with engineers from Epic and InterSystems to ensure system configurations that would provide robust, reliable performance. InterSystems and VMware were also launching their next-generation solutions, so the test team ran a series of performance tests pairing the Intel Xeon processor E7-8890 v3 with InterSystems Caché 2015.1 and a beta version of VMware vSphere ESXi 6.0.

                                

The results were impressive. “We saw the scalability of a single operational database server increase by 60 percent,” said Epic senior performance engineer Seth Hain. “With these gains, we expect our customers to scale further with a smaller data center footprint and lower total cost of ownership.” Those results were also more than triple the end-user database accesses per second (global references or GREFs) achieved using the Intel® Xeon® processor E7-4860 with Caché® 2011.1.

 

leibforth graph.jpg

 

These results show that your healthcare organization can use the Intel Xeon processor E7 v3 family to implement larger-scale deployments with confidence on a single, scale-up platform.

 

In addition, if you exceed the vertical scalability of a single server, you can use InterSystems Caché’s Enterprise Cache Protocol (ECP) to scale horizontally. Here again, recent benchmarks show great scalability. A paper published earlier this year reported more than a threefold increase in GREFs for horizontal scalability compared to previous-generation technologies.

 

This combination of outstanding horizontal and vertical scalability, in the cost-effective environment of the Intel® platform, is exactly what's needed to meet rising demands and create a more agile, adaptable, and affordable healthcare enterprise.

                                                                              

What will these scalability advances mean for your healthcare IT decision makers and data center planners? How will they empower your organization to deliver outstanding patient care and enhance efficiency? I hope you'll read the whitepapers and share your thoughts. And please keep in mind: Epic uses many factors, along with benchmarking results, to provide practical sizing guidelines, so talk to your Epic system representative as you develop your scalability roadmap.

 

Read the whitepaper about vertical scalability with the Intel Xeon processor E7 v3.

 

Read the whitepaper about horizontal scalability with Intel Xeon processors.

 

Join and participate in the Intel Health and Life Sciences Community

 

Follow us on Twitter: @IntelHealth, @IntelITCenter, @InterSystems, @vmwareHIT

 

Steve Leibforth is a Strategic Relationship Manager at Intel Corporation

The health and well-being of any workforce has a direct impact on worker productivity, efficiency and happiness, all critical components of any successful organization. With this in mind, Intel has developed a next-generation healthcare program, called Connected Care, which includes an integrated delivery system based on a patient-centered medical home (PCMH) approach to care.

The shift to value-based compensation and team-based care is driving the need for improved collaboration and patient data sharing among a growing number of providers and medical systems. While we've successfully introduced the Connected Care program in smaller locations, bringing it to Oregon and the larger Portland metropolitan area presented us with a common healthcare IT challenge: interoperability.

Shah.PNG

 

Advanced Interoperability Delivers Better Experiences for Clinicians, Patients

 

To address these challenges, Intel is using industry standards geared toward advancing interoperability in healthcare; a brief, hypothetical sketch of what such a standards-based exchange can look like follows the list below. The ability to quickly share clinical information between on-site Health for Life Center clinics and delivery system partners (DSPs) enables:

 

  • Efficient and seamless experiences for members
  • Informed decision-making by clinicians
  • Improved patient safety
  • Increased provider efficiency
  • Reduced waste in the delivery of healthcare, by avoiding redundant testing
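
The post doesn't name the specific standards involved, but clinical data exchange of this kind is commonly built on HL7 interfaces or FHIR REST APIs. Purely as an illustration (the endpoint and resource IDs below are hypothetical, not Intel's or any partner's actual systems), a standards-based read of a shared patient record might look like this:

import json
import urllib.request

# Hypothetical FHIR endpoint -- illustrative only, not an actual Health for
# Life Center or delivery system partner server.
FHIR_BASE = "https://fhir.example.org/r4"

def fetch_resource(resource_type: str, resource_id: str) -> dict:
    """Read a single FHIR resource as JSON from the (hypothetical) server."""
    url = f"{FHIR_BASE}/{resource_type}/{resource_id}"
    req = urllib.request.Request(url, headers={"Accept": "application/fhir+json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Whichever partner system captured the data, a clinician-facing view can pull
# the same up-to-date record: demographics, allergies, medications, and so on.
patient = fetch_resource("Patient", "example-patient-id")
allergies = fetch_resource("AllergyIntolerance", "example-allergy-id")
print(patient.get("name"), allergies.get("code"))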

 

These improvements will help us make the Institute for Healthcare Improvement's (IHI's) Triple Aim a reality: improving the patient experience (quality and satisfaction), improving the health of populations, and reducing the per capita cost of health care.

 

Kaiser and Providence Part of Intel’s Connected Care Program

 

Intel's Connected Care program is offering Intel employees and their dependents two new options in Oregon. Kaiser Permanente Connected Care and Providence Health & Services Connected Care have both been designed to meet the following requirements of Intel and its employees:

 

  • “Optimize my time” – member and provider have more quality interactions
  • “Don’t make me do your work” – no longer rely on members to provide medical history
  • “Respect my financial health” - lower incidence of dropped hand-offs/errors
  • “Seamless member and provider experience” - based on bi-directional flow of clinical data

 

Now that we have eliminated the interoperability barrier, we can enable strong coordination between providers at Health for Life Centers (Intel's on-campus clinics) and the Kaiser and Providence network providers, making it possible to quickly share vital electronic health record (EHR) data between the different systems each organization uses.

 

In our efforts to deliver optimal care to every Intel employee, we sought solutions that would ensure all providers serving Intel Connected Care members are able to see an up-to-date patient health record, with accurate medications, allergies, problem lists and other key health data, every time a Connected Care member needs care.

 

Learn More: Advancing Interoperability in Healthcare

 

What questions do you have?

 

Prashant Shah is a Healthcare Architect with Intel Health & Life Sciences
