Healthcare IT and the Cloud

Moving healthcare data into a cloud ecosystem

Over the last few weeks I've been hearing a lot of discussion around HIPAA. When we speak about HIPAA, the two concerns of data security and data privacy invariably arise.

In traditional data centers, database managers and data owners know where their data resides and can implement the processes needed to preserve privacy and audit access.

However, when we move to the cloud, where everything revolves around data, the servers, network, and storage are abstracted. This raises the concern that data owners may not know where their data sets physically reside, and that Cloud Service Provider (CSP) employees will be handling confidential patient data or Personally Identifiable Information (PII).

When healthcare organizations look to leverage the cloud ecosystem, the foremost concerns are compliance with HIPAA and the HITECH Act and the meaningful use provisions.

So what is meaningful use? According to HealthIT.gov:

"Meaningful use is the set of standards defined by the Centers for Medicare & Medicaid Services (CMS) Incentive Programs that governs the use of electronic health records and allows eligible providers and hospitals to earn incentive payments by meeting specific criteria."

The goal of meaningful use is to promote the spread of Electronic Health Records (EHR) to improve health care in the United States.

Benefits of meaningful use of EHRs include:

  • Complete and accurate information.
  • Better access to information.
  • Patient empowerment.

In the healthcare world, organizations are positioning themselves to attain meaningful use, both to capture the incentives allocated by the federal government and to ensure that reimbursements are not jeopardized for providers who fall short of the meaningful use provisions.

As healthcare practitioners and organizations increase their use of technology solutions in delivering clinical care, their IT departments face additional pressure to provide availability on demand and operate data centers approaching 99.999 percent availability (five nines allows roughly five minutes of downtime per year). In most cases this is a major challenge that carries the risk of unscheduled outages and costly solutions.

Assuring high availability for healthcare applications means meeting uptime requirements, and in today's environments that requires access to more than one data center. This can significantly increase the capital investment in data center infrastructure for healthcare organizations.

Looking to the cloud is not only the next step in service delivery but also a way to ensure high availability of clinical applications, allowing a healthcare organization to leverage the expertise and financial stability of an established CSP. Another advantage of a cloud ecosystem is rapid provisioning and deployment, with the ability to change compute capacity as demand changes.
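To make the elasticity point concrete, here is a minimal sketch of a demand-driven scaling loop. The cluster object and its current_load, scale_out, and scale_in calls are hypothetical placeholders for whatever API a given CSP exposes, and the thresholds are illustrative only.

```python
# Minimal sketch of a demand-driven scaling loop. The cluster object and its
# methods are hypothetical placeholders, not any specific provider's API.
import time

SCALE_OUT_AT = 0.75   # add capacity above 75% average utilization
SCALE_IN_AT = 0.30    # release capacity below 30% average utilization
CHECK_INTERVAL = 60   # seconds between checks


def autoscale(cluster):
    while True:
        load = cluster.current_load()          # e.g., average CPU across instances
        if load > SCALE_OUT_AT:
            cluster.scale_out(instances=1)     # provision an additional instance
        elif load < SCALE_IN_AT and cluster.size() > cluster.min_size():
            cluster.scale_in(instances=1)      # retire an idle instance
        time.sleep(CHECK_INTERVAL)
```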

Thus, in the event of a failure, server instances can be seamlessly moved to alternate hosts, or they can be clustered in advance to provide redundancy.
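The failover idea can be sketched the same way. The is_healthy, has_capacity, and migrate calls below stand in for a hypothetical CSP API; the point is simply that an unhealthy instance is relaunched on an alternate host rather than waiting for manual intervention.

```python
# Illustrative health-check-driven failover. The instance and host objects
# are hypothetical stand-ins for a CSP's management API.
def fail_over(instances, alternate_hosts):
    for instance in instances:
        if instance.is_healthy():
            continue
        # pick the first alternate host with room for this instance
        target = next((h for h in alternate_hosts if h.has_capacity(instance)), None)
        if target is not None:
            instance.migrate(to=target)   # relaunch on the healthy host
```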

Some may ask whether it is risky to transfer data from site to cloud. Generally no: most organizations already move data over the Internet through encrypted channels. Where concerns arise is at the hand-off of data into the CSP environment.

In a seamless environment, all data will have site-to-site encryption up to and including storage. Where we can see some separation is in the level of support offered by healthcare application vendors.
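To illustrate what encryption up to and including storage can look like, here is a minimal Python sketch that encrypts a record before it leaves the site and sends it only over a verified TLS channel. The endpoint name and key handling are assumptions for illustration, not any particular CSP's implementation.

```python
# Illustrative only: encrypt at rest (client-side, before the object reaches
# CSP storage) and in transit (TLS). The host name and key handling are
# placeholders, not a real deployment.
import ssl
import http.client
from cryptography.fernet import Fernet

# --- At rest: encrypt the record before it leaves the site ---
key = Fernet.generate_key()            # in practice, held in a KMS/HSM, not in code
cipher = Fernet(key)
record = b'{"patient_id": "...", "result": "..."}'
encrypted_record = cipher.encrypt(record)

# --- In transit: send only over a verified TLS channel ---
context = ssl.create_default_context()  # verifies the endpoint's certificate
conn = http.client.HTTPSConnection("storage.example-csp.internal", context=context)
conn.request("PUT", "/ehr/records/123", body=encrypted_record,
             headers={"Content-Type": "application/octet-stream"})
print(conn.getresponse().status)
```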

In the cloud, it is a given that a number of people whom cloud consumers do not control will have access to the physical servers and storage. For an IT security person this raises a real tension: on one hand there is the presupposition that complete control is being relinquished, and on the other, assurance depends on the prescriptive precautions defined by the CSP.

The cloud computing ecosystem is still evolving, and industry-wide certifications are still lacking. As the ecosystem matures, the intent is to drive toward processes, best practices, and certifications that provide legal protection and reduce the complexity of long negotiations and elaborate SLA requirements.

Within a regular data center, or even a small IT shop, one of my first expectations as an IT security leader is some form of centralized logging with automation. Carrying that mindset into the cloud ecosystem (clouds are, after all, data centers), healthcare customers' security leaders expect the assurance that detailed reporting is a given.
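As an illustration of that expectation, the sketch below forwards application audit events to a central syslog collector instead of leaving them on the local host. The collector address is a placeholder; a real deployment would typically add TLS transport and a SIEM behind it.

```python
# Minimal sketch of centralized logging: audit events are forwarded to a
# central collector rather than kept on the local host. The collector
# address is a placeholder for illustration.
import logging
from logging.handlers import SysLogHandler

audit_log = logging.getLogger("ehr.audit")
audit_log.setLevel(logging.INFO)

handler = SysLogHandler(address=("logs.example.internal", 514))
handler.setFormatter(logging.Formatter("%(asctime)s ehr-app %(levelname)s %(message)s"))
audit_log.addHandler(handler)

audit_log.info("user=jdoe action=view record=patient/123 result=allowed")
```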

Having worked on security strategy and assessments for a few cloud computing projects, I have seen first-hand that access rights were a major focus. In light of this, segmenting solutions for healthcare is not a complex process. Any access to servers and storage dedicated to a healthcare customer by anyone within the CSP organization can then be logged, providing assurance of controls around access.
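A simple way to picture that control is an audit wrapper around any code path that touches patient data, recording who accessed what and when. The names below are hypothetical and the sketch is illustrative only.

```python
# Illustrative access-audit pattern: wrap any function that touches patient
# data so each call is logged with actor, resource, and timestamp.
import functools
import logging
from datetime import datetime, timezone

access_log = logging.getLogger("ehr.access")
logging.basicConfig(level=logging.INFO)


def audited(resource_type):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(actor, resource_id, *args, **kwargs):
            access_log.info("ts=%s actor=%s resource=%s/%s action=%s",
                            datetime.now(timezone.utc).isoformat(),
                            actor, resource_type, resource_id, func.__name__)
            return func(actor, resource_id, *args, **kwargs)
        return wrapper
    return decorator


@audited("patient_record")
def read_record(actor, resource_id):
    return {"id": resource_id, "data": "..."}   # placeholder fetch

read_record("csp-operator-17", "123")
```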

From a legal perspective, specifically in contracts, healthcare customers expect provisions for strong financial penalties to indemnify against a breach and to hold the CSP accountable.

Some CSPs are moving toward providing a HIPAA Business Associate Agreement (BAA) for their healthcare customers. The BAA provides assurance that the CSP meets the compliance requirements of HIPAA and the HITECH Act, enabling the required physical, technical, and administrative safeguards.

In closing, I will state that HIPAA compliance and cloud computing do not have to be in conflict. Rather, healthcare entities can leverage the benefits of the cloud, coupled with the necessary due diligence and legal contracts, to meet their needs.

More Stories By Jon Shende

Jon RG Shende is an executive with over 18 years of industry experience. He commenced his career in the medical arena, then moved into the oil and gas environment, where he was introduced to SCADA and network technologies and also became certified in industrial pump and valve repairs. Jon gained global experience over his career working within several verticals, including pharma, medical sales and marketing services, and technology services, eventually becoming the youngest VP of an international enterprise. He is a graduate of the University of Oxford, holds a Masters certificate in Business Administration as well as an MSc in IT Security specializing in Computer Crime and Forensics, with a thesis on security in the cloud. Jon, well versed in the technology startup and mid-sized venture ecosystems, has contributed at the C and Senior Director level for former clients. As an IT security executive, Jon has experience with virtualization, strategy, governance, risk management, continuity, and compliance. He was an early adopter of web services and web-based tools and successfully beta tested remote assistance and support software for a major telecom. Within the realm of sales, marketing, and business development, Jon earned commendations for turnaround strategies within the services and pharma industries. For one pharma contract he was responsible for bringing low-performing districts up to number 1 rankings for consecutive quarters, as well as outperforming quotas from 125% up to 314%. Part of this was achieved by working closely with sales and marketing teams to ensure message and product placement were on point. Professionally he is a Fellow of the BCS Chartered Institute for IT, an HITRUST Certified CSF Practitioner, and holds the CITP and CRISC certifications. Jon Shende currently works as a Senior Director for a CSP. A recognized thought leader, Jon has been invited to speak for the SANS Institute, has spoken at Cloud Expo in New York, sat on a panel at Cloud Expo Santa Clara, and has been an Ernst and Young CPE conference speaker. His personal blog is located at http://jonshende.blogspot.com/view/magazine "We are what we repeatedly do. Excellence, therefore, is not an act, but a habit."
