Data Control and Data Residency

Consider your options carefully when planning the move to the cloud

The benefits associated with cloud adoption are well documented and understood. Organizations cite tremendous cost savings, fast deployment times and streamlined application support and maintenance when compared to traditional on-premises software deployments. So what is holding many companies back from adopting the cloud? A recent Gartner report entitled "Five Cloud Data Residency Issues That Must Not Be Ignored" highlights one key reason for this hesitancy: enterprises' questions and concerns about jurisdictional and regulatory control, arising from a lack of clarity on where cloud data truly resides. The report recommends that enterprises adopt measures that simultaneously boost the security of sensitive data and help satisfy regulatory compliance with data residency laws.

While the report provides excellent guidance on the implementation of one technique - encryption - to safeguard sensitive information in the cloud, it does not cover a few key points that deserve mention:

  • Tokenization should be given strong consideration as the data security technique that enterprises deploy when data residency is a critical concern.
  • If enterprises do deploy encryption, they should take every measure to ensure they are using the strongest form of encryption possible (e.g., FIPS 140-2 validated modules) to guard against the inherent threats of multi-tenant cloud environments.

Why Tokenization?
Tokenization is a process by which a sensitive data field, such as a "Name" or "National ID Number," is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated original value. While various approaches to creating tokens exist, they are frequently just randomly generated values that have no mathematical relationship to the original data field. This underlies the inherent security of the approach - it is nearly impossible to determine the original value of a sensitive data field from the surrogate token value alone. When tokenization is deployed within a Cloud Data Protection Gateway, the token "vault" that matches each clear-text value with its surrogate token stays on-site within the organization's data center. The benefit from a data residency compliance perspective is apparent - the sensitive data truly never leaves the enterprise's location.
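As a toy illustration of the vault concept described above (a hypothetical sketch, not the PerspecSys implementation - the class and method names here are invented for clarity), a tokenization vault can be written in a few lines of Python:

```python
import secrets

class TokenVault:
    """Toy on-premise tokenization vault. The mapping between clear text
    and token lives only here; the cloud sees only the random surrogate."""

    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse the existing token so one input always maps to one surrogate.
        if value in self._value_to_token:
            return self._value_to_token[value]
        # A random token has no mathematical relationship to the original.
        token = "TOK-" + secrets.token_hex(8)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        # De-tokenization is a lookup, not a decryption: without this vault,
        # the token is just a random string.
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("Jane Doe")  # only the surrogate is sent to the cloud
original = vault.detokenize(token)  # redeeming the token requires the vault
```

The key property to notice is that `detokenize` is a dictionary lookup: an attacker holding only the token, with no access to the on-premise vault, has nothing to reverse.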

How Encryption Differs
Encryption is an obfuscation approach that uses a cipher algorithm to mathematically transform sensitive data's original value to a surrogate value. The surrogate can be transformed back to the original value via the use of a "key," which can be thought of as the means to undo the mathematical lock. While encryption clearly can be used to obfuscate a value, a mathematical link back to its true form still exists. As described, tokenization is unique in that it completely removes the original data from the systems in which the tokens reside (the cloud) and there is no construct of a "key" that can be used to bring it back into the clear in the cloud.
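To make the contrast concrete, here is a deliberately weak toy cipher in Python (a single-use XOR, chosen only to make the role of the key visible - a real deployment would use a well-vetted, FIPS 140-2 validated algorithm such as AES, never this):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy cipher for illustration only: XOR-ing with the same key
    # both encrypts and decrypts. Do not use in practice.
    return bytes(d ^ k for d, k in zip(data, key))

key = secrets.token_bytes(16)
plaintext = b"Jane Doe".ljust(16)          # pad to the key length
ciphertext = xor_cipher(plaintext, key)    # this value leaves the premises

# Unlike a token, the ciphertext retains a mathematical link to the
# original: anyone holding the key can reverse it, wherever the data resides.
recovered = xor_cipher(ciphertext, key).rstrip()
```

Even with a strong algorithm in place of this toy, the structural point stands: the "key" travels conceptually with the ciphertext's recoverability, which is why encrypted data leaving the premises is still treated by some regulators as data leaving the premises.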

In our experience with many customers, it is this unique characteristic of tokenization that has made it the preferred approach selected by enterprises when they are explicitly trying to address data residency requirements. In the words of one of our largest customers (who selected tokenization as their data security approach), "encrypted data leaving your premises is still data leaving your premises."

But If Encryption Is Used - Deploy Using Best Practices
If an organization decides to deploy encryption to protect sensitive information going to the cloud, it needs to ensure that industry-standard best practices for encryption are followed. As highlighted in the Cloud Security Alliance's guidelines as well as numerous Gartner reports, the use of published, well-vetted strong encryption algorithms is a must. In fact, the previously mentioned report "Five Cloud Data Residency Issues That Must Not Be Ignored" notes that enterprises need to ensure that the "strength of the security is not compromised." A good guideline is to look for solutions that support FIPS 140-2 validated algorithms from well-known providers such as McAfee, RSA, SafeNet, Symantec and Voltage Security. A unique and highly valued quality of the PerspecSys gateway is that cloud end users can still enjoy the full capabilities of cloud applications (such as search) even with data that is strongly encrypted with these industry-accepted, validated algorithms.

Netting It Out
There is much to gain from using data obfuscation and replacement technologies to satisfy residency requirements and pave the way to cloud adoption. But equally, there is much to lose if the implementation is not well thought through. Do your homework: consider tokenization as an approach, question any encryption techniques that are not well vetted and accepted in the industry and, finally, compare solutions from multiple vendors (a suggestion - refer to our whitepaper as a guide: "Critical Questions to Ask Cloud Protection Gateway Providers"). We know from our experience helping many organizations around the world tackle these challenges with our Cloud Data Protection Gateway that by charting your path carefully at the beginning of your project, you can arrive at a solution that fully meets the needs of your Security, Legal, and Business Line teams.


PerspecSys Inc. is a leading provider of cloud protection and cloud encryption solutions that enable mission-critical cloud applications to be adopted throughout the enterprise. Cloud security companies like PerspecSys remove the technical, legal and financial risks of placing sensitive company data in the cloud. PerspecSys accomplishes this for many large, heavily regulated companies around the world by never allowing sensitive data to leave a customer's network, while maintaining the functionality of cloud applications. For more information, visit the PerspecSys website or follow @perspecsys on Twitter.

More Stories By Gerry Grealish

Gerry Grealish is Vice President, Marketing & Products, at PerspecSys. He is responsible for defining and executing PerspecSys’ marketing vision and driving revenue growth through strategic market expansion and new product development. Previously, he ran Product Marketing for the TNS Payments Division, helping create the marketing and product strategy for its cloud-based payment gateway and tokenization/encryption security solutions. He has held senior marketing and leadership roles for venture-backed startups as well as F500 companies, and his industry experience includes enterprise analytical software, payment processing and security services, and marketing and credit risk decisioning platforms.

