Microservices Expo: Blog Feed Post

Context Aware Data Privacy | Part 2

So you need to protect your data at rest

If you missed Part 1 of this article, you can read it here (link).

Continuing from Part 1, where I discussed the issues with data protection, this article explores how to solve some of them.

People tend to forget that hackers are attacking your systems for one reason only – DATA. You can spin that any way you want, but at the end of the day, they are not attacking your systems to see how you configured your workflow or how efficiently you processed your orders. They couldn't care less. They are looking for the golden nuggets of information that they can either resell or use to gain some other kind of monetary advantage. Your files, databases, data in transit, storage data, archived data, etc. are all vulnerable and all of value to a hacker.

Gone are the old days when someone sat in mom's basement hacking into US military systems to boast about it to a small group of friends. Remember the movie WarGames? Modern-day hackers are very sophisticated, well funded, often part of for-profit operations, and backed either by big organized cyber gangs or by other entities within their respective organizations.

So you need to protect your data at rest (regardless of how old the data is – as a matter of fact, the older the data, the less protected it tends to be), data in motion (going from somewhere to somewhere – whether between processes, services, or enterprises, or into/from the cloud or storage), and data in process/usage. You need to protect your data with your life.

Let us closely examine the things I said in Part 1 of this blog – the things that are a must for a cloud data privacy solution.

More importantly, let us examine the elegance of our data privacy gateway (code named Intel ETB – Expressway Tokenization Broker), which can help make this costly, scary, mind-numbing experience go easily and smoothly. The following elements, embedded in our solution, are going to make your problem go away sooner.

1. Security of your sensitive message processing device
As they say, Caesar's wife must be above suspicion (did you know Caesar divorced his wife in 62 BC?). What is the point of having a security device that inspects your crucial traffic if it can't be trusted? You need a solution whose vendor can make assertions regarding security and has the necessary certifications to back up those claims. This means a third-party validation agency should have tested the solution and certified it to be 'kosher enough' for an enterprise, data center, or cloud location. The certifications should include FIPS 140-2 Level 3, CC EAL 4+, DoD PKI, STIG vulnerability testing, NIST SP 800-21, support for HSMs, etc. The validation must come from recognized authorities, not just from the vendor.

2. Support for multiple protocols
When you are looking to protect your data, it is imperative that you choose a solution that can handle not only the HTTP, HTTPS, SOAP, JSON, AJAX, and REST protocols. You also need to consider whether the solution supports all the standard protocols known to the enterprise/cloud, including "legacy" protocols such as JMS, MQ, EMS, FTP, TCP/IP (and the secure versions of all of the above) and JDBC. More importantly, determine whether the solution can speak industry-standard protocols natively, such as SWIFT, ACORD, FIX, HL7, and MLLP, and whether it can support any custom protocols you might have. The solution you are looking at should give you the flexibility to inspect your ingress and egress traffic regardless of how your traffic flows.

3. Able to read into anything
This is an interesting concept. I was listening to one of our competitors' webcasts, and there was complete silence when a dreaded question was asked of the person speaking for that company: "How do you help me protect a specific format of data that I use in transactions with a partner?" The presenter ended up admitting that their solution lacked support for it. While I'm not trying to be unnecessarily abrasive, the point is that you should be able to look into any format of data flowing into, or out of, your system when the necessity arises. This means inspecting not only XML, SOAP, JSON, and other modern message formats; a solution should also retrofit your existing legacy systems with the same level of support for formats such as COBOL (oh yes, we will be doing a Y10K on this all right), ASCII, binary, EBCDIC, and other unstructured data streams, which are of equal importance. Sprinkle in industry message formats such as SWIFT, NACHA, HIPAA, HL7, EDI, ACORD, EDIFACT, FIX, and FpML to make the scenario interesting. And don't forget the good old messages sent in conventional formats such as MS Word, MS Excel, PDF, PostScript, and plain HTML. You need a solution that can look into any of these data types and protect the data in those messages seamlessly.
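To make "look into anything" concrete, here is a minimal format sniffer in Python. This is an illustrative sketch only – the field names are hypothetical, and a real broker handles EBCDIC, SWIFT, fixed-width records, etc. with dedicated parsers rather than string heuristics:

```python
import json
import xml.etree.ElementTree as ET

def detect_format(raw):
    """Best-effort guess at the wire format of an inbound message."""
    text = raw.strip()
    if text.startswith(("{", "[")):
        try:
            json.loads(text)
            return "json"
        except ValueError:
            pass
    if text.startswith("<"):
        try:
            ET.fromstring(text)
            return "xml"
        except ET.ParseError:
            pass
    return "unstructured"  # fixed-width, EBCDIC, etc. need a layout definition

def extract_field(raw, field):
    """Pull one named field out of a message, whatever its format."""
    fmt = detect_format(raw)
    if fmt == "json":
        return json.loads(raw).get(field)
    if fmt == "xml":
        node = ET.fromstring(raw).find(".//" + field)
        return node.text if node is not None else None
    return None
```

Once a gateway can locate the sensitive field regardless of format, the same downstream protection (tokenize, encrypt, redact) applies uniformly.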

4. Have an option to sense not only the sensitive nature of the message, but who is requesting it, in what context, and from where
This is where we started our discussion. Essentially, you should be able not only to identify data that is sensitive, but also to take the necessary actions based on the context. Intention, or heuristics, matter a lot more than just sensing that something is going out or in. So you should be able to sense who is accessing what, when, from where, and, more importantly, from what device. Once you identify that, you can determine how you want to protect that data. For example, if a person is accessing specific data from a laptop within the corporate network, you can let the data go with transport security alone, assuming he has sufficient rights to access it. But if the same person tries to access the same data from a mobile device, you can tokenize the data and send only the token to the device. (This also solves the problem of unknown locations.) All other conditions being equal, tokenization will occur based on a policy that senses that the request came from a mobile device.
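The laptop-versus-mobile decision above can be sketched as a small policy function. This is a toy illustration – the attribute names and rules are hypothetical, not Intel ETB's actual policy language:

```python
def choose_protection(data_class, authorized, device, on_corp_network):
    """Toy context-aware policy: the same user requesting the same data
    can get different treatment depending on device and location."""
    if data_class != "sensitive":
        return "pass-through"        # nothing worth protecting
    if not authorized:
        return "deny"                # identity check comes first
    if device == "laptop" and on_corp_network:
        return "transport-security"  # TLS alone is acceptable in-network
    return "tokenize"                # mobile or unknown location: send a token
```

The point of encoding the rules this way is that the protection level is derived per request, not fixed per data element.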

5. Have an option to dynamically tokenize, encrypt, or apply format-preserving encryption based on the need
This gives you the flexibility to encrypt certain messages/fields, tokenize others, or employ FPE on certain messages. While you are at it, don't forget to read my blog on why Intel's implementation of the FPE variation is one of the strongest in the industry here.
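Per-field dispatch between these techniques can be sketched as follows. This is illustrative only: the token vault is an in-memory dict (a real broker uses a secured store), and "encrypt" is stubbed with a hash as a stand-in for a real cipher such as AES-256:

```python
import hashlib
import secrets

TOKEN_VAULT = {}  # token -> original value; a real broker uses a secured store

def tokenize(value):
    """Replace a value with a random surrogate and remember the mapping."""
    token = secrets.token_hex(8)
    TOKEN_VAULT[token] = value
    return token

def protect_record(record, policy):
    """Apply a per-field action: 'tokenize', 'encrypt', or 'clear'."""
    protected = {}
    for field, value in record.items():
        action = policy.get(field, "clear")
        if action == "tokenize":
            protected[field] = tokenize(value)
        elif action == "encrypt":
            # stand-in for a real cipher (AES-256, or FPE for format-bound fields)
            protected[field] = hashlib.sha256(value.encode()).hexdigest()
        else:
            protected[field] = value
    return protected
```

A policy such as `{"ssn": "tokenize", "card": "encrypt"}` then protects each field differently in a single pass over the message.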

6. Support the strongest possible algorithms for encryption and storage, and use the most random numbers possible for tokenization
Not only should you verify that the solution has strong encryption algorithm options available out of the box (such as AES-256, SHA-256, etc.), but you should also ensure that the solution delivers cutting-edge security options as they become available – including support for the latest security updates.
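On the "most random possible" point: in Python terms, token material should come from a CSPRNG such as the `secrets` module, never from the predictable `random` module. A sketch (not Intel's implementation):

```python
import secrets
import string

ALPHABET = string.ascii_uppercase + string.digits

def make_token(length=16):
    """Draw a surrogate token from the OS entropy pool (a CSPRNG).
    random.random() is seedable and predictable, so an attacker who
    observes enough tokens could reconstruct the generator state."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

A token built from a weak generator can be reversed without ever touching the vault, which defeats the purpose of tokenization.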

7. Protect the encryption keys with your life. There is no point in encrypting the data, yet giving away the “Keys to the Kingdom” easily
Now this is the most important point of all. If there is one thing you take away from this article, let it be this: when you are looking at solutions, make sure that a solution is not only strong on all of the above points, but, most importantly, that it protects the proverbial keys with your life. This means the key storage should be encrypted and should be capable of: separation of duties (SOD), key-encrypting keys, strong key management options, key rotation, re-key options when keys are rotated, expired, or lost, key protection, key lifetime management, key expiration notifications, etc. In addition, explore whether there is an option to integrate with your existing in-house key manager, such as RSA DPM (the last thing you need is to disrupt the existing infrastructure by introducing a newer technology).
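Some of that key-lifecycle bookkeeping (versions, rotation, retirement, expiration) can be sketched as follows. This is a minimal illustration of the concepts, not a key manager: real key material would itself be wrapped by a key-encrypting key, ideally inside an HSM:

```python
from datetime import datetime, timedelta, timezone

class KeyRecord:
    """One versioned key with a bounded lifetime."""
    def __init__(self, version, material, lifetime_days=90):
        self.version = version
        self.material = material  # would be KEK-wrapped at rest in practice
        self.created = datetime.now(timezone.utc)
        self.expires = self.created + timedelta(days=lifetime_days)
        self.retired = False

class KeyManager:
    """Tracks the active key version and retires old ones on rotation."""
    def __init__(self):
        self.keys = {}
        self.version = 0

    def rotate(self, material):
        """Install a new key version; the previous one is kept (retired)
        so old ciphertexts can still be decrypted and re-keyed."""
        if self.version:
            self.keys[self.version].retired = True
        self.version += 1
        self.keys[self.version] = KeyRecord(self.version, material)
        return self.version

    def current(self):
        return self.keys[self.version]
```

Keeping retired versions around (rather than deleting them) is what makes graceful re-keying of old data possible.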

8. Encrypt the message while preserving the format so it won’t break the backend systems
This is really important if you want to do tokenization or encryption on the fly without the backend or connected client applications knowing about it. When you encrypt the data and preserve its format, it will not only look and feel the same as the original data, but the receiving party won't be able to tell the difference.
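As a toy illustration of "same format, different value" (this is NOT real FPE – production systems should use a vetted construction such as NIST SP 800-38G FF1): each digit is shifted by a keyed, position-dependent offset, so the output keeps the exact length and digits-and-dashes shape of the input:

```python
import hashlib
import hmac

def toy_format_preserving(plaintext, key):
    """Shift each digit by a keyed, position-dependent offset; non-digits
    (separators like '-') pass through, so the overall format survives
    and a backend expecting a 19-character card layout never notices."""
    out = []
    for i, ch in enumerate(plaintext):
        if ch.isdigit():
            offset = hmac.new(key, str(i).encode(), hashlib.sha256).digest()[0] % 10
            out.append(str((int(ch) + offset) % 10))
        else:
            out.append(ch)
    return "".join(out)
```

Because the transform is deterministic per key and reversible (subtract the same offsets), a gateway can apply it inline without schema changes on either side.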

If you are wondering where Intel comes into the picture in this area, we address all of the discussion points mentioned in #1 to #8, and a lot more, with our Intel cloud data privacy solution (a.k.a. Intel ETB – Expressway Tokenization Broker). Every single standard mentioned here is supported, and we are working on adding newer, better standards as they come along.

Check out information about our tokenization and cloud data privacy solutions here.

Intel Cloud Data Privacy/ Tokenization Solutions

Intel Cloud/ API resource center

I also encourage you to download the Intel Expressway Tokenization Broker data sheet.

 

Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel

Andy Thurai is Chief Architect and Group CTO of Application Security and Identity Products with Intel, where he is responsible for architecting SOA, Cloud, Mobile, Big Data, Governance, Security, and Identity solutions for their major corporate customers. In this role, he is responsible for helping Intel/McAfee field sales, technical teams, and customer executives. Prior to this role, he held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (DataPower), BMC, CSC, and Nortel. His interests and expertise include Cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics Engineering and has over 25 years of IT experience.

He blogs regularly at www.thurai.net/securityblog on Security, SOA, Identity, Governance and Cloud topics. You can also find him on LinkedIn at http://www.linkedin.com/in/andythurai


