

Microservices Expo: Blog Feed Post

Context Aware Data Privacy | Part 2

So you need to protect your data at rest

If you missed Part 1 of this article, you can read it here when you get a chance (link).

Continuing from Part 1, where I discussed the issues with data protection, this article explores how to solve some of those issues.

People tend to forget that hackers attack your systems for one reason only: DATA. Spin it any way you want, but at the end of the day they are not attacking your systems to see how you configured your workflow or how efficiently you processed your orders. They couldn't care less. They are looking for the golden nuggets of information that they can resell or use to gain some other kind of monetary advantage. Your files, databases, data in transit, storage data, archived data, and so on are all vulnerable, and all of it is of value to a hacker.

Gone are the old days when someone sat in mom's basement hacking into US military systems to boast about it to a small group of friends. Remember the movie WarGames? Modern hackers are very sophisticated and well funded, often operate as for-profit organizations, and are backed either by big organized cyber gangs or by other entities within their respective organizations.

So you need to protect your data at rest (regardless of how old the data is; in fact, the older the data, the less protected it tends to be), your data in motion (moving between processes, services, and enterprises, or to and from the cloud and storage), and your data in process or in use. You need to protect your data with your life.

Let us closely examine the requirements I laid out in Part 1 of this blog, the things that are a must for any cloud data privacy solution.

More importantly, let us examine how the elegance of our data privacy gateway (code named Intel ETB, for Expressway Tokenization Broker) can help make this costly, scary, mind-numbing experience go easily and smoothly. The following elements embedded in our solution will make your problem go away sooner.

1. Security of your sensitive message processing device
As they say, Caesar's wife must be above suspicion (did you know Caesar divorced his wife in 62 BC?). What is the point of having a security device that inspects your crucial traffic if it can't be trusted? You need a solution whose vendor can make assertions regarding security and has the necessary certifications to back up those claims. This means a third-party validation agency should have tested the solution and certified it 'kosher enough' for an enterprise, data center, or cloud location. The certifications should include FIPS 140-2 Level 3, CC EAL 4+, DoD PKI, STIG vulnerability testing, NIST SP 800-21, support for HSMs, and so on. The validation must come from recognized authorities, not just from the vendor.

2. Support for multiple protocols
When you are looking to protect your data, it is imperative to choose a solution that handles not only HTTP/HTTPS, SOAP, JSON, AJAX, and REST. You also need to consider whether the solution supports the standard protocols known to the enterprise and cloud, including "legacy" protocols such as JMS, MQ, EMS, FTP, TCP/IP (and the secure versions of all of the above) and JDBC. More importantly, you need to determine whether the solution natively speaks industry-standard protocols such as SWIFT, ACORD, FIX, HL7, and MLLP, and whether it can support any custom protocols you might have. The solution you choose should give you the flexibility to inspect your ingress and egress traffic regardless of how that traffic flows.

3. Able to read into anything
This is an interesting concept. I was listening to one of our competitor's webcasts when what appeared to be a dreaded question was put to the person speaking on behalf of that company: "How do you help me protect a specific format of data that I use in transactions with a partner?" There was complete silence before the presenter conceded that their solution lacked support for it. I am not trying to be unnecessarily abrasive; the point is that you should be able to look into any format of data flowing into or out of your system when the necessity arises. This means inspecting not only XML, SOAP, JSON, and other modern message formats. A solution should also be able to retrofit your existing legacy systems with the same level of support: formats such as COBOL (oh yes, we will be doing a Y10K on this all right), ASCII, Binary, EBCDIC, and other unstructured data streams are of equal importance. Sprinkle in industry formats such as SWIFT, NACHA, HIPAA, HL7, EDI, ACORD, EDIFACT, FIX, and FpML to make the scenario interesting. And don't forget the good old messages that can be sent in conventional ways: MS Word, MS Excel, PDF, PostScript, and good old HTML. You need a solution that can look into any of these data types and protect the data in those messages seamlessly.

4. Have an option to sense not only the sensitive nature of the message, but also who is requesting it, in what context, and from where
This is where we started our discussion. Essentially, you should be able not only to identify data that is sensitive, but also to take the necessary actions based on context. Intention, or heuristics, matters a lot more than simply sensing that something is going out or coming in. So you should be able to sense who is accessing what, when, from where, and, more importantly, from what device. Once you identify that, you can determine how you want to protect the data. For example, if a person is accessing specific data from a laptop inside the corporate network, you can let the data go with transport security alone, assuming he has sufficient rights to it. But if the same person tries to access the same data from a mobile device, you can tokenize the data and send only the token to the device. (This also solves the problem of an unknown location.) All other conditions being equal, the tokenization occurs based on a policy that senses that the request came from a mobile device.
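The laptop-versus-mobile decision above can be sketched as a small policy function. This is purely illustrative pseudologic, not any Intel ETB API; the function and rule names are my own inventions to show the shape of a context-aware decision.

```python
# Hypothetical sketch of a context-aware protection policy.
# The names here are illustrative only -- a real gateway would evaluate
# configured policies, not hard-coded Python.

def choose_protection(user_authorized: bool, device: str, network: str) -> str:
    """Decide how to protect sensitive data based on request context."""
    if not user_authorized:
        return "deny"                    # no rights -> no data at all
    if device == "laptop" and network == "corporate":
        return "transport-security"      # TLS alone is acceptable on-premises
    if device == "mobile" or network == "unknown":
        return "tokenize"                # send only a token off-network
    return "encrypt"                     # default: field-level encryption

print(choose_protection(True, "laptop", "corporate"))  # transport-security
print(choose_protection(True, "mobile", "corporate"))  # tokenize
```

The same user gets different treatment purely because of the requesting device, which is exactly the context sensitivity described above.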

5. Have an option to dynamically tokenize, encrypt, or format-preserve the encryption based on need
This gives you the flexibility to encrypt certain messages or fields, tokenize others, or employ FPE on others. While you are at it, don't forget to read my blog on why Intel's implementation of the FPE variation is one of the strongest in the industry here.

6. Support the strongest possible algorithms for encryption and storage, and use the most random number possible for tokenization
Not only should you verify that the solution has strong encryption algorithm options available out of the box (such as AES-256 and SHA-256), but you should also ensure that the solution delivers cutting-edge security options as they become available, including support for the latest security updates.
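The "most random number possible" point matters because a token's entire security rests on its unpredictability. As a minimal sketch (not the ETB's internal mechanism), Python's `secrets` module draws from the operating system's cryptographically secure RNG:

```python
import secrets

# Tokens must come from a cryptographically secure RNG -- never from a
# seeded PRNG like random.random(), whose output can be predicted.
def new_token(n_bytes: int = 16) -> str:
    """Return an unguessable surrogate value (2 * n_bytes hex characters)."""
    return secrets.token_hex(n_bytes)

token = new_token()
print(len(token))   # 32 -- a 128-bit token rendered as hex
```

A vault then maps this meaningless surrogate back to the original sensitive value; the token itself reveals nothing.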

7. Protect the encryption keys with your life. There is no point in encrypting the data yet giving away the "Keys to the Kingdom" easily
Now this is the most important point of all. If there is one thing you take away from this article, let it be this: when you are looking at solutions, make sure not only that a solution is strong on all of the above points, but, most importantly, that it protects the proverbial keys with your life. This means the key storage should be encrypted and should be capable of separation of duties (SoD), key-encrypting keys, strong key management options, key rotation, re-keying when keys are rotated, expired, or lost, key protection, key lifetime management, key expiration notifications, and so on. In addition, you need to explore whether there is an option to integrate with an existing in-house key manager such as RSA DPM (the last thing you need is to disrupt the existing infrastructure by introducing a newer technology).
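To make the lifecycle requirements concrete, here is a toy sketch of the bookkeeping behind key rotation, re-keying, and expiration notifications. Everything here (the `KeyRing` class and its methods) is hypothetical illustration; a real deployment delegates this to an HSM or an external key manager such as RSA DPM rather than application code.

```python
import secrets
from datetime import datetime, timedelta

class KeyRing:
    """Toy key-lifetime manager: rotation, retained old keys, expiry alerts."""

    def __init__(self, lifetime_days: int = 90):
        self.lifetime = timedelta(days=lifetime_days)
        self.keys = {}                 # key_id -> (key_material, expires_at)
        self.active = self._generate()

    def _generate(self) -> str:
        key_id = secrets.token_hex(8)
        self.keys[key_id] = (secrets.token_bytes(32),
                             datetime.utcnow() + self.lifetime)
        return key_id

    def rotate(self) -> str:
        """New active key for encryption; old keys stay for re-keying data."""
        self.active = self._generate()
        return self.active

    def expiring(self, within_days: int = 7):
        """Key ids needing rotation soon (drives expiration notifications)."""
        cutoff = datetime.utcnow() + timedelta(days=within_days)
        return [kid for kid, (_, exp) in self.keys.items() if exp <= cutoff]

ring = KeyRing()
old_id = ring.active
ring.rotate()
print(ring.active != old_id, old_id in ring.keys)  # True True
```

Note that rotation does not discard the old key: data encrypted under it must remain decryptable until it is re-keyed, which is exactly why lifetime management is part of the requirement.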

8. Encrypt the message while preserving the format so it won’t break the backend systems
This is really important if you want to tokenize or encrypt on the fly without the backend or connected client applications knowing about it. When you encrypt the data while preserving its format, it not only looks and feels the same as the original data, but the receiving party won't be able to tell the difference.
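To illustrate the "won't break the backend" property only: the toy below maps a 16-digit card number to another 16-digit string, keeping the last four digits, so any system validating length and character class keeps working. To be clear, this is a one-way, keyed-hash tokenization of my own, not a standards-compliant, reversible FPE cipher such as NIST FF1, and not how Intel ETB works internally.

```python
import hmac, hashlib

# Toy format-preserving tokenization: same length, digits only, last four
# digits left in the clear. NOT real FPE (which is reversible encryption).
def fp_tokenize(pan: str, key: bytes) -> str:
    digits = pan.replace(" ", "")
    mac = hmac.new(key, digits.encode(), hashlib.sha256).digest()
    # One pseudorandom digit per masked position, derived from the MAC.
    masked = "".join(str(mac[i] % 10) for i in range(len(digits) - 4))
    return masked + digits[-4:]        # same length, last 4 preserved

token = fp_tokenize("4111111111111111", b"demo-key")
print(len(token), token[-4:])          # 16 1111
```

Because the output is still "16 digits ending in the real last four," downstream validation, display, and database schemas are untouched, which is the whole point of format preservation.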

If you are wondering where Intel comes into the picture, we address all of the discussion points in #1 through #8, and a lot more, with our Intel cloud data privacy solution (a.k.a. Intel ETB, the Expressway Tokenization Broker). Every single standard mentioned here is supported, and we are working on adding newer, better standards as they come along.

Check out information about our tokenization and cloud data privacy solutions here.

Intel Cloud Data Privacy/ Tokenization Solutions

Intel Cloud/ API resource center

I also encourage you to download the Intel Expressway Tokenization Broker Data Sheet:


Andy Thurai — Chief Architect & Group CTO, Application Security and Identity Products, Intel

Andy Thurai is Chief Architect and Group CTO of Application Security and Identity Products with Intel, where he is responsible for architecting SOA, cloud, mobile, Big Data, governance, security, and identity solutions for major corporate customers. In this role, he supports Intel/McAfee field sales, technical teams, and customer executives. He has previously held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (DataPower), BMC, CSC, and Nortel. His interests and expertise include cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics Engineering and has over 25 years of IT experience.

He blogs regularly at www.thurai.net/securityblog on Security, SOA, Identity, Governance and Cloud topics. You can also find him on LinkedIn at http://www.linkedin.com/in/andythurai

