Cloud Security: Encryption Is Key

Cloud security should include a blend of traditional security elements combined with new “cloud-adjusted” security technologies

Today, with enterprises migrating to the cloud, the security challenge around protecting data is greater than ever before. Keeping data private and secure has always been a business imperative, but for many companies and organizations it has also become a compliance requirement and a necessity for staying in business. Regulations and standards such as HIPAA, Sarbanes-Oxley, PCI DSS and the Gramm-Leach-Bliley Act all require that organizations protect their data at rest and provide defenses against data loss and other threats.

Public cloud computing is the delivery of computing as a service rather than as a product, and is usually categorized into three service models: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). When it comes to public cloud security, all leading cloud providers are investing significant efforts and resources in securing and certifying their datacenters. However, as cloud computing matures, enterprises are learning that cloud security cannot be delivered by the cloud provider alone. In fact, cloud providers make sure enterprises know that security is a shared responsibility, and that cloud customers do share responsibility for data security, protection from unauthorized access, and backup of their data.

This "shared responsibility" makes sense most of the time. The responsibility of cloud providers offering Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) reasonably extends to the network and the infrastructure they provide. In fact, a typical agreement between you and your cloud provider will usually state that "you acknowledge that you bear sole responsibility for adequate security..." Businesses hosting their applications in the cloud therefore understand that they must share responsibility for ensuring the security of their data.

As cloud computing becomes increasingly mainstream, it is harder to distinguish the generic security issues an IT manager needs to tackle from those that are specific to cloud computing. Issues such as roles and responsibilities, secure application development, and least privilege apply just as much in traditional on-premises environments as they do in the cloud.

When an IT application is moved to a public cloud, all of the old security risks associated with it still exist, but new risk vectors are added. Previously your servers and your data were physically protected within your server room. Now the "virtual servers" and "virtual storage devices" are accessible to you, the customer, via a browser, raising the concern that attackers may gain that same access. Here are some new risk scenarios to consider when migrating to the cloud:

  1. Snapshotting your virtual storage by gaining access to your cloud console.
    A malicious user might gain access to your cloud console by stealing your credentials or by exploiting vulnerabilities in cloud access control. Once inside your account, a "snapshot" of your virtual disks allows the attacker to copy your virtual storage to his or her preferred location and abuse the data stored on those disks. In our opinion, this risk is the most obvious reason to deploy data encryption in the cloud, yet surprisingly, not all companies are aware of the threat, and many unknowingly expose their cloud-residing data to this significant risk.
  2. Gaining access from a different server within the same account.
    An attacker can reach sensitive data from a different virtual server inside the same account by exploiting a vulnerability on that other server (such as a misconfiguration). Alternatively, one of your other cloud system administrators (a "malicious insider" from a different project in your own organization) may use credentials, or exploit one of many known web application vulnerabilities, to launch an attack on your virtual server. Unencrypted data can be exposed and stolen in this way.
  3. The insider threat.
    Though this scenario gets mentioned a lot, it is unlikely that a cloud provider employee will be involved in data theft. The more realistic scenario is an accidental incident involving an insider with physical access to the data center. One well-known example is the case of HealthNet, a major US health insurer, which lost 1.9 million customer records after its IT vendor misplaced nine server drives during a move to a new data center. Under HIPAA rules, disk-level encryption would have rendered the lost drives unreadable and largely negated the incident's impact.
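A common mitigation for the snapshot and lost-drive scenarios above is to encrypt data on the client side before it ever reaches the virtual disk, keeping the key outside the cloud account. A minimal sketch in Python, assuming the third-party cryptography package; the record contents and key-storage arrangement here are purely illustrative:

```python
# Sketch: client-side encryption so a disk snapshot yields only ciphertext.
from cryptography.fernet import Fernet

# In practice the key would be generated and held OUTSIDE the cloud account
# (e.g., on-premises); here we generate it locally for demonstration only.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"patient-id=1234; diagnosis=..."
ciphertext = cipher.encrypt(record)          # this is what lands on the virtual disk

assert ciphertext != record                  # a snapshot exposes only ciphertext
assert cipher.decrypt(ciphertext) == record  # only the key holder can read the data
```

An attacker who snapshots the disk or walks off with the physical drive obtains only the ciphertext; without the off-cloud key, the data is unreadable.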

The industry consensus is that encryption is an essential first step in achieving cloud computing security. An effective solution must meet four critical requirements: high security, convenient management, robust performance, and regulatory compliance. Data at rest no longer sits within the proverbial "four walls" of the enterprise; the data owner manages the data with browsers and cloud APIs, and the concern is that a hacker can do the same. As such, cloud encryption is recognized as a basic building block of cloud security, though one difficult question has remained: where to store the encryption keys, since the keys cannot safely be stored in the cloud along with the data.

Protecting Content with Cloud Encryption and Key Management
Encryption technology is only as secure as its encryption keys, so you have to keep your keys in a safe place. You need a cloud key management solution that supports encryption of your data and supplies the encryption keys for files, databases (whether the complete database or the column, table, or tablespace level), or disks. Where to keep those keys is actually the trickiest security question when implementing encryption in the cloud, and it requires thought and expertise. For example, database encryption keys are often kept in a database "wallet," which is frequently just a file on your virtual disk. The concern is that hackers will attack the virtual disk in the cloud, from there gain access to the wallet, and through the wallet access the data.
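One widely used answer to the wallet problem is envelope encryption: the data key stored near the data is itself encrypted under a master key that never resides on the virtual disk. A hedged sketch, again assuming the third-party cryptography package, with the master key's off-cloud home (an HSM or key-management service) only simulated locally:

```python
# Sketch of envelope encryption: stealing the disk (ciphertext + wrapped key)
# is not enough, because the wrapping master key lives elsewhere.
from cryptography.fernet import Fernet

# Master key: in practice held off-cloud (e.g., in an HSM or key service);
# generated locally here purely for demonstration.
master_key = Fernet.generate_key()
kek = Fernet(master_key)  # key-encryption key

# Data key: encrypts the actual database or disk contents.
data_key = Fernet.generate_key()
wrapped_key = kek.encrypt(data_key)  # this wrapped form can sit in the cloud "wallet"

ciphertext = Fernet(data_key).encrypt(b"sensitive column value")

# Reading requires unwrapping the data key with the off-cloud master key first.
plaintext = Fernet(kek.decrypt(wrapped_key)).decrypt(ciphertext)
assert plaintext == b"sensitive column value"
```

The design choice is what makes the wallet-on-disk pattern tolerable: the wallet now holds only a wrapped key, and compromising it without the master key yields nothing usable.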

Encrypting sensitive data in the cloud is an absolute must. Cloud security should blend traditional security elements with new "cloud-adjusted" security technologies. Encryption should be a key part of your cloud security strategy, both because of the new cloud threat vectors and because of regulations such as the Patriot Act, and you should pay specific attention to key management.

More Stories By Ariel Dan

Ariel Dan is co-founder and Executive Vice President at Porticor cloud security. Follow him on Twitter: @ariel_dan

