Protecting the Network with Proactive Encryption Monitoring

Encryption technology is everywhere: in applications, data centers and other foundational infrastructure

Encryption is a key element of a complete security strategy. The 2013 Global Encryption Trends Study shows a steady increase in the use of encryption solutions over the past nine years. Thirty-five percent of organizations now have an encryption strategy applied consistently across the entire enterprise, up from 29 percent in 2012. The study showed that, for the first time, the main goal for most organizations in deploying encryption is mitigating the effects of data breaches. There is good reason for this shift: the latest Ponemon Institute research puts the average cost of a data breach at $3.5 million, up 15 percent from the previous year.

On the surface, the 35 percent figure seems like good news, until one realizes that 65 percent of organizations do not have an enterprise-wide encryption strategy. In addition, even a consistently applied strategy can lack visibility, management controls or remediation processes. This gives hackers the green light to attack as soon as they spot a vulnerability.

While organizations are moving in the right direction when it comes to encryption, much more needs to be done - and quickly. Encryption has come to be viewed as a commodity: organizations deploy it and assume they have taken the steps needed to maintain security. When breaches occur, it is rarely the fault of the software or the encryption protocol. The fault lies, rather, in the fact that encryption management has been left to IT system administrators and never properly governed with access controls, monitoring or proactive data loss prevention.

Too Many Keys Spoil the Security
While recent high-profile vulnerabilities have exposed the need to manage encrypted networks better, it's important to understand that administrators can cause vulnerabilities as well. In the Secure Shell (SSH) data-in-transit protocol, key-based authentication is one of the more common methods used to gain access to critical information. Keys are easy to create and, at the most basic level, are simple text files that can be easily uploaded to the appropriate system. Associated with each key is an identity: either a person or a machine. Depending on the assigned authorizations, the key grants that identity access to information assets and permits it to perform specific tasks, such as transferring a file or dropping a database. In the case of Secure Shell keys, those basic text files provide access to some of the most critical information within an organization.
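
To make concrete just how lightweight these credentials are, the sketch below parses an OpenSSH authorized_keys file and lists the identity comment and options attached to each key. It is a minimal illustration using only the Python standard library; the file layout is the standard OpenSSH format, and the helper names are our own.

```python
# Minimal sketch: inventory the identities in an OpenSSH authorized_keys file.
# Assumes the standard layout: [options] key-type base64-key [comment].
# Simplification: quoted options containing spaces are not handled.
from pathlib import Path

KEY_TYPES = {"ssh-rsa", "ssh-ed25519", "ssh-dss", "ecdsa-sha2-nistp256",
             "ecdsa-sha2-nistp384", "ecdsa-sha2-nistp521"}

def parse_authorized_keys(path):
    """Yield (options, key_type, comment) for each key entry in the file."""
    for raw in Path(path).read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split()
        if len(fields) < 2:
            continue  # malformed entry
        if fields[0] in KEY_TYPES:            # no options present
            options, key_type, rest = "", fields[0], fields[2:]
        else:                                 # first field is the options string
            options, key_type, rest = fields[0], fields[1], fields[3:]
        comment = " ".join(rest) if rest else "(no comment)"
        yield options, key_type, comment

if __name__ == "__main__":
    keys_file = Path.home() / ".ssh" / "authorized_keys"
    for opts, ktype, who in parse_authorized_keys(keys_file):
        print(f"{ktype:22} {who:30} options={opts or '-'}")
```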

A quick calculation reveals that the number of keys assigned over the past decade to employees, contractors and applications can run to a million or more for a single enterprise. In one example, a major bank with around 15,000 hosts had over 1.5 million keys circulating within its network environment. Around 10 percent of those keys - or 150,000 - provided high-level administrator access. This represents an astonishing number of open doors that no one was monitoring.
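
For the arithmetic behind those figures, a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the figures cited above.
hosts = 15_000
total_keys = 1_500_000
admin_share = 0.10          # roughly 10 percent granted admin-level access

print(total_keys / hosts)               # 100.0 keys per host, on average
print(int(total_keys * admin_share))    # 150000 unmonitored admin-level keys
```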

It may seem impossible that such a security lapse could happen, but consider that encryption is often perceived merely as a tool. Because nothing on the surface appeared to be out of place, no processes were shut down and the problem went undetected.

Safety Hazards
Forgetting to keep track of keys is one problem; failing to remove them is another. System administrators and application developers will often deploy keys in order to readily gain access to systems they are working on. These keys grant a fairly high level of privilege and are often used across multiple systems, creating a one-to-many relationship. In many cases, employees or contractors who are terminated - or even simply reassigned to other tasks that no longer require the same access - continue to carry access via Secure Shell keys; the assumption is that terminating the account is enough. Unfortunately, this is not the case when Secure Shell keys are involved; the keys must also be removed or the access remains in place.
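
One way such stale keys might be surfaced is to cross-reference key comments against a list of active accounts, as in the sketch below. It is a heuristic only: comments are free text, optional, and follow the user@host convention by custom rather than rule, and the CSV export and file paths here are hypothetical.

```python
# Minimal sketch: flag authorized_keys entries whose comment does not match an
# active account. Heuristic only; the CSV of active users is an assumed export.
import csv
from pathlib import Path

def load_active_users(csv_path):
    """Read a one-column CSV of active usernames (assumed HR/IdM export)."""
    with open(csv_path, newline="") as f:
        return {row[0].strip() for row in csv.reader(f) if row}

def find_stale_entries(keys_path, active_users):
    stale = []
    for line in Path(keys_path).read_text().splitlines():
        fields = line.strip().split()
        if len(fields) < 2 or line.lstrip().startswith("#"):
            continue
        # Treat the last field as a user@host comment; entries without a
        # comment surface as unattributed keys, which also deserve review.
        user = fields[-1].split("@")[0]
        if user not in active_users:
            stale.append(line)
    return stale

active = load_active_users("active_users.csv")   # hypothetical export
for entry in find_stale_entries("/home/deploy/.ssh/authorized_keys", active):
    print("REVIEW:", entry[:60])
```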

SSH keys pose another threat as well: subverting privileged access management (PAM) systems. Many PAM solutions use a gateway or jump host that administrators log into in order to reach network assets. The PAM system connects with user directories to assign privileges, monitors user actions and records which actions have taken place. While this appears to be an airtight way to monitor administrators, it is remarkably easy for an administrator to log in through the gateway, deploy a key on a target system and then log in directly using key authentication, circumventing any PAM safeguards in place.
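
A simple compensating control is to watch the key files themselves for unexpected changes. The sketch below is a minimal illustration rather than any vendor's mechanism: it hashes each authorized_keys file and compares it against a stored baseline, with the glob pattern and baseline file as assumptions.

```python
# Minimal sketch: detect newly planted keys by hashing authorized_keys files
# and comparing against a stored baseline. Paths and baseline file are
# illustrative assumptions, not part of any particular PAM product.
import glob
import hashlib
import json
from pathlib import Path

BASELINE = Path("authorized_keys_baseline.json")  # hypothetical baseline store

def snapshot(pattern="/home/*/.ssh/authorized_keys"):
    """Map each authorized_keys path to a SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest()
            for p in glob.glob(pattern)}

current = snapshot()
if BASELINE.exists():
    baseline = json.loads(BASELINE.read_text())
    for path, digest in current.items():
        if baseline.get(path) != digest:
            print("ALERT: key file changed or appeared:", path)
else:
    print("No baseline yet; recording current state.")
BASELINE.write_text(json.dumps(current, indent=2))
```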

Too Clever for Their Own Good
Poorly monitored access is just one security hazard in encrypted environments. Conventional PAM solutions, which use gateways and focus on interactive users only, are designed to monitor administrator activities; unfortunately, as mentioned earlier, they are fairly easy to work around. Encryption also blinds security operations and forensics teams just as it blinds attackers: because the traffic cannot be inspected, it is rarely monitored and is allowed to flow freely in and out of the network environment. This creates obvious risks and largely negates security intelligence capabilities.

The Internet offers many articles on how to use Secure Shell to bypass corporate firewalls. This is a fairly common and clever policy workaround that unfortunately creates a huge security risk. To eliminate this risk, the organization must decrypt and inspect the traffic.
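
Before such inspection is in place, one server-side control is to audit the OpenSSH daemon configuration for the directives that enable tunneling. In the sketch below, the directive names are standard OpenSSH, but the "wanted" values are an assumed hardening policy, not a universal recommendation.

```python
# Minimal sketch: audit an OpenSSH server config for tunneling directives.
from pathlib import Path

POLICY = {
    "allowtcpforwarding": "no",   # blocks -L/-R/-D port forwarding
    "gatewayports": "no",         # no remote binds on external interfaces
    "permittunnel": "no",         # no layer-2/3 tun device forwarding
}

def audit_sshd_config(path="/etc/ssh/sshd_config"):
    seen, findings = {}, []
    for raw in Path(path).read_text().splitlines():
        line = raw.split("#", 1)[0].strip()   # drop comments
        parts = line.split(None, 1)
        if len(parts) == 2:
            seen[parts[0].lower()] = parts[1].strip().lower()
    for directive, wanted in POLICY.items():
        actual = seen.get(directive, "(unset: server default applies)")
        if actual != wanted:
            findings.append(f"{directive}: {actual!r}, policy wants {wanted!r}")
    return findings

for finding in audit_sshd_config():
    print("FINDING:", finding)
```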

Traffic Safety
Decrypting Secure Shell traffic requires an organization to use an inline proxy with access to the private keys - essentially a friendly man-in-the-middle - to decrypt the traffic without interfering with the network. When successfully deployed, 100 percent of encrypted traffic, for both interactive users and machine-to-machine (M2M) identities, can be monitored. And because this happens at the network level, malicious parties cannot simply work around it. With this method, enterprises can proactively detect suspicious or out-of-policy traffic. This approach is called encrypted channel monitoring, and it represents the next generation in the evolution of PAM.

This kind of monitoring solves the issue of decrypting traffic at the perimeter and helps organizations move away from a gateway approach to PAM. At the same time, it prevents attackers from using the organization's own encryption technology against it. In addition, an organization can use inline access controls and user profiling to control what activities a user can undertake. For example, policy controls can be enforced to forbid file transfers from certain critical systems. With more advanced solutions, an organization can even block subchannels from running inside the encrypted tunnel - a preferred method for quickly exfiltrating data.
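
To illustrate the idea, the sketch below shows what such a policy check might look like conceptually once the monitor can see inside the channel. The session fields, host inventory and rule shapes are illustrative assumptions, not any vendor's actual API; the port-forwarding channel names are those defined in the SSH connection protocol (RFC 4254).

```python
# Conceptual sketch of the policy check an encrypted-channel monitor might
# apply once traffic is decrypted inline. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Channel:
    user: str
    dest_host: str
    channel_type: str   # simplified: "shell"/"sftp" (session subsystems),
                        # "direct-tcpip"/"forwarded-tcpip" (port forwards)

CRITICAL_HOSTS = {"db-prod-01", "pci-vault"}            # assumed inventory
BLOCKED_SUBCHANNELS = {"direct-tcpip", "forwarded-tcpip"}

def evaluate(channel: Channel) -> str:
    # Forbid file transfers from designated critical systems.
    if channel.channel_type == "sftp" and channel.dest_host in CRITICAL_HOSTS:
        return "BLOCK: file transfer from critical system"
    # Block tunneled subchannels, a preferred path for quick exfiltration.
    if channel.channel_type in BLOCKED_SUBCHANNELS:
        return "BLOCK: tunneled subchannel"
    return "ALLOW"

print(evaluate(Channel("alice", "db-prod-01", "sftp")))           # BLOCK
print(evaluate(Channel("svc-backup", "app-01", "direct-tcpip")))  # BLOCK
print(evaluate(Channel("bob", "app-01", "shell")))                # ALLOW
```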

Encryption technologies are often set up without effective monitoring or proper access controls, which blinds layered defenses as well. A single major vulnerability can compromise an entire server, which in turn can expose other areas of the network to subsequent attacks.

A Healthy Respect for Encryption
Encryption technology is everywhere: in applications, data centers and other foundational infrastructure. While it has been widely embraced, it has also often been abused, misused or neglected. Most organizations have not instituted centralized provisioning, encrypted channel monitoring and other best practices, even though the consequences of inadequate security can be severe. IT security staff may think conventional PAM is keeping their organizations safe, when commonly known workarounds are instead putting their data in jeopardy.

No one understands better than IT administrators how critical network security is. This understanding should spur security professionals to do everything in their power to make their organizations' data as safe as possible. Given all that can go awry, organizations should examine their encrypted networks, enable layered defenses and put proactive monitoring in place if they have not yet done so. An all-inclusive encrypted channel monitoring strategy will go a long way toward securing the network.

About the Author

Jason Thompson is director of global marketing for SSH Communications Security. He brings more than 12 years of experience launching new, innovative solutions across a number of industry verticals. Prior to joining SSH, he worked at Q1 Labs, where he helped build awareness around security intelligence and holistic approaches to dealing with advanced threat vectors. Mr. Thompson holds a BA from Colorado State University and an MA from the University of North Carolina at Wilmington.

