Security Automation Connects Silos

The true promise of security automation

A wealth of security information exists in our networks from a variety of sources - policy servers, firewalls, switches, networking infrastructure, defensive components, and more. Unfortunately, most of that information is locked away in separate silos due to differences in products and technologies, as well as by companies' organizational boundaries. Further complicating the issue, information is stored in different formats and communicated over different protocols.

An open standard from the Trusted Computing Group (TCG) offers the capability to centralize communication and coordination of information to enable security automation. The Interface for Metadata Access Points - IF-MAP for short - is like Facebook for network and security technology, allowing real-time sharing of information across a heterogeneous environment.

IF-MAP, part of TCG's Trusted Network Connect (TNC) architecture, makes it possible for any authorized device or system to publish information to a Metadata Access Point (MAP), a clearinghouse for information about who's on the network, what endpoint they're using, how they're behaving, and many other details of the network. Systems can also search the MAP for relevant information and subscribe to any updates to that information. Just as IP transformed communications, IF-MAP revolutionizes the way systems share data.
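
To make the publish/search/subscribe pattern concrete, here is a minimal in-memory model of a MAP in Python. This is an illustrative sketch only - a real MAP client speaks SOAP/XML over HTTPS as defined by the TCG specifications, and the identifier and metadata shapes below are invented for the example.

```python
from collections import defaultdict

class MiniMap:
    """Toy clearinghouse modeling a MAP's core semantics."""

    def __init__(self):
        self.metadata = defaultdict(list)      # identifier -> metadata items
        self.subscribers = defaultdict(list)   # identifier -> callbacks

    def publish(self, identifier, item):
        # Attach a metadata item to an identifier and push it to
        # everyone subscribed to that identifier.
        self.metadata[identifier].append(item)
        for notify in self.subscribers[identifier]:
            notify(identifier, item)

    def search(self, identifier):
        # Return everything currently known about an identifier.
        return list(self.metadata[identifier])

    def subscribe(self, identifier, callback):
        # Register to receive future updates about an identifier.
        self.subscribers[identifier].append(callback)

# Usage: a policy server subscribes to an endpoint; an authentication
# server then publishes who logged in from it, and the policy server
# is notified in real time.
mapd = MiniMap()
mapd.subscribe("endpoint:10.0.0.5",
               lambda ident, item: print(f"policy server sees {item} on {ident}"))
mapd.publish("endpoint:10.0.0.5", {"type": "authenticated-as", "user": "alice"})
print(mapd.search("endpoint:10.0.0.5"))
```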

Security automation is any part of a security system that is able to operate without - or with only limited - administrative involvement. As shown in Figure 1, a security administrator can define a unified security policy that applies to different types of protective mechanisms, such as next-generation firewalls (NGFW), intrusion prevention systems (IPS), unified threat management (UTM) systems, and more. Best-of-breed components from multiple vendors can share information using a standard information bus.

Figure 1: Effective security automation includes several protection mechanisms.

This coordination can extend beyond front-line access control products to back-end systems such as authorization databases, virtualization technology, and reputation systems. A policy server might create and modify policy based completely on the information received from other resources in the environment.

Logs from multiple sources can be collected and correlated by a security information and event management (SIEM) system, which itself acts as both a consumer of information and a provider of real-time intelligence based on that information. Security operations personnel can easily oversee activities in the network and provide human intervention in cases where full automation may not be achievable or desirable. Security automation enhances fundamental security solutions, adding dynamic, responsive, intelligent decision-making.

Establishing Network Trust
One of the basic solutions enabled by the TNC architecture is Comply to Connect, which incorporates Network Access Control (NAC) principles - an endpoint must first show its compliance with selected endpoint health requirements before being granted access to the network. Figure 2 shows a common Comply to Connect scenario.

Figure 2: The TNC architecture enables evaluation and enforcement of compliance at admission.

The endpoint, on the left, is a device attempting to access a protected network. The enforcement point is a guard that grants or denies access based on instructions from the policy server. The policy server is really the brains of the operation; it looks at the configured policy and decides what level of access should be granted. Then it informs the enforcement point, which executes those instructions.

Many enforcement options exist; the example in Figure 2 shows a wireless access point and a switch, but environments may also use a firewall or a virtual private network (VPN) gateway. Each of these has its own pros and cons; for example, a wireless access point with 802.1X can completely block unauthorized users at the point of connection, but while it provides admission control, it doesn't offer enforcement deeper in the network. For that reason, most NAC solutions support several types of enforcement points, which can be used individually or in combination.

The security policy controlling the compliance check shown in Figure 2 is quite simple: every Windows 7 endpoint on the network must have a self-encrypting drive (SED), up-to-date anti-virus protection, and a personal firewall. When a new Windows 7 endpoint comes on the network, the enforcement point will query it and then consult the policy server. If the endpoint complies with security policy, it is given access to the production network. Another endpoint that does not have an SED may be given only limited access. That way, if either endpoint is lost or stolen, protected information resides only on the endpoint that can store it securely on an SED.
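
A hypothetical sketch of that decision logic might look like the following; the posture field names are invented for illustration and are not drawn from the TNC specifications.

```python
def evaluate_endpoint(posture: dict) -> str:
    """Return the network an endpoint should be placed on."""
    if posture.get("os") != "Windows 7":
        return "production"   # this example policy only constrains Windows 7
    compliant = (posture.get("self_encrypting_drive")
                 and posture.get("antivirus_up_to_date")
                 and posture.get("personal_firewall"))
    # Non-compliant endpoints get limited access rather than a flat
    # denial, so sensitive data never lands on a drive that cannot
    # protect it.
    return "production" if compliant else "limited"

print(evaluate_endpoint({"os": "Windows 7",
                         "self_encrypting_drive": True,
                         "antivirus_up_to_date": True,
                         "personal_firewall": True}))    # -> production
print(evaluate_endpoint({"os": "Windows 7",
                         "self_encrypting_drive": False,
                         "antivirus_up_to_date": True,
                         "personal_firewall": True}))    # -> limited
```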

Expanding Network Trust Evaluation
Behavior monitoring is another way to evaluate an endpoint. Many security-related sensor devices are already deployed in networks to monitor behavior: intrusion detection systems, leakage detection systems, endpoint profiling systems, and more. The TNC architecture lets users integrate those existing systems with each other and with the NAC solution by sharing information via a MAP.

Figure 3 shows an approach to checking behavior: security sensors in the network monitor endpoint activity, while a security policy defines which behavior is acceptable.

Figure 3: Behavior checking enables automated response to changes in the endpoint's activity.

Once an endpoint has connected to the network, even if it has passed authentication and compliance checks, it could behave in an unauthorized fashion. If the endpoint starts violating security policy by trying to spread a worm, that traffic is detected and stopped by an IPS sensor.

Even more important, that sensor publishes information to the MAP about the attack it stopped. The MAP notifies the policy server, which evaluates its security policy and instructs the enforcement point to move the endpoint to a remediation network until it can be addressed.
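
The chain of events might be sketched as follows. In a real deployment each hand-off runs over IF-MAP; here they are direct Python function calls, with invented names, so the sequence is easy to follow.

```python
def enforcement_point(endpoint, network):
    print(f"enforcement point: moving {endpoint} to the {network}")

def policy_server(endpoint, event):
    # Policy: an endpoint seen spreading a worm is quarantined until it
    # can be remediated, even though it passed its admission checks.
    if event == "worm-detected":
        enforcement_point(endpoint, "remediation network")

def map_notify(subscribers, endpoint, event):
    for subscriber in subscribers:   # the MAP pushes the update out
        subscriber(endpoint, event)

def ips_sensor(endpoint):
    print(f"IPS: blocked worm traffic from {endpoint}")
    map_notify([policy_server], endpoint, "worm-detected")  # publish

ips_sensor("endpoint:10.0.0.5")
```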

The end result is an entire network security system that is working together. Each part performs its function, and each piece is integrated with the whole using the open IF-MAP standard.

Extending Security to Mobile Devices
TNC standards have enabled NAC to evolve into a foundation technology for business requirements such as mobile security and Bring Your Own Device (BYOD). A common scenario in today's connected world occurs when a mobile user accesses the Internet and social networks on a personal device, such as a smartphone, which they also use to access their corporate network. If the smartphone inadvertently becomes infected with malware, corporate data on that device is now at risk. It's even worse when the user connects the compromised smartphone to the corporate network: an attacker who has taken control of the device can then access sensitive information.

This situation occurs when a company's security team lacks the tools to accommodate employees using their own consumer devices to improve productivity. Without the appropriate technology, the IT team cannot:

  • detect malware on the mobile device
  • protect the user from cloud-based threats
  • control access based on user identity, device, and location
  • coordinate security controls to protect sensitive information

This clearly needs a new approach!

Addressing the new requirements of BYOD and providing broad protection calls for flexible deployment models that can be tailored to individual environments and security contexts, along with coordination that keeps users protected against a dynamic threat landscape.

Security automation makes it possible to detect and address compromised mobile devices; protect the user from malicious sites and applications; restrict network and resource access based on user identity, device, and location; and correlate endpoint activity monitoring across the corporate network infrastructure.

Leveraging Standard Network Security Metadata
These capabilities are enabled by TNC's standardization of basic metadata for network security. Metadata is the information stored in a MAP, representing anything that is known about the network: traffic flows, scan results, user authentications, or other events. In the case above, metadata represents information about network components and applicable security policies. The MAP is a clearinghouse for metadata; MAP clients can publish metadata to it, search it for specific metadata, and/or subscribe to metadata about endpoints in the network.

These searches and subscriptions cover common things it might be helpful to know about an endpoint - the type of device, the identity of the user operating it, the role assigned to that user, the association between the endpoint's MAC address and IP address, the endpoint's location, and any events related to it.
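
As a rough illustration, an event published about an endpoint might look like the XML below. The element names approximate the TCG metadata specification, so treat the exact shapes and the namespace as assumptions rather than a normative example.

```python
import xml.etree.ElementTree as ET

# Namespace per the TCG IF-MAP Metadata for Network Security
# specification (version 2); the element layout below is simplified.
META_NS = "http://www.trustedcomputinggroup.org/2010/IFMAP-METADATA/2"

event = ET.Element(f"{{{META_NS}}}event")
ET.SubElement(event, "name").text = "worm-detected"
ET.SubElement(event, "discovered-time").text = "2013-05-01T09:00:00Z"
ET.SubElement(event, "discoverer-id").text = "ips-sensor-7"
ET.SubElement(event, "significance").text = "critical"
print(ET.tostring(event, encoding="unicode"))
```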

Extending Security Automation to Other Use Cases
While standard metadata is useful for out-of-the-box interoperability, much more information about an endpoint or a network is available. IF-MAP can be extended through the creation of vendor-specific metadata, similar to Vendor-Specific Attributes (VSAs) in RADIUS, enabling anyone to publish anything that can be expressed in XML!
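
Vendor-specific metadata is simply XML in a private namespace. The snippet below invents a hypothetical SCADA access policy - the namespace and element names are made up for illustration - of the kind put to use in the manufacturing example that follows.

```python
import xml.etree.ElementTree as ET

# Entirely hypothetical vendor namespace and schema.
VENDOR_NS = "http://example.com/ifmap-metadata/1"

policy = ET.Element(f"{{{VENDOR_NS}}}scada-policy")
ET.SubElement(policy, "allow-from").text = "hmi-3"
ET.SubElement(policy, "allow-to").text = "plc-17"
ET.SubElement(policy, "protocol").text = "modbus-tcp"
print(ET.tostring(policy, encoding="unicode"))
```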

Imagine a manufacturing line, where a physical process is controlled by a digital component called a Programmable Logic Controller (PLC). An operator display panel, the Human Machine Interface (HMI), is typically physically remote from the actual process that needs monitoring. As changes in the process occur, the operator display updates in real-time.

Many HMIs use a legacy protocol called Modbus to poll the PLC, retrieve these process variables, and display them. Originally designed to run over a serial connection, Modbus has since been ported to TCP. One of the problems with Modbus - and with many other protocols in this space - is that it has no security features at all: no authentication, no authorization. There is no way of knowing whether a requestor is authorized for the access it requests, or even who sent the request. If an endpoint (or intruder) can ping the PLC, it can issue commands to it!
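
To make that concrete, here is a minimal Modbus/TCP "read holding registers" request in Python. Notice that the frame has no field anywhere for credentials: any host that can reach TCP port 502 on the PLC can send it. (A sketch for illustration only - the address below is a documentation-range placeholder, and this should never be pointed at production equipment.)

```python
import socket
import struct

def read_holding_registers(host, unit_id=1, start=0, count=4):
    """Send one unauthenticated Modbus/TCP request and return the raw reply."""
    # PDU: function code 0x03 (read holding registers) + start address + count
    pdu = struct.pack(">BHH", 0x03, start, count)
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    frame = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id) + pdu
    with socket.create_connection((host, 502), timeout=2) as sock:
        sock.sendall(frame)
        return sock.recv(256)

# e.g. read_holding_registers("192.0.2.10")  # hypothetical PLC address
```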

Many control system components operate this way. Until now, they have been small islands of automation with very little interconnection to other systems. Running over a serial bus required physical serial connections - typically, the operator had to be present in front of the machine to affect it, so physical security was sufficient. And once in place, these systems are designed to stay in production for decades. Now they are becoming more and more interconnected with the enterprise network - and, by extension, with external networks - and they encounter the same types of security issues as enterprise systems.

Overlaying Security onto Industrial Control Systems
A single manufacturing line could have hundreds, or even thousands, of these PLCs. Replacing them is out of the question, as is retrofitting them to add on security. But what if a transparent security overlay were inserted to protect these legacy components?

Deployment and lifecycle management for such an overlay would be a huge challenge - unless there was a mechanism for provisioning certificates, communication details, and access control policies to the overlay components. That's exactly what one manufacturing company has done with IF-MAP, by using vendor-specific metadata for provisioning of certificate information and access control policy, as shown in Figure 4.

Figure 4: IF-MAP enabled security overlay protects industrial control system components.

The first step is to add the overlay protection. In this case, the enforcement points are customized components, designed for Supervisory Control And Data Acquisition (SCADA) networks, that can create an OpenHIP "virtual private LAN" on top of standard IP networks. This requires no changes to the underlying network, protects communications between SCADA devices, and is completely transparent to the protected SCADA devices.

A MAP and a provisioning client enable centralized deployment, provisioning, and lifecycle management for the myriad enforcement points. The provisioning client publishes metadata to the MAP to define the HMIs and PLCs and to specify security policies that allow them to talk to each other, but do not allow external access to them.

For example, when an HMI comes into the network and queries for a PLC, the HMI does an Address Resolution Protocol (ARP) lookup. The enforcement point receives that traffic, searches the MAP, and finds the access control policy determining whether this specific HMI can talk to that particular PLC. Enforcement points can be moved around the network without requiring manual reconfiguration or reprovisioning, since all of the provisioning is centralized via the MAP.
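
The enforcement point's check then reduces to a simple lookup against the policy metadata it retrieves from the MAP. A hedged sketch, using the invented policy structure from the earlier vendor-specific example:

```python
def allowed(map_policies, hmi, plc):
    """Return True if the MAP holds a policy allowing hmi -> plc."""
    return any(p["allow-from"] == hmi and p["allow-to"] == plc
               for p in map_policies)

policies = [{"allow-from": "hmi-3", "allow-to": "plc-17"}]
print(allowed(policies, "hmi-3", "plc-17"))     # True: traffic is forwarded
print(allowed(policies, "intruder", "plc-17"))  # False: traffic is dropped
```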

This is not just a neat thought experiment - it is actually in production deployment on hundreds of endpoints in critical manufacturing lines today!

The Future of Security Automation
We've barely scratched the surface of security automation. For one thing, it goes far beyond access control. Imagine...

  • A configuration management database (CMDB) receives notification of a new device on the network and scans the new endpoint, then updates its data store
  • An analysis engine observes some behavior on the network and requires more information about the associated endpoint, so it requests an investigation by another component such as an endpoint profiler or vulnerability scanner
  • Carrier routers redirect traffic through deep packet inspection based on suspicious user activity
  • A security administrator modifies an existing security policy, or adds a new policy, and various policy servers / sensors are notified, triggering a re-evaluation of the network's endpoints
  • An application server publishes a request for bandwidth for a particular user based on the service the user is accessing, and network infrastructure components change QoS settings for those traffic flows based on that request
  • An IF-MAP enabled OpenFlow switch controller makes packet-handling decisions based on information from other network components
  • An analysis system determines that there's an attack underway; in addition to triggering a response, it notifies security administrators of the attack taking place, populating a dashboard with information to create a "heat map" of the attack

All of these are examples of a common three-step process: sensing, analysis, and response. Security automation is enabled by the abstraction and coordination of these functions across multiple disparate components in the network.
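
Abstractly, every example above fits one skeleton. The sketch below is hypothetical - the names are invented - but each stage could be a different vendor's product, coordinated through the MAP.

```python
def automate(sensors, analyze, responders):
    for observation in sensors:            # sense
        verdict = analyze(observation)     # analyze
        if verdict:
            for respond in responders:     # respond
                respond(verdict)

# e.g., a flow sensor feeds an analysis rule that drives a responder
automate(sensors=[{"endpoint": "10.0.0.5", "flows": 9000}],
         analyze=lambda o: "quarantine" if o["flows"] > 1000 else None,
         responders=[lambda verdict: print(f"action: {verdict}")])
```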

Imagine the power gained by linking together information from all of the various infrastructure and security technologies in a network and using that information to make dynamic, intelligent, automated decisions. That's the true promise of security automation - and the realization of that promise is in its infancy.

More Stories By Lisa Lorenzin

Lisa Lorenzin is a member of the TNC Work Group at Trusted Computing Group and a Principal Solutions Architect at Juniper Networks.
