
Top Ten Firewall Management Metrics that Matter…and Why

Proper attention to these metrics will keep your firewalls optimized

If you look at some of the headline-making breaches of the past few years, they all occurred at large companies with highly dynamic and complex computing environments. Securing these environments is impossible without automation, which is why so much of the innovation in IT security in recent years has focused on automating security management.

Network security is one area where systems have become too complex to manage manually. Let's take firewalls as a case in point. A single firewall can have hundreds or thousands of rules, each made up of three components: source, destination and service. Next-generation firewalls add at least two additional fields - users and applications.
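
To make the discussion concrete, here is a minimal sketch of how a firewall rule might be modeled for audit purposes. The field names and structure are illustrative assumptions, not any vendor's actual schema; the checks sketched later in this article build on this model.

```python
from dataclasses import dataclass, field

@dataclass
class FirewallRule:
    """Illustrative, vendor-neutral model of a firewall rule (an assumption, not a real schema)."""
    number: int                       # position in the rule base (lower = evaluated first)
    source: set[str]                  # e.g. {"10.0.0.0/8"}
    destination: set[str]             # e.g. {"dmz-web-01"}
    service: set[str]                 # e.g. {"tcp/443"}; the literal "ANY" means all services
    action: str = "accept"            # "accept" or "drop"
    users: set[str] = field(default_factory=set)         # next-generation field
    applications: set[str] = field(default_factory=set)  # next-generation field
    comment: str = ""                 # rule documentation, if any
    logging: bool = True              # whether matches on this rule are logged
```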

Larger companies have hundreds of firewalls, usually from multiple vendors, in multiple geographies, managed by different people. That's just for starters - any number of factors can further multiply firewall complexity. In extreme (if rare) situations, a particularly bloated or neglected rule base - or even a simple typo made while configuring a rule - can, if left untended, leave a firewall introducing more risk than it prevents.

The only way to get your brain around the state of your firewalls is to audit them. While manual audits can be painful (not to mention error prone), they may be the best way to fast-track proper firewall management. Regardless of whether an audit is manual or automated, here are some important metrics to look for:

  1. Number of shadowed or redundant rules: Shadowed rules are rules that are masked, completely or partially, by other rules placed higher up in the rule base. Shadowed rules are very common because administrators add new firewall rules all the time but - whether out of fear of causing an outage or because no one knows why a rule was added in the first place - rarely delete them. A rule base filled with shadowed rules is not only inefficient, it puts a much greater strain on the firewall than is necessary, which can lead to performance issues. (A simplified detection sketch follows this list.)
  2. Number of unused rules: Unused rules will appear rarely, if at all, in firewall logs because they aren't being used for legitimate traffic. Unused rules can lead to serious exposures, such as allowing access to a server that is no longer in use and, as a result, exposing a service that is likely not properly patched. Looking for unused rules manually can be an extremely slow and tedious process, which is why admins hate to do it. However, automated solutions can make auditing for unused rules both manageable and simple; the log-driven usage sketch after this list shows the basic idea.
  3. Number of unused objects: An object is a component of a rule, and a single field of a rule (i.e., source, destination or service) can have multiple objects - such as a business unit having access to multiple destinations and/or services. Not only do unused objects appear much more frequently than unused rules, they are that much harder to find manually. Cleaning up unused objects can significantly tighten up a rule base and often lead to improved performance.
  4. Number of rules with permissive services: The most common examples are rules with "ANY" in the service field. In general, permissive services give more access to the destination than is needed by allowing additional services (which are often applications), which can lead to unauthorized use, allow the service to be a springboard to other parts of the network, or leave it exposed to malicious activity. (The static-check sketch after this list flags such rules.)
  5. Number of rules with risky services (telnet, ftp, snmp, pop, etc.) in general or between zones (i.e., between internal, DMZ and external networks, or between development and production networks): These services are deemed risky because they typically pass credentials in plain text, often carry sensitive information or enable access to sensitive systems. Any service that exposes sensitive data or allows shell access should be tightly monitored and controlled.
  6. Number of expired rules: Any rule that was created on a temporary basis and has clearly expired is just taking up space and does not need to remain in the rule base. If there is no documentation as to when or why the rule expired, check the firewall logs for its "hit count" (or usage, in firewall management-speak).
  7. Number of unauthorized changes: These are rules that are not associated with a specific change ticket. To ensure all requests are properly handled, every request should be managed via a ticketing system, from initial request to final implementation. If a change causes a problem and no one has any clue why the change was made, who made it, whether it was assessed for compliance and/or security risk, or who approved it, the result is a huge waste of resources - and a clear indicator that firewall management processes are inefficient. (The change-record sketch after this list covers this metric and the next.)
  8. Percentage of changes made outside of authorized change windows: Outages in IT are usually caused by the work IT does. To minimize business disruption, change windows are set during times when the fewest people are on the network - for example, firewall admins like to implement rule changes late at night, usually on weekends. That way, if there is an outage it is much easier to track which change caused it and how. If the majority of changes are made outside of normal change windows, it is usually because admins are spending too much time putting out fires, which indicates the firewall management processes are in need of an overhaul.
  9. Number of rules with no documentation: While the comments section of a firewall rule has text limits that inhibit proper documentation, every change ticket has a comments section, which can be used to provide a business justification for the rule. Some people use spreadsheets to capture this information, although using a spreadsheet as a central repository for change information leads to the same issues as other manual processes. Without proper documentation there is no way to know why a rule was implemented and whether it is still necessary.
  10. Number of rules with no logging: Proper firewall management is impossible without leveraging the data found in firewall logs. As in other areas of IT, there was long resistance to turning on logging because it would cause performance issues. However, firewall (and firewall management) technology has evolved to the point where logging no longer impacts performance; if your firewalls have performance issues, then something else in your environment is not optimal. Determining rule and object usage (numbers two and three in this list) is impossible without logging, as are proper forensics and troubleshooting. The advent of automated tools for firewall management makes scanning logs for relevant data a highly manageable endeavor.
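
As a rough illustration of how an automated audit might flag shadowed rules (metric one), the sketch below treats a rule as shadowed when an earlier rule matches at least everything it matches. This is a simplification under the FirewallRule model sketched earlier: real tools must also expand address and service objects and handle partial overlaps.

```python
def shadowed_rules(rules: list[FirewallRule]) -> list[FirewallRule]:
    """Return rules fully masked by an earlier rule.

    Simplified check: fields are treated as literal sets and compared by
    containment; rules are assumed to be ordered by position in the rule base.
    """
    flagged = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if (later.source <= earlier.source
                    and later.destination <= earlier.destination
                    and (later.service <= earlier.service or "ANY" in earlier.service)):
                flagged.append(later)
                break
    return flagged
```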
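Metrics two, three and six all reduce to usage counting against firewall logs. Assuming each parsed log entry yields the number of the rule that matched (a vendor-neutral simplification; real log formats vary widely), a first-pass usage audit might look like this. The same idea applied per object rather than per rule covers metric three.

```python
from collections import Counter

def usage_report(rules: list[FirewallRule], logged_rule_hits: list[int]) -> dict[str, list[int]]:
    """Split rules into used and unused by hit count.

    logged_rule_hits holds one rule number per logged connection -- an assumed
    stand-in for parsed firewall logs. Rules with zero hits are candidates for
    the unused-rule (metric 2) and expired-rule (metric 6) reviews.
    """
    hits = Counter(logged_rule_hits)
    return {
        "used": [r.number for r in rules if hits[r.number] > 0],
        "unused": [r.number for r in rules if hits[r.number] == 0],
    }
```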
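Metrics four, five, nine and ten can be checked statically, with no logs required. In the sketch below, the risky-service port list is an illustrative assumption to be adjusted to your environment; zone-crossing checks would need a network topology model and are omitted.

```python
# Illustrative mapping of risky services to ports; adjust to your environment.
RISKY_SERVICES = {"tcp/23", "tcp/21", "udp/161", "tcp/110"}  # telnet, ftp, snmp, pop3

def static_findings(rules: list[FirewallRule]) -> dict[str, list[int]]:
    """Purely static checks against the rule base (metrics 4, 5, 9 and 10)."""
    findings = {"permissive": [], "risky": [], "undocumented": [], "unlogged": []}
    for r in rules:
        if "ANY" in r.service:
            findings["permissive"].append(r.number)   # overly permissive service field
        if r.service & RISKY_SERVICES:
            findings["risky"].append(r.number)        # known plain-text/sensitive services
        if not r.comment.strip():
            findings["undocumented"].append(r.number) # no business justification recorded
        if not r.logging:
            findings["unlogged"].append(r.number)     # matches on this rule leave no trail
    return findings
```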
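Finally, metrics seven and eight come from the change records rather than the rule base itself. The ChangeRecord shape and the weekend-night change window below are hypothetical, offered only to show how mechanical these two checks become once changes flow through a ticketing system.

```python
from dataclasses import dataclass
from datetime import datetime, time

@dataclass
class ChangeRecord:
    """Assumed shape of a firewall change record."""
    rule_number: int
    made_at: datetime
    ticket_id: str = ""   # empty => no change ticket associated (metric 7)

def in_change_window(ts: datetime) -> bool:
    """Illustrative authorized window: weekend nights, 22:00-06:00."""
    weekend = ts.weekday() >= 5                               # Saturday or Sunday
    late = ts.time() >= time(22, 0) or ts.time() < time(6, 0)
    return weekend and late

def change_findings(changes: list[ChangeRecord]) -> dict[str, list[int]]:
    """Flag ticketless changes and changes made outside the window (metric 8)."""
    return {
        "unauthorized": [c.rule_number for c in changes if not c.ticket_id],
        "out_of_window": [c.rule_number for c in changes if not in_change_window(c.made_at)],
    }
```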

Plenty of lip service has been given to the credo that good IT is a function of people, process and technology. Given the fast pace of modern business, breakdowns are bound to happen, and unless a company decides not to use the Internet at all, it will always face a certain level of exposure. While there will never be a silver bullet for security, the good news is that proper attention to the metrics above will keep your firewalls optimized so that they remain one of your strongest and most reliable security assets rather than a potential liability.

More Stories By Michael Hamelin

Michael Hamelin is Chief Security Architect at Tufin Technologies, where he identifies and champions the security standards and processes for Tufin. Bringing more than 16 years of security domain expertise to Tufin, Hamelin has deep hands-on technical knowledge in security architecture, penetration testing, intrusion detection, and anomaly detection of rogue traffic. He has authored numerous courses in information security and worked as a consultant, security analyst, forensics lead, and security practice manager. He is also a featured security speaker around the world, widely regarded as a leading technical thinker in information security.

Hamelin previously held technical leadership positions at VeriSign, Cox Communications, and Resilience. Prior to joining Tufin he was the Principal Network and Security Architect for ChoicePoint, a LexisNexis Company. Hamelin received BS degrees in Chemistry and Physics from Norwich University, and did his graduate work at Texas A&M University.

