Creating a Self-Defending Network Using Open Source Software

You’ve got a firewall and a DMZ, you’re all set, right?

By: Steve McMaster

This past weekend, I presented the idea of a self-defending network at Ohio LinuxFest 2012. The accompanying slides are now available here. So let's talk about network security. You've got a firewall and a DMZ, you're all set, right? Not so fast, slugger. At Hurricane Labs we preach a theory called "defense in depth," and that means you need something to defend you when your firewall admins make a mistake, something to protect you when that layer fails, and so on. So what are these other layers? Well, one of them is a good IDS/IPS. An IDS/IPS listens to network traffic, generally the traffic inside your firewall, and either alerts on (IDS) or drops (IPS) traffic that matches rules defining "bad traffic". But what else can you do?
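To make the "rules defining bad traffic" concrete, here is an illustrative signature in Snort/Suricata rule syntax. It is a sketch, not one of the signatures we actually ran: the SID, message, and threshold values are placeholders you would tune for your own network.

```
alert tcp $EXTERNAL_NET any -> $HOME_NET 21 \
  (msg:"LOCAL FTP brute force attempt"; \
   flow:to_server,established; \
   content:"USER"; nocase; \
   detection_filter:track by_src, count 5, seconds 60; \
   classtype:attempted-recon; sid:1000001; rev:1;)
```

The `detection_filter` keyword keeps the rule quiet until a single source has sent five FTP login attempts within a minute, which is what separates a brute-force scan from a user fat-fingering a password.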

A coworker and I put a couple of pieces of open source software (OSSEC and Snort) together to respond to certain types of automated attacks we were seeing in our IDS (we use Snort in this case). Prior to this, an engineer would manually respond to alerts by logging into our firewall and blocking the IP address causing the alert. This process was tedious, repetitive, and time-consuming. By the time the firewall change was pushed, the scan (it was usually a scan) was generally over and the attacker had moved on. So we took advantage of a feature in OSSEC called "active response", which is used to react to events on the network. OSSEC was configured to watch for Snort alerts, and would run a script on our Internet routers (running Vyatta Core 6.3) to block the IP for 10 minutes. This response runs almost immediately. We hand-selected alerts that we had associated with simple scans, such as FTP brute force attacks, and set them up to block the addresses. But this wasn't enough for us.
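The OSSEC side of that wiring can be sketched roughly as follows. This is an illustrative `ossec.conf` fragment, not our production config; the script name, agent ID, and rule IDs are placeholders for whatever alerts you choose to act on.

```xml
<!-- Define a command OSSEC is allowed to run on an agent (here, a router) -->
<command>
  <name>vyatta-block</name>
  <executable>vyatta-block.sh</executable>
  <expect>srcip</expect>
  <timeout_allowed>yes</timeout_allowed>
</command>

<!-- Fire that command when selected rules match; undo it after 600 seconds -->
<active-response>
  <command>vyatta-block</command>
  <location>defined-agent</location>
  <agent_id>002</agent_id>
  <rules_id>100201,100202</rules_id>
  <timeout>600</timeout>
</active-response>
```

Because `timeout_allowed` is set, OSSEC calls the same script again with a delete argument when the timeout expires, which is what gives you the automatic 10-minute unblock.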

We started to ponder what sorts of scans were happening that our firewall was dropping. For example, SIP or SSH scans that never pass through the firewall were at best sucking up bandwidth, and at worst could cause problems if our firewall rules ever let something slip. Granted, those sorts of slips are uncommon, but mistakes are always possible and it's best to plan for every type of failure.

Coincidentally, we also wanted to test a new IDS on the market called Suricata. Suricata was designed from the ground up to be an "open source next generation intrusion detection and prevention engine", and we wanted to put it through its paces (which is a different article entirely). So we configured a server running Suricata, but this one watched a SPAN session mirroring traffic outside the firewall. What we found in preliminary testing was that we saw a few types of scans on a regular basis: NMAP ping scans, SSH brute force scans, and SIP scans. So, much as we did with FTP brute forcing (which for multiple reasons is better detected on the sensor inside the network), we configured OSSEC to watch logs from Suricata (which was relatively simple, as it logs in a format compatible with Snort alerts anyway). Poof! A network that defends itself.
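Because Suricata's fast.log is Snort-compatible, OSSEC's stock Snort decoder parses it, and all you need are local rules keyed to the specific alerts you want to act on. Here's a hedged sketch of such a rule; the local rule ID and the matched signature name are placeholders, and you'd swap in the exact alert strings from your own sensor.

```xml
<group name="local,suricata,">
  <!-- 20101 is OSSEC's generic "IDS event" rule for Snort-format alerts -->
  <rule id="100201" level="10">
    <if_sid>20101</if_sid>
    <match>Potential SSH Scan</match>
    <description>Suricata: SSH scan seen outside the firewall.</description>
  </rule>
</group>
```

Setting the level high enough to clear your active-response threshold is what turns a matched alert into a block on the router.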

What we’ve done is similar in premise to the Team Cymru Darknet Project. According to their website, a darknet is “a portion of routed, allocated IP space in which no active services or servers reside.” It is then assumed that any packets entering the network are unsolicited and more than likely undesirable. This can be used to reliably build a list of known malicious hosts. Unlike a true darknet, we’re using IP space that hosts active services, however we’ve tuned our monitoring to look specifically for traffic we know, by design, not to expect. This allows us to gain many of the benefits of a darknet without the resource investment required.

The advantage of this method is that we can run the "active response" on multiple targets. For example, we run two Internet-facing routers on our colocated data center network, and another on the edge of our office network. By detecting scans on either network, the other network is automatically protected as well. The same approach could feed several other mechanisms, too: it could be used to build a dynamic BGP feed, or a DNS blacklist, of hosts known to be scanning the Internet maliciously.
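As a sketch of the DNS blacklist idea, here is a minimal Python example that turns a list of blocked IPv4 addresses into the reversed-octet records a DNSBL zone serves. The zone name is hypothetical, and a real deployment would regenerate and reload the zone as OSSEC adds and expires blocks.

```python
def dnsbl_records(ips, zone="dnsbl.example.com"):
    """Turn blocked IPv4 addresses into DNSBL-style zone records.

    A DNSBL lists 192.0.2.1 as 1.2.0.192.<zone> resolving to 127.0.0.2,
    so clients can check an address with a single DNS lookup.
    """
    records = []
    for ip in ips:
        # Reverse the octet order, DNSBL convention
        reversed_ip = ".".join(reversed(ip.split(".")))
        records.append(f"{reversed_ip}.{zone}. IN A 127.0.0.2")
    return records


if __name__ == "__main__":
    for line in dnsbl_records(["192.0.2.1", "198.51.100.7"]):
        print(line)
```

Mail servers and firewalls on any of your networks could then consult the zone, extending a block detected at one edge to every site that queries it.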

I’ve attached a few snippets to this article to help get you started on the path to building a self-defending network. These include configuration examples and rule signatures for OSSEC, Snort and Suricata.


