
Beyond Intrusion Detection: Eight Best Practices for Cloud SIEM Deployment

Improving visibility across the enterprise

For all the right reasons, your company has been thinking about deploying SIEM: to create an alert system for when those with less-than-good intentions come knocking; to remediate potential network threats; to comply with federal, state or industry regulations; and to identify the risks and vulnerabilities throughout the enterprise IT infrastructure and architecture. If you run even a modest organization (SMB to Fortune 1000) with any online identity, SIEM should be the cornerstone of your asset protection strategy.

First and foremost, SIEM (and to a certain extent log management) is about visibility: who is doing what, and when, on your network. It is as much about understanding the holistic landscape of your infrastructure as it is about protecting proprietary assets. Without it, you are coaching the Big Game without knowing who the opponent is, or, for that matter, whether you even have a starting left guard.

But fun metaphors aside, SIEM is a critical enterprise tool. Like any enterprise solution, it requires forethought, vigilance and, most importantly, a good game plan. Deployed properly, it can change your IT department from infrastructure-based to information-centric, and as a result you get to make better decisions, faster.

As with every technology, there are best practices and pitfalls. In past articles I have spoken at length about the advantages of deploying and managing SIEM in the cloud; many of these center on the affordability, manageability, control and capability of the solution. For many, security from the cloud is still an emerging concept, but those who have already made the leap are reaping significant benefits. I want to move beyond the arguments for "going cloud" when deciding on security solutions, though. Today I want to focus on what happens next: how do you start collecting that ROI once a cloud-based security-as-a-service has been chosen?

The reason most enterprise deployments fail (on premise or cloud) can typically be traced to two causes: (1) lack of buy-in from the executive level or employee resistance to change, and, more often, (2) lack of vision or process. Too many companies jump in and apply a solution because they heard it was important, or were sold a Porsche when all they needed was a family SUV. Of course, one of the benefits of cloud-based security is the ability to "buy" the SUV and instantly scale up to that Porsche if and when the business need requires it (without touching CapEx budgets!). With that in mind, here are eight best practices you should implement when moving forward with your cloud-based security initiative:

Best Practice #1: Identify your goals and match your scope to them. There are five questions you need to ask before moving forward with any deployment. (1) WHY do you need SIEM (compliance? user and/or partner expansion? BYOD? breach detection?); (2) HOW will SIEM be deployed to properly address these issues (what processes, functionality and capabilities are needed, and which need to be outsourced, replaced or improved); (3) WHAT needs to be collected, analyzed and reported; (4) HOW BIG does the deployment need to scale to accurately and cost-effectively meet your specific business need; and (5) WHERE is the information situated that should, or must, be monitored?
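The answers to those five questions can be written down before anything is switched on, even as something as plain as a scoping document. The sketch below is one hypothetical way to capture it; every name and value is an illustrative placeholder, not tied to any particular SIEM product:

```python
# Hypothetical scoping sketch: one entry per question, filled in
# before deployment. All values are illustrative placeholders.
deployment_scope = {
    "why":   ["PCI compliance", "breach detection"],
    "how":   {"outsourced": ["24x7 monitoring"], "in_house": ["response workflow"]},
    "what":  ["firewall logs", "auth logs", "VPN logs"],
    "scale": {"events_per_day": 2_000_000, "retention_days": 365},
    "where": ["on-premise servers", "cloud-hosted partner portal"],
}

# A simple readiness gate: no deployment until every question has an answer.
ready = all(deployment_scope.values())
print(ready)  # True, since every question above has a non-empty answer
```

The point is not the data structure; it is that an empty entry is visible, and visible gaps get filled before the contract is signed rather than after.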

Best Practice #2: Incremental usage. The quickest route to success is taking baby steps: prove the concept, then expand the scope. For some this might mean starting with log management and adding SIEM once you understand the requirements, commitment and volume. Because security-as-a-service is so flexible and can ramp up or down instantly, an easy entry point might be to start with only those elements that fulfill compliance. The project might seem overwhelming, but if you take it in bite-sized phases, the victories come easier and the ROI is justified. With a cloud security deployment it is easy to turn on the fire hose when only a garden hose is needed, but the beauty of a cloud deployment is the ease and flexibility of scaling. Another example of incremental usage would be to apply SIEM against specific use-case scenarios, or to migrate just a division, a department or a function (as opposed to the entire enterprise).

Best Practice #3: Determine what IS and ISN'T a threat to your network. Returning to the fire hose metaphor, when deploying a SIEM initiative it is very easy to get lost in a sea of data; it can be like trying to drink from that proverbial fire hose. The trick is to recognize what constitutes a true risk and eliminate false positives. This requires some internal analysis to create a series of rules that sift out the white noise and differentiate "normal" traffic from suspicious activity. For instance, if there is an attempted access to your partner portal from Belarus, is that normal? Do you even have a partner in Minsk? But even a simple filter isn't quite enough. Risk is three-dimensional, and it can hide in plain sight. That's why you continue to filter based on time of day, IP address, server, attempts, network availability and a myriad of other forensic qualifiers before an alert is grave enough to require immediate attention.
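One way to picture this kind of multi-qualifier filtering is a simple scoring rule that combines geography, time of day, failed attempts and target asset before deciding an event deserves an alert. This is only a sketch; the thresholds, weights and field names are assumptions for illustration, not taken from any real SIEM product:

```python
# Illustrative sketch: score a login event across several forensic
# qualifiers instead of firing on any single filter. All thresholds
# and weights are hypothetical.

def risk_score(event, allowed_countries, business_hours=(8, 18)):
    """Combine several qualifiers into one score; alert only above a threshold."""
    score = 0
    if event["country"] not in allowed_countries:
        score += 3                      # unexpected geography
    if not business_hours[0] <= event["hour"] < business_hours[1]:
        score += 1                      # off-hours access
    if event["failed_attempts"] >= 5:
        score += 3                      # brute-force pattern
    if event["target"] == "partner-portal":
        score += 1                      # sensitive asset involved
    return score

event = {"country": "BY", "hour": 3, "failed_attempts": 7, "target": "partner-portal"}
print(risk_score(event, allowed_countries={"US", "CA"}))  # prints 8: every qualifier fired
```

A single odd country might be noise; a high combined score across several dimensions is much less likely to be a false positive.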

Best Practice #4: Map response plans. Now that an incident has your attention, what do you do? Do you launch an account investigation, suspend the user, deactivate a password, deny service to the offending IP, or apply any number of remediations based on the severity, vulnerability and identity of the transgressor? This goes back to workflow and process: who is going to do what, to whom, and how? SIEM is a process-reliant technology. You simply can't flip a switch and say you've put up a magic forcefield around your network. Your response plan is your blueprint for closing the vulnerability gaps and ensuring compliance.
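A mapped response plan can start as nothing more than a lookup table tying each severity level to a pre-agreed remediation and an owner. The sketch below is purely illustrative; the severity names, actions and owners are hypothetical placeholders, not any product's API:

```python
# Hypothetical response plan: each severity maps to a documented
# remediation step and a responsible role, agreed on in advance.
RESPONSE_PLAN = {
    "low":      {"action": "log_for_review",      "owner": "soc-analyst"},
    "medium":   {"action": "suspend_user",        "owner": "soc-analyst"},
    "high":     {"action": "deactivate_password", "owner": "security-lead"},
    "critical": {"action": "block_source_ip",     "owner": "incident-commander"},
}

def respond(alert):
    """Look up the pre-agreed remediation for an alert's severity."""
    step = RESPONSE_PLAN[alert["severity"]]
    return f"{step['owner']}: {step['action']} for {alert['source_ip']}"

print(respond({"severity": "high", "source_ip": "203.0.113.9"}))
# prints: security-lead: deactivate_password for 203.0.113.9
```

The value is in the agreement, not the code: when the alert fires at 3 a.m., nobody is improvising who does what to whom.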

Best Practice #5: Correlate data from multiple sources. The practice of situational awareness is what adds the muscle to a SIEM initiative. As with #4, it isn't enough to plug in a solution and press "go." Situational awareness takes into account a multitude of different endpoints, servers, data streams, assets and inventories, events and flows from across the enterprise, and puts that information into context. Context is the most important part of risk assessment. For example, a shark is a threat; but if that shark is 10 miles away, it is not a direct or immediate threat. That doesn't mean you're not vulnerable if the shark gets hungry. Having an engine that not only creates accurate perspective but analyzes, understands and acts upon behaviors is key. And to do that, a centralized SIEM engine needs data from more than a single source or single server.
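To make the single-source limitation concrete, the sketch below treats an event from one log as mere context, and only raises an incident when the same source IP appears in both the firewall and the authentication logs within a short window. The field names, timestamps and events are invented for the example:

```python
# Illustrative cross-source correlation: neither log alone is
# alarming, but the same IP in both logs within minutes is.
from datetime import datetime, timedelta

def correlate(firewall_events, auth_events, window=timedelta(minutes=5)):
    """Pair firewall and auth events that share an IP inside the time window."""
    incidents = []
    for fw in firewall_events:
        for auth in auth_events:
            same_ip = fw["ip"] == auth["ip"]
            close_in_time = abs(fw["time"] - auth["time"]) <= window
            if same_ip and close_in_time:
                incidents.append({"ip": fw["ip"],
                                  "evidence": [fw["event"], auth["event"]]})
    return incidents

fw = [{"ip": "203.0.113.9", "time": datetime(2012, 9, 1, 3, 0), "event": "port-scan"}]
auth = [{"ip": "203.0.113.9", "time": datetime(2012, 9, 1, 3, 2), "event": "failed-login-x10"}]
print(correlate(fw, auth))  # one correlated incident with two pieces of evidence
```

A port scan is the distant shark; a port scan followed two minutes later by a burst of failed logins from the same IP is the shark circling.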

Best Practice #6: Real-time monitoring, 24/7/365. For many companies this is a challenge, but hackers don't sleep. And although a great deal of SIEM and log management is automated, it still requires the vigilance of round-the-clock monitoring. Trees might be falling in the forest, but if no one is there to see them, breaches occur and networks are compromised. I've witnessed plenty of IT departments that don't have the resources. Again, this is a considerable advantage that security-as-a-service provides, and it allows you to sleep a little better at night. Knowing that this one crucial element of your security is professionally addressed without additional staff or budget makes the cloud that much more valuable.

Best Practice #7: Remain calm! One thing we've noticed is that soon after a SIEM/log management deployment, there seem to be alerts and issues you never dreamed of. Things are bound to look worse before they get better, and it can seem overwhelming; it's a bit like opening a Pandora's box of malware and botnets. For the most part, this is because you now know what you didn't know before. In some respects it is like looking at your hotel room comforter under a black light and a microscope. But once you realize what you're looking at, and that much of the remediation can be automated, soon (with a bit of fine tuning and normalizing of correlation feeds) you will see that anomalous events lessen and that alert prioritization allows you to make timely and intelligent decisions.
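That fine-tuning step can begin as simply as deduplicating the initial alert flood and ranking what remains, so the Pandora's box becomes a short, prioritized work queue. A minimal sketch, with invented alert names:

```python
# Illustrative triage sketch: collapse duplicate alerts and surface
# the most frequent ones first. Alert names are hypothetical.
from collections import Counter

def triage(alerts, top=3):
    """Deduplicate raw alerts and rank them by how often they fire."""
    counts = Counter(alerts)
    return [name for name, _ in counts.most_common(top)]

raw = ["botnet-beacon", "port-scan", "botnet-beacon",
       "botnet-beacon", "port-scan", "stale-cert"]
print(triage(raw))  # ['botnet-beacon', 'port-scan', 'stale-cert']
```

Six raw alerts become three ranked items; the noisiest signal rises to the top, and the team works a list instead of drinking from the fire hose.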

Best Practice #8: Evolution. Security is a moving target. You need to revisit your processes and workflows every few months to make sure you are up to date with compliance requirements, new users and access points, and expanded or redefined workflows. This is more than recognizing the latest virus threats: new users access your network with regularity, new layers of regulations are added, and new applications require monitoring. All in all, by feeding your cloud-based SIEM and log management solutions the new and necessary data, your enterprise will be more secure than it was yesterday.

More Stories By Kevin Nikkhoo

With more than 32 years of experience in information technology and an extensive and successful entrepreneurial background, Kevin Nikkhoo is the CEO of the dynamic security-as-a-service startup CloudAccess. CloudAccess is at the forefront of the latest evolution of IT asset protection: the cloud.

Kevin holds a Bachelor of Science in Computer Engineering from McGill University, a Master's in Computer Engineering from California State University, Los Angeles, and an MBA from the University of Southern California with an emphasis in entrepreneurial studies.
