Addressing the Root Cause – A Proactive Approach to Securing Desktops

The computers on your network are protected from malware, right? If you operate an environment based largely on Windows PCs, you likely have some kind of antivirus installed and centrally managed. If you have purchased a more complete desktop protection suite, you probably even have a host-based IDS/IPS protecting your machines from incoming malicious TCP scans, or from outbound connections to known malicious sites (a list that occasionally includes google.com, thanks to false positives). Operating system firewall activated? Yep! AV signatures current? Check! Global threat intelligence updated? Uh, yeah... sure. Then you should be covered against threats targeting your organization, right? Most likely not, and at times these tools actually mask intrusions by providing a false sense of security and protection.

The Trouble with Reactionary Behavior
The problem with these tools, all of them, is that they are purely reactionary in nature. A reactionary protection tool, at any level, only activates after an event has already occurred on your host. That means when you get an antivirus alert on your computer, the malware is ALREADY present on the system. Yes, the AV may have stopped it, deleted it or quarantined it (all of which are good), but it has only done so because it had an existing signature in its database, or because the malware attempted to operate in a suspicious manner and tripped the AV's heuristic detection. What about when brand-new malware, a 0-day exploit, or sophisticated targeted malware executes on your host?

Do you imagine your AV will detect and mitigate it? I would suggest that your AV will be none the wiser to this yet-to-be-detected threat, and only once a sample has been submitted to an AV vendor for analysis will you be provided with an updated signature. Well, surely if my AV missed it, one of the other layers of protection should stop it, right? Possibly: if the malware makes outbound connections that aren't considered "normal" by your OS firewall or HIDS/HIPS software, it could be detected. But if the malware uses standard outbound connections, port 80 or, more likely, port 443, its traffic appears "normal" to every other layer of your host-based defenses.

These tools all require some known characteristics of a particular threat in order to detect its presence and mitigate it. Those characteristics are obtained by analyzing previously reported and discovered threats of a similar nature, and they are used to develop the signatures or heuristic models that detect malware on a host. If a threat has not yet been submitted for analysis and its callback domains have not been reported as malicious, it may be a while before it is "discovered" and signatures are made available. Until then, your computer, its files, all of your activities, and the other computers on your network are at the mercy of the attacker, unabated.

Being Proactive Is Essentially Free
This is the part that is really frustrating for me as an analyst, and as an advocate for root-cause solutions. Reactionary defenses cost an unreal amount of money for consumers, businesses, state and local governments, federal agencies and the military. You would think that with all of this time and money spent on products billed as "protecting" you from cyber threats and intrusions, your environment, whether an enterprise or a single computer, would be better protected. It is not. In fact, many studies show computer-related intrusions are on the rise. Nation-state threats, advanced persistent threats (APTs) and even less-skilled hackers continue to improve their sophistication as tools get cheaper and information is freely exchanged. Why, then, do I say proactive defenses are essentially free? And if that is the case, why are they not used more frequently? Proactive defense measures cost nothing beyond the time and effort of securing the root problems within your network. For this particular blog post, I am focused on host-based proactive defensive measures.

Denying Execution at the Directory Level
The "how" is actually quite simple to explain, and it is not a new protection technique at all; it is just not as widely used outside of *nix-based systems. An operating system is, at bottom, a platform for applications to run on, sometimes graphical, sometimes a simple command line. Applications are typically stored in a common location within the operating system, which allows for dynamic linking and keeps the directory structure simple. But not all applications need to link against a dynamic library; some contain everything they need to run on their own, so they can be placed anywhere within the OS and they will execute.

This is extremely convenient when a developer wants to provide software that doesn't need a formal "install" and can be easily moved around. Therein lies the issue with these "self-contained" applications: they can execute from anywhere on the host, without restriction. For a demonstration, copy "calc.exe" from the "system32" folder on your Windows PC to your desktop. It will execute exactly as it does under "system32" because it is a completely self-contained binary. Almost all malware is designed the same way, and typically executes from a "temp" location or the root of the currently logged-in user's directory. The execution of malware needs to be stopped from occurring in the first place. That way, regardless of your current AV signatures or HIDS/HIPS capabilities, the malware cannot run. If the malware cannot run, the threat is effectively mitigated before it gains any foothold.
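
The calc.exe demonstration has a direct *nix analogue. A minimal sketch (assuming a standard `ls` on the PATH; `ls` is dynamically linked rather than fully self-contained, but the loader resolves its libraries regardless of where the file sits, so the point is the same):

```shell
# Copy a binary out of its usual directory and watch it run anyway:
workdir=$(mktemp -d)
cp "$(command -v ls)" "$workdir/relocated-ls"

# Executes from the temp directory just as it would from /usr/bin.
"$workdir/relocated-ls" "$workdir" >/dev/null && echo "executed from $workdir"

rm -rf "$workdir"
```

Nothing in a default configuration objects to the relocated copy, which is exactly the behavior malware relies on.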

So how on earth do you stop malware from executing from these locations? Do you need some kind of agent-based solution monitoring those directories? The approach is simple: deny ALL execution of programs outside of approved directories (e.g., "Program Files" and "System32"), and require every application the host needs, PuTTY for instance, to be placed within one of those directories. In a Windows-based environment, this lockdown can be implemented through both Group Policy (GPO) and Local Policy.
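
To make the allow-list idea concrete, here is an illustrative userland sketch of the decision the policy engine makes (the `safe_exec` wrapper and `ALLOWED_DIRS` list are hypothetical names of my own; real enforcement comes from OS policy such as GPO, not a shell wrapper an attacker could simply bypass):

```shell
# Directories from which execution is permitted.
ALLOWED_DIRS="/bin /usr/bin"

# Run a program only if it lives in an approved directory.
safe_exec() {
    prog=$(command -v "$1") || { echo "not found: $1" >&2; return 127; }
    dir=$(dirname "$prog")
    for ok in $ALLOWED_DIRS; do
        if [ "$dir" = "$ok" ]; then
            shift
            "$prog" "$@"          # approved location: execute
            return $?
        fi
    done
    echo "denied: $prog is outside the approved directories" >&2
    return 126                    # unapproved location: refuse
}

# Demo: ls from its system directory is allowed...
safe_exec ls / >/dev/null && echo "allowed"

# ...but the same binary copied to a temp directory is denied.
stash=$(mktemp -d)
cp "$(command -v ls)" "$stash/ls"
safe_exec "$stash/ls" / >/dev/null 2>&1 || echo "denied"
rm -rf "$stash"
```

The logic is deliberately trivial: the check is on *where* the program lives, not *what* it is, which is why no signatures or updates are involved.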

By using the existing Windows feature called Software Restriction Policies (which has been around since Windows XP, by the way), you can define the directories from which applications are allowed to execute. The same technique can be employed on OS X: simply remove the execute privilege from locations within the OS that you would like to protect. In fact, I would venture to say it is easiest to implement on any *nix-based system, where sensible execute permissions are often already the default.
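
On a *nix host, the execute-privilege version of this is a one-liner. A minimal sketch with a throwaway script (assumes a POSIX shell and `chmod`):

```shell
d=$(mktemp -d)
printf '#!/bin/sh\necho ran\n' > "$d/payload.sh"

chmod +x "$d/payload.sh"
"$d/payload.sh"                     # runs: prints "ran"

chmod a-x "$d/payload.sh"           # revoke the execute bit
"$d/payload.sh" 2>/dev/null || echo "blocked: execute bit removed"

rm -rf "$d"
```

Note the limits: an attacker can still run `sh payload.sh` to sidestep the execute bit, and mounting a whole directory with `mount -o noexec` (which requires root) blocks direct execution of binaries placed there, but interpreters remain a gap. This is a layer, not a wall.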

No Silver Bullet
No solution is 100% effective, and this is no exception; there are a number of ways to get past this protection. That said, it adds a layer to your defense and will stop the majority of execution-based attacks. If your software is properly patched (0-days excluded) and user privileges are locked down with separate dedicated accounts, directory protection further raises the difficulty attackers face in gaining a presence on your network. No single solution will solve all of your problems, no matter how hard a vendor sales engineer tries to sell you one. Holistic, full-spectrum defenses are the future, not "plug & play" protection hardware or software that requires updates, patching, signatures and "threat intelligence". The other extremely important layer of protection is the infosec professionals supporting you. Spend the money on good, talented and well-rounded security professionals who understand the cyber threat landscape and the ways in which they can help better protect your organization.

To learn more about how your network and its assets can be better protected, please check out CyberSquared for solutions to root-cause issues.

More Stories By Cory Marchand

Cory Marchand is a trusted subject matter expert on cyber security threats, network- and host-based assessment, and computer forensics. Mr. Marchand has supported numerous customers over his 10+ years in the field of computer security, including state, federal and military government as well as the private sector. Mr. Marchand holds several industry certifications, including CISSP, EnCE, GSEC, GCIA, GCIH, GREM, GSNA and CEH.
