Addressing the Root Cause – A Proactive Approach to Securing Desktops

The trouble with reactionary behavior

The computers on your network are protected from malware, right? If you are operating an environment based largely on Windows-based PCs, you likely have some kind of antivirus installed and centrally managed. If you have purchased a more complete desktop protection suite, you probably even have a host-based IDS/IPS protecting your machine from incoming malicious TCP scans, or from outbound connections to known malicious sites (like google.com, occasionally). Operating system firewall activated? Yep! AV signatures current? Check! Global threat intelligence updated? Uh, yeah... sure. Then you should be covered against threats targeting your organization, right? Most likely not, and at times these tools actually mask intrusions because they provide a false sense of security and protection.

The Trouble with Reactionary Behavior
The problem with these tools, all of them, is that they are purely reactionary in nature. A reactionary protection tool, at any level, only activates after an event has already occurred on your host. That means when you get an antivirus alert on your computer, the malware is ALREADY present on the system. Yes, the AV may have stopped it, deleted it, or quarantined it (all of which are good), but it has only done so because it either has an existing signature in its database or the malware attempted to operate in a suspicious manner, triggering the AV's heuristic detection. What about when brand-new malware, 0-day exploits, or sophisticated targeted malware executes on your host?

Do you imagine your AV will detect and mitigate it? I would suggest that your AV will be none the wiser to the presence of this yet-to-be-detected threat, and only once a sample has been submitted to an AV vendor for analysis will you be provided with an updated signature. Well, surely if my AV missed it, one of the other layers of protection should stop it, right? It is possible: if the malware uses outbound connections that aren't considered "normal" by your OS's firewall or HIDS/HIPS software, it could be detected. But if the malware uses standard outbound connections, port 80 or more likely port 443, its traffic appears "normal" to the other layers of your host-based defenses.

These tools all require some known characteristics of a particular threat in order to detect its presence and mitigate it. Those characteristics are obtained through analysis of reported and discovered threats of a similar nature, which are then used to develop signatures or heuristic models to detect the presence of malware on a host. If a threat has not yet been submitted for analysis and its callback domains have not been reported as malicious, it may be a while before it is "discovered" and signatures are made available. Until that time, your computer, its files, all of your activities, and the other computers on your network are at the mercy of the attacker, unabated.
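
To make the limitation concrete, here is a toy sketch in Python, purely illustrative and not any vendor's actual engine, of what signature-based detection boils down to: a lookup against hashes of samples that have already been analyzed. The hash value shown is a placeholder.

import hashlib

# Placeholder database of hashes for samples an AV vendor has already analyzed.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder value
}

def is_known_malware(path):
    # Hash the file and check it against the signature database.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256

# A brand-new or repacked sample hashes to a value not yet in the database,
# so this check returns False even though the file is malicious.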

Being Proactive Is Essentially Free
This is the part that is really frustrating for me as an analyst, and also as an advocate for root-cause solutions. Reactionary defenses cost an unreal amount of money for consumers, businesses, state and local governments, the federal government, and the military. You would think that with all of this time and money spent on the various products billed as "protecting" you from cyber threats and intrusions, your environment would be better protected, whether it is an enterprise or a single computer. This is not the case. In fact, many studies show computer-related intrusions are on the rise. Nation-state threats, advanced persistent threats (APTs), and even less-skilled hackers continue to improve their sophistication as tools get cheaper and information is freely exchanged. Why is it, then, that I say proactive defenses are essentially free? And if that is the case, why are they not used more frequently? Proactive defense measures are essentially free, minus the time and effort spent securing the root problems within your network. For this particular blog post, I am focused on host-based proactive defensive measures.

Denying Execution at the Directory Level
The "how" is actually quite simple to explain, and in fact it is not a new protection technique at all, its just not as widely used outside of *nix based systems. All that an operating system provides is a platform for applications to run on, sometimes graphical based, sometimes a simple command line. The applications are typically stored in a common location within the operating system, allowing for dynamic linking as well as simplifying the directory structure. Not all applications require the need for linking to a dynamic library as they contain all of the requirements to run on their own, so they can easily be placed anywhere within the OS and they will execute.

This is extremely convenient when a developer wants to provide software that doesn't need to officially "install" and can be easily moved around. Therein lies the issue with these "self-contained" applications: they can execute from anywhere on the host, without restriction. For a demonstration of this, copy "calc.exe" from the "system32" folder on your Windows PC to your "desktop". The program "calc.exe" will execute just the same as if it were under "system32", as it is a completely self-contained binary. Almost all malware is designed the same way, and typically executes from a "temp" location or the root of the currently logged-in user's directory. The execution of malware needs to be stopped from occurring in the first place. That way, regardless of your current AV signatures or HIDS/HIPS capabilities, the malware cannot run. If the malware is unable to run, the threat is effectively mitigated before it can gain any foothold.
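
If you would rather script the demonstration than drag files around, here is a minimal Python sketch, assuming a standard Windows install (paths may differ on your system):

import shutil
import subprocess
from pathlib import Path

src = Path(r"C:\Windows\System32\calc.exe")
dst = Path.home() / "Desktop" / "calc.exe"

shutil.copy(src, dst)        # copy the self-contained binary to the desktop
subprocess.run([str(dst)])   # it launches exactly as it would from System32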

So how on earth do you stop malware from executing from within these locations, and do I need some kind of "agent"-based solution to monitor those particular directories? The approach is simple: deny ALL execution of programs outside of a small set of approved directories (e.g., "Program Files" and "System32"), and require all necessary applications on the host, PuTTY for instance, to be placed within one of those approved directories. If you are running a Windows-based environment, locking down execution outside of approved directories can be implemented through both Group Policy (GPO) and Local Policy.
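
The decision itself is nothing more than a path check. The Python sketch below is illustrative only; the approved paths are assumptions, and real enforcement comes from Group Policy or Local Policy, not from a script.

from pathlib import PureWindowsPath

# Hypothetical allowlist; adjust to the directories you actually approve.
APPROVED_DIRS = [
    PureWindowsPath(r"C:\Program Files"),
    PureWindowsPath(r"C:\Windows\System32"),
]

def execution_allowed(exe_path):
    # Allow execution only if the binary lives under an approved directory.
    p = PureWindowsPath(exe_path)
    return any(approved in p.parents for approved in APPROVED_DIRS)

print(execution_allowed(r"C:\Windows\System32\calc.exe"))       # True
print(execution_allowed(r"C:\Users\someuser\Desktop\calc.exe")) # False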

By expanding on an existing Windows feature called Software Restriction Policies (which has been around since 2002, BTW), you can define the directories from which applications are allowed to execute. The same technique can be employed on OS X systems as well: simply remove the execute privilege from locations within the OS that you would like to protect. In fact, I would venture to say it is easiest to implement on any *nix-based system (if it's not already in place, as is the case on most Unix/Linux flavors).
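
On the *nix side, the same idea is just a permissions change. Here is a minimal sketch, assuming the directories listed are the ones you want to lock down (they are hypothetical; pick your own, and run with appropriate privileges):

import os
import stat

PROTECTED_DIRS = ["/tmp", "/var/tmp"]   # hypothetical; choose your own

for directory in PROTECTED_DIRS:
    for entry in os.scandir(directory):
        if entry.is_file(follow_symlinks=False):
            mode = os.stat(entry.path).st_mode
            # Strip the user/group/other execute bits so dropped binaries cannot run.
            os.chmod(entry.path, mode & ~(stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH))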

No Silver Bullet
No solution is 100% effective, and this is no exception; there are a number of ways to get past this protection. Having said that, it adds a layer to your defense and will stop the majority of execution-based attacks. If your software is properly patched (0-days not included) and you have user privileges locked down with separate dedicated accounts, directory protection further steps up the difficulty attackers have in gaining a presence on your network. No single solution will solve all of your problems, no matter how hard a vendor sales engineer tries to sell you one. Holistic, full-spectrum defenses are the future, not "plug & play" protection hardware or software that requires updates, patching, signatures, and "threat intelligence". The other extremely important layer of protection is the infosec professionals you have supporting you. Spend the money on good, talented, well-rounded security professionals who understand the cyber threat landscape and the ways in which they can help better protect your organization.

To learn more about how your network and its assets can be better protected, please check out CyberSquared for solutions to root-cause issues.

More Stories By Cory Marchand

Cory Marchand is a trusted subject matter expert on cyber security threats, network- and host-based assessment, and computer forensics. Mr. Marchand has supported customers across state, federal and military government as well as the private sector over his 10+ years in the field of computer security. Mr. Marchand holds several industry-related certifications, including CISSP, EnCE, GSEC, GCIA, GCIH, GREM, GSNA and CEH.
