In Defense of the Agent

They're not all bad...

Last week we published an article entitled 'Log Management 101 - Where Do Logs Come From?' to which one of our more witty readers retorted:

"Sometimes a server and an app love each other very very much..."  :-)


The article covered some of the basics around collecting log data from various parts of the stack, as shown in the diagram below.

[Figure: application stack diagram]

In short, these fall into the following categories:

  • Libraries for common languages and frameworks - These allow you to log directly from your application source code (see the sketch just after this list).
  • Collector agents - Usually built for common operating systems, agents will collect data from your file system in real time and forward it on to a third-party service.
  • Syslog - Ships out of the box on all Linux and Unix distros and is commonly supported by devices such as routers and switches. It comes in a number of flavors (rsyslog, syslogd, syslog-ng...), with some more capable than others.
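
To make the first of these concrete, here is a minimal sketch of the library approach using Python's standard-library logging module. It is not tied to any particular provider; the file path, logger name, and fields are just illustrative choices.

```python
# Minimal sketch of logging directly from application code with Python's
# standard-library logging module. The file path and format are illustrative;
# an agent or syslog could then pick this file up and forward it.
import logging

logging.basicConfig(
    filename="/var/log/myapp/app.log",   # hypothetical application log file
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)

log = logging.getLogger("billing")

def charge(user_id, amount):
    log.info("charging user_id=%s amount=%.2f", user_id, amount)
    # ... business logic ...
    log.info("charge succeeded for user_id=%s", user_id)
```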

Over the coming weeks we'll be diving into these different options in more detail, explaining the pros, cons, and best practices around each. This week we've decided to look at agents.

In Defense of the Agent
While some providers tout the evils of running agents on your system and can oft be heard shouting, "no agents here!!!", we prefer to keep an open mind at Logentries. We'd rather not dictate to our community what approach to take when collecting log data, nor do we try to prescribe what's best for you - rather, we'd prefer to give you the different options and let you make that decision for yourself.

That being said, like most things in life, agents have their pros and cons. They are certainly not a silver bullet, but they do have their advantages in certain scenarios.

We Want Agents
The two main advantages of using an agent to forward your log data are (1) quick setup and (2) additional functionality.

Having the option to get set up with new tools and technologies quickly is important. It's often overlooked by providers, but it adds great value for users and, in my opinion, it is a critical component of any service that strives to provide a low barrier to entry for the wider community. From our many conversations with users over the past few years, we have found that they do not have a lot of time when it comes to evaluating new tools and technologies; being able to get set up and using features quickly is a must for many of them. I can certainly relate to this. Even when I was completing my PhD - where I researched and built performance profiling tools for a living - I had a rule of thumb that if configuring a profiler took more than 10 minutes, I usually just moved on. I generally had something more important to be doing - and that was in an academic setting, where time can move more slowly than in the commercial world :) In the commercial world people usually have even smaller time windows to work in.

A well-built, well-documented agent should allow you to get up and running quickly. For example, the Logentries agent can have you set up within 60 seconds with a single command. It works as follows:

  • Copy and paste the single-line instruction from our quick start guide into your terminal.
  • The agent will be downloaded and installed:
    • You will be asked for your Logentries credentials.
    • The install process will automatically find standard logs on your system and configure them to send data to your Logentries account.
    • The install process will automatically send some sample log events into your account to (1) confirm you have connected to our service and (2) give you some data to play with so you can immediately try out our features without having to generate log data from your own system.
    • The install process will automatically configure some sample tags and reports so you can immediately see the value of highlighting important events, creating alerts, and building reports.
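
To give a feel for what a collector agent is actually doing once it is installed - this is a toy illustration only, not the Logentries agent - the core loop boils down to "follow a file and forward new lines upstream". The log path, endpoint URL, and token below are placeholders for whatever your provider gives you.

```python
# Toy illustration of the core loop of a file-following collector agent
# (NOT the Logentries agent): watch a log file and forward new lines upstream.
# LOG_PATH, INGEST_URL, and TOKEN are placeholders.
import time
import urllib.request

LOG_PATH = "/var/log/syslog"                     # a standard log an installer might auto-detect
INGEST_URL = "https://ingest.example.com/logs"   # placeholder endpoint
TOKEN = "YOUR-LOG-TOKEN"                         # placeholder credential

def forward(line: str) -> None:
    req = urllib.request.Request(
        INGEST_URL,
        data=line.encode("utf-8"),
        headers={"Authorization": f"Bearer {TOKEN}", "Content-Type": "text/plain"},
    )
    urllib.request.urlopen(req)

def follow(path: str):
    with open(path, "r") as f:
        f.seek(0, 2)                 # start at the end of the file, like `tail -f`
        while True:
            line = f.readline()
            if line:
                yield line.rstrip("\n")
            else:
                time.sleep(0.5)      # wait for new data

if __name__ == "__main__":
    for entry in follow(LOG_PATH):
        forward(entry)
```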

The alternative to the above is configuring syslog, which often assumes a level of understanding of syslog itself, where its config files live, and how to go about editing them. While this can also be documented (and we have been making our syslog process easier and easier to follow), we find that you can more easily get tripped up, especially when there are lots of different flavors and versions of syslog. This can be particularly painful if you are running some outdated version where instructions or config formats can differ ever so slightly. Syslog can also be a challenge if you want to collect data from non-syslog log files that do not live in the /var/log folder.
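
That said, for new application code there is a middle ground: hand events straight to the local syslog daemon rather than writing your own files, and let rsyslog or syslog-ng do the forwarding. A minimal sketch with Python's standard-library SysLogHandler, assuming a Linux box where /dev/log is the daemon's socket:

```python
# Sketch: send application events to the local syslog daemon, which rsyslog,
# syslog-ng, etc. can then be configured to forward. /dev/log is the usual
# Unix-domain socket on Linux; other platforms and distros differ.
import logging
from logging.handlers import SysLogHandler

handler = SysLogHandler(address="/dev/log")
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

log = logging.getLogger("myapp")
log.setLevel(logging.INFO)
log.addHandler(handler)

log.warning("disk usage at %d%% on /var", 85)
```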

Furthermore, if you are living in the Windows world, syslog is not going to be an option (well, not out of the box anyway... you can always download and configure Snare, the Windows equivalent of syslog). If you fall into this category you will likely require an agent to be able to start collecting your logs without a major time investment.

The second main advantage of agents is that they can come with additional functionality. For example, the Logentries agent also provides the following:

  • Data filtering - This can be important if you have sensitive data in your logs. The Logentries agent has a filtering component that can be configured to cleanse your data and strip out any private information before it leaves your network (a generic sketch of the idea follows this list).
  • A command line interface - Traditionally, sysadmins and devs worked with their logs on the command line using a combination of commands like tail -f, grep, awk, etc. So it makes sense that from time to time you may want to reuse some of these old skills, even if you are using a log management tool with nice browser-based functionality (e.g. search, tagging, alerts, reports...). The Logentries agent gives you command line access to all the logs contained within your account. For example, you can easily search, export and filter data from your Logentries account via the CLI - you can also navigate your account and list your logs as if you were navigating your file system.
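
As a generic sketch of the filtering idea (not how the Logentries agent implements it), here is a small redaction pass that strips email addresses from events before they are forwarded; real PII scrubbing would need more patterns than this.

```python
# Generic sketch of pre-forwarding data filtering: redact email addresses from
# log lines before they leave your network. The regex is deliberately simple;
# real scrubbing would also cover credit cards, tokens, names, and so on.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def scrub(line: str) -> str:
    return EMAIL_RE.sub("[REDACTED-EMAIL]", line)

print(scrub("password reset requested for alice@example.com from 10.0.0.5"))
# -> password reset requested for [REDACTED-EMAIL] from 10.0.0.5
```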

No Agents Here
The most common reasons for not using agents are:

  • Maintenance - If you have a large environment with hundreds of server instances, the thought of installing/updating/patching another piece of code might be undesirable. This may especially be the case if your systems already ship with syslog. That being said, if you do have such a large environment, you are likely automating deployment through something like Chef or Puppet, so this may be less of an issue. Agents thus need to provide for a silent install so that they can be deployed en masse. Furthermore, if the agent is properly managed and maintained (e.g. through the various *nix package managers, as is the case with the Logentries agent), updating your agent to new versions will be fairly seamless and will happen along with the rest of your updates.
  • Trust - Running someone else's code on your system takes a level of trust. You need to know that it has been well written and isn't going to kill performance or have any major security holes. To help alleviate any concerns, however, we have open sourced the Logentries agent so that you can view our code, and even modify it if you so wish - although it is understandable if you do not have the time (or inclination) to spend reviewing our agent code base :) Furthermore, in some cases, using an agent is just not going to be an option (perhaps due to strict security policies or hard performance constraints). Again, this is where syslog may be more of a known and trusted quantity.

In summary, agents are not necessarily good or bad; they are not perfect, nor are they evil :) Like most of us, they have their good points and bad points.

This article originally posted on the Logentries blog.


More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
