What Is 'Real-Time' Anyway...?

I love a good buzzword...cloud, Big Data, analytics... And even more than the buzzwords, I love the liberties people tend to take in applying them to their new systems and services. Such buzzwords regularly get abused and often get washed into marketing material and product websites in an attempt to hoodwink and woo unsuspecting new customers. One of my (least) favorite buzzwords, and one I've noticed popping up more and more in the logging space, is "real-time."

So what does real-time mean anyway? As with all good computer science questions, it really depends on the context. John Stankovic, in his seminal 1988 article in IEEE Computer entitled 'Misconceptions About Real-Time Computing', describes a real-time system as follows:

"A real-time system is one in which the correctness of the system depends not only on the logical result of computation, but also on the time at which the results are generated."[1]

An example is what are referred to as "hard real-time systems" [2], where computations must meet stringent timing constraints and one must guarantee that they complete before specified deadlines. Failure to do so can lead to intolerable system degradation and, in some applications, to catastrophic loss of life or property.

Many safety-critical systems are hard real-time systems, including embedded tactical systems for military applications, flight mission control, traffic control, production control, nuclear plant control, etc. In many cases the real-time properties need to be guaranteed and proven, often using techniques such as formal methods [3].
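To make "guaranteed and proven" a little more concrete, here is a minimal sketch (in Python, with made-up task parameters) of one classic analytical test for hard real-time systems: the rate-monotonic utilization bound. Passing the test is a sufficient, though not necessary, condition for a set of periodic tasks to meet all of its deadlines under rate-monotonic scheduling; it is just one simple example of the kind of up-front guarantee such systems require.

    # Rate-monotonic utilization bound (Liu & Layland): a periodic task set is
    # guaranteed to meet all deadlines under rate-monotonic scheduling if its
    # total utilization is at most n * (2^(1/n) - 1). The task parameters below
    # are made-up example values, not from any real system.

    def rm_schedulable(tasks):
        """tasks: list of (C, T) pairs, where C is worst-case execution time
        and T is the period (and deadline). Returns True if the utilization
        bound test guarantees schedulability."""
        n = len(tasks)
        utilization = sum(c / t for c, t in tasks)
        bound = n * (2 ** (1 / n) - 1)  # tends to ln 2 (~0.693) as n grows
        return utilization <= bound

    # Three hypothetical tasks, times in milliseconds: utilization is 0.3,
    # the bound for n = 3 is ~0.78, so the deadline guarantee holds.
    print(rm_schedulable([(1, 10), (2, 20), (4, 40)]))  # True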

Near real-time is often defined as not having any hard constraints, but it implies that there is no significant delay in processing an event. In many cases this means within a few milliseconds or seconds of the event - again, this really depends on the context. From a system monitoring perspective (for non-safety-critical applications), near real-time, i.e. within a few seconds, is usually sufficient when it comes to alerting, for example. And by non-safety-critical, I mean your site/app might be down, but it will not lead to loss of life. That being said, it could result in serious loss of $$$.

In such scenarios, if something is awry with one of your system components, you want to be notified immediately, so that you can go about rectifying the issue right away. A few minutes is usually unacceptable, however, as it generally means that users/customers of your system/service are being affected without you knowing anything about it, resulting in damage to your brand and your business's top line.

That's why it really surprises me when I see so many log management solutions run their alerting as background jobs or saved searches that execute periodically every 5, 10 or 15 minutes. In my opinion this doesn't really cut it when it comes to alerting and is NOT real-time/near real-time by any standard. Consider this: if there were an emergency at home, do you think it would be acceptable to wait 5, 10 or 15 minutes before you picked up the phone and called the emergency services? A few seconds, yes; a few minutes, NO!

A further observation by Stankovic in his 1988 article is that another common misconception about real-time systems is that the problem can be solved by throwing hardware at it. As Stankovic rightly points out, it cannot - it's all about the architecture.

That's why we have built a very different architecture from that of any other logging provider. In short, most log management solutions work as follows (sketched in code after the list):

  • Data is sent to the log management service
  • It is indexed and written to disk
  • You can make use of their (complex) search language to dig into your data
  • If you want to create notifications, you need to set up saved searches (using the complex search language) that are scheduled to run every 5, 10 or 15 minutes.
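As a rough illustration of that model, here is a minimal sketch in Python (all names hypothetical, with in-memory stand-ins for the index and the scheduler). The point to notice is that alert latency is bounded by the polling interval, not by the arrival of the event:

    import time

    INDEX = []  # stand-in for the indexed, on-disk log store

    def ingest(event):
        INDEX.append(event)  # steps 1-2: data arrives and is indexed

    def saved_search(query):
        return [e for e in INDEX if query in e]  # step 3: search the index

    def alerting_job(query, interval_seconds=300):
        # Step 4: a background job re-runs the saved search on a schedule.
        # Worst case, a matching event sits in the index for a full interval
        # (here, 5 minutes) before anyone is notified.
        while True:
            hits = saved_search(query)
            if hits:
                print(f"ALERT: {len(hits)} events matched {query!r}")
            time.sleep(interval_seconds)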

At Logentries we have flipped this approach on its head and built a unique pre-processing layer into our system architecture that processes your data in real-time, so the analysis of your log events is done up front, as they arrive. And when we say real-time, we mean real-time.

It works as follows (again, sketched in code after the list):

  • Data is sent to Logentries
  • It passes through our pre-processing layer, which analyzes each event for defined patterns (i.e. keywords, regular expressions, defined search expressions) in real-time
  • Notifications can be generated from our pre-processing layer such that you receive them within seconds of the important events occurring (e.g. exceptions, errors, warnings...)
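Here is a minimal sketch of what such an in-line pre-processing layer looks like in principle (hypothetical names, with a print() standing in for the notification integrations; the real system is of course a distributed pipeline rather than a loop over compiled regexes):

    import re

    # Patterns defined up front: keywords, regular expressions, search expressions.
    PATTERNS = [re.compile(p) for p in (r"ERROR", r"Exception", r"WARN")]

    def notify(event, pattern):
        # Stand-in for an email/webhook/pager notification.
        print(f"ALERT ({pattern.pattern}): {event}")

    def index(event):
        pass  # indexing for later search still happens as usual

    def ingest(event):
        # Each event is checked against the defined patterns as it arrives,
        # so alert latency is the processing time of a single event rather
        # than a polling interval.
        for pattern in PATTERNS:
            if pattern.search(event):
                notify(event, pattern)
                break
        index(event)

    ingest("2015-03-02 12:00:01 ERROR payment service timed out")
    # -> ALERT (ERROR): 2015-03-02 12:00:01 ERROR payment service timed out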

The end result is that you get notified in seconds as opposed to minutes. In a world where time is money, and where buzzwords are only as useful as the architecture behind them, I vote for REAL real-time alerting - it's an important requirement in any logging service!

[1] J. Stankovic, 'Misconceptions About Real-Time Computing', IEEE Computer, 1988

[2] J. Xu and D. L. Parnas, 'On Satisfying Timing Constraints in Hard-Real-Time Systems', IEEE Transactions on Software Engineering, Vol. 19, No. 1, January 1993

[3] 'Formal Methods for the Design of Real-Time Systems', International School on Formal Methods for the Design of Computer, Communication and Software Systems (SFM-RT 2004), Springer, September 2004

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
