In Search of the Unknown

The importance of business process monitoring alongside system monitoring

I've talked at length about the importance of business process monitoring alongside system monitoring, but in discussions I've found that an overview and simple examples are sometimes not enough to convince people of the benefits of this approach. Business owners think they don't need to know anything about the operational performance of their systems as long as they have their numbers, and engineers often don't feel they need to invest time in understanding, in detail, the business they support, dismissing the examples shown as too "common sense."

One question we ask candidates during engineering interviews at work is to describe how they would go about troubleshooting a hypothetical issue, given only minimal information. We often hear from our clients things like "our website is slow" or "something's wrong with registration" with no additional detail, and in order to figure out the potential issue we need to review the whole system at a glance. For large applications, with a myriad of moving and interweaving components, this is not an easy task. This is one of the reasons we look for the best of the best. But if you are monitoring all of those components, the task can, in many cases, be simplified.

So let's examine a real problem. A large e-commerce company called and said they were seeing less money coming in from web transactions. They have a pretty complex system with a lot of different revenue generation points, so this observation shed very little light on the root cause of the problem. Luckily, both systems and business processes were being monitored with Circonus, so the data was available to review.

As any engineer knows, step one of troubleshooting a problem is confirming the problem, so looking at the revenue trends seemed like a good starting point.

The graph clearly shows that, starting around April 30th, the trend looked abnormal in comparison to the previous few weeks. So it seemed like there was an actual problem, and the issue could potentially lie in the payment processor itself, or somewhere in the system, preventing certain users from making a purchase. So let's overlay the traffic trends, collected from Google Analytics, against the revenue graph and see if there are any common trends.
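The kind of deviation described above can be flagged programmatically by comparing each day's revenue against a baseline built from the same weekday in preceding weeks. The sketch below is a hypothetical illustration of that idea, not Circonus's actual trending logic; the figures and the 20% threshold are invented.

```python
# Flag days whose revenue falls well below the average of the same
# weekday over the preceding weeks (hypothetical data and threshold).

def abnormal_days(daily_revenue, weeks_back=3, threshold=0.8):
    """Return indices of days whose revenue is below `threshold` times
    the average of the same weekday over the prior `weeks_back` weeks."""
    flagged = []
    for i in range(7 * weeks_back, len(daily_revenue)):
        baseline = sum(daily_revenue[i - 7 * w]
                       for w in range(1, weeks_back + 1)) / weeks_back
        if daily_revenue[i] < threshold * baseline:
            flagged.append(i)
    return flagged

# Three steady weeks of $100k/day, then a sudden drop.
revenue = [100_000] * 21 + [100_000, 60_000, 58_000]
print(abnormal_days(revenue))  # → [22, 23]
```

Weekday-aligned baselines matter for e-commerce data, since weekend and weekday traffic patterns usually differ.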

Even though the traffic showed a clear drop at the same time as revenue, the ratio remained the same, allowing us to exclude the payment processor and other application logic from the equation (for now).
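The ratio check above amounts to comparing revenue per visit (a rough conversion proxy) before and after the drop: if it holds steady, fewer people are arriving, but those who do still buy at the same rate, so the purchase path itself is probably not the culprit. All figures below are hypothetical.

```python
# Compare revenue-per-visit before and after a traffic drop.
# If the ratio holds steady, look upstream of the site for the cause.
# All figures are invented for illustration.

def revenue_per_visit(revenue, visits):
    return revenue / visits

before = revenue_per_visit(100_000, 50_000)  # $2.00 per visit
after = revenue_per_visit(60_000, 30_000)    # still $2.00 per visit

# Ratio unchanged: conversion is intact, the application is (for now)
# off the hook -- the question becomes why traffic itself fell.
print(abs(before - after) < 0.05)  # → True
```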

Note: This is the first potential breaking point in the process. It is very tempting to look at the ratios, attribute revenue decrease to traffic decrease, and stop the investigation. 99% of the time, unfortunately, nothing "just" happens, so on we go.

Now for the next step - what would be a logical cause for a drop in overall traffic to the site? Response time is probably the first thought that should come to mind. So let's look at what the HTTP checks collected.

Load times didn't seem to deviate from the norm, but an HTTP response metric doesn't provide full visibility into load times for a dynamic application, so let's check the health of the database and the CPU usage on the server(s) to validate that the underlying platform is not the bottleneck. There are numerous metrics for monitoring database and system health that should be, and in this case are, collected, but when researching the root cause of an elusive problem, diving deep into a specific component early in the process can waste time.
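At this stage, "within the norm" can be checked quickly by comparing current readings against expected operating ranges, without diving into any one component. This is a minimal sketch; the metric names and bounds are made up for illustration, not taken from any particular monitoring product.

```python
# Validate current system readings against expected operating ranges.
# Metric names and bounds are hypothetical.

NORMS = {
    "http_load_time_ms": (0, 800),
    "db_query_time_ms": (0, 50),
    "cpu_utilization_pct": (0, 75),
}

def out_of_norm(readings, norms=NORMS):
    """Return the names of metrics whose value falls outside its range."""
    return [name for name, value in readings.items()
            if not (norms[name][0] <= value <= norms[name][1])]

current = {
    "http_load_time_ms": 420,
    "db_query_time_ms": 12,
    "cpu_utilization_pct": 35,
}
print(out_of_norm(current))  # → [] -- everything within the norm
```

An empty result here is exactly the situation described below: the systems look healthy, yet the business problem is real.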

Both metrics appear well within the norm, so at first glance, it seems the problem is not a systems issue.

Note: This is the second point of the investigation where the process can break down. Many technologists will report either that the reported problem cannot be confirmed, or that it is just an anomaly, because the system monitors don't exhibit any issues. This is exactly why an understanding of the business by the technology team is vital.

With that said, what would be the next logical process to validate? It is not uncommon for an e-commerce site to see a drop in purchases if it either stops promoting or its marketing campaign is ineffective: traffic to the site slows down, subsequently decreasing the number of transactions. This company, in particular, sends out tens of millions of emails a day that bring in new users and, subsequently, new conversions. So let's take a look at the email deliverability and bounce rates collected from the company's MTAs.

Bingo! The bounce rates skyrocketed at the same time the drop in traffic and revenue occurred. Upon closer investigation, it appeared that one of the major ESPs had accidentally blocked the delivery domain, so the emails did not reach their recipients. The issue was resolved (after some discussions with the ESP) and the trends returned to the expected level.
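A bounce-rate spike like this one is easy to catch automatically by comparing each day's rate to a trailing baseline. The sketch below uses invented daily (sent, bounced) counts and an arbitrary 3x multiplier; it is an illustration of the idea, not the company's actual alerting rule.

```python
# Flag days whose email bounce rate jumps far above the trailing average.
# Daily (sent, bounced) counts and the multiplier are hypothetical.

def bounce_rate(sent, bounced):
    return bounced / sent

def bounce_spikes(daily, window=7, multiplier=3.0):
    """Return indices of days whose bounce rate exceeds `multiplier`
    times the average rate over the preceding `window` days."""
    rates = [bounce_rate(sent, bounced) for sent, bounced in daily]
    spikes = []
    for i in range(window, len(rates)):
        baseline = sum(rates[i - window:i]) / window
        if rates[i] > multiplier * baseline:
            spikes.append(i)
    return spikes

# A week of ~1% bounces, then an ESP block pushes bounces to 40%.
daily = [(10_000_000, 100_000)] * 7 + [(10_000_000, 4_000_000)]
print(bounce_spikes(daily))  # → [7]
```

Wired into a dashboard, a check like this would have surfaced the ESP block the moment it happened, rather than after revenue was visibly down.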

Keep in mind, had email deliverability not been the issue, there were multiple other metrics on the list to verify, both system (operational and development alike) and business. The amazing part of all of this is that I was able to view the whole system at a glance in a single graph. Granted, stacking everything on one graph is probably not the optimal everyday approach, but it is very useful in certain cases when a direct overlay correlation is needed. For everything else, a real-time dashboard that displays all the vital points of the business at any given moment is a must-have for anyone responsible for business and/or system health.

Everyone responsible for the success of a business, regardless of role, needs the ability to see the status of the whole business at a glance at any given point. System engineers don't need to know all the ins and outs of marketing, but they should be aware of the overall organizational goals and should be able to spot irregularities in the business trends. Similarly, CEOs don't need to know how the systems work in the background, but they should be able to correlate high email bounce rates (if email is critical to the business) with a decrease in purchases.

The point of all of this is that everything should be monitored, and that there are tools and methods that can enable users in all roles--within any organization--to ensure the success of the business. Get 'em, learn 'em, use 'em! You will thank me later.

More Stories By Leon Fayer

Leon Fayer is Vice President at OmniTI, a provider of web infrastructures and applications for companies that require scalable, high-performance, mission-critical solutions. He possesses a proven background in both web application development and production deployment for complex systems, and in his current role he advises clients on critical aspects of project strategy and planning to help ensure project success. Leon can be contacted at [email protected]


