NoSQL or Traditional Database

From an APM perspective there isn’t really much difference

Traditional enterprise database vendors often point to the lack of professional monitoring and management tooling for NoSQL solutions. Their argument is that enterprise applications require sophisticated tuning and monitoring of the database to ensure smooth, performant operation. NoSQL vendors agree on that point, even while arguing that this gap is not reason enough to favor an RDBMS over their solutions; several of them, for example those behind Cassandra, MongoDB and HBase, try to differentiate themselves by providing enterprise-level monitoring and management software. Both sides are of course correct that monitoring and managing the database, especially its performance, is important. But they are also repeating the mistake that RDBMS vendors have made for the last decade: they ignore the application.

Application Performance Management for Databases
What matters in the end is not the performance of the database itself, but the performance of the application that uses it. We have explained the different problem patterns numerous times in this blog (...) so I won't go into them here. They all have one thing in common: the application logic drives how the database is used, and there is only so much you can tune on the database to compensate for mistakes on the application side. So we need to monitor and optimize that usage pattern itself. Application logic in turn is driven by input data or, in most cases, end-user interaction, so we need to understand how end-user behavior and end-user actions drive database usage. On the other hand, we need to understand the impact of the database on those actions. What's important to understand is that the database can work and perform to the highest standards and still be the main bottleneck as far as the application is concerned - if it is used incorrectly or has a bad access pattern. RDBMS and NoSQL databases have that in common. Therefore the fundamental way that I, as a performance engineer, approach application performance analysis and management does not change:
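
To make that concrete, here is a minimal JDBC sketch (the orders table, its columns and the method names are invented for illustration): the first variant issues one statement per order and therefore scales with the size of the input, while the second fetches the same data in a single round trip. No amount of database tuning will make the first variant as cheap as the second - only a change in the application can.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AccessPatternExample {

    // Anti-pattern: one SELECT per order id, i.e. N database round trips per transaction.
    static Map<Long, String> loadStatusesOneByOne(Connection con, List<Long> orderIds)
            throws SQLException {
        Map<Long, String> result = new HashMap<>();
        try (PreparedStatement ps =
                     con.prepareStatement("SELECT status FROM orders WHERE id = ?")) {
            for (Long id : orderIds) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        result.put(id, rs.getString("status"));
                    }
                }
            }
        }
        return result;
    }

    // Better: fetch the same data with a single statement and a single round trip.
    static Map<Long, String> loadStatusesInOneCall(Connection con, List<Long> orderIds)
            throws SQLException {
        Map<Long, String> result = new HashMap<>();
        StringBuilder in = new StringBuilder();
        for (int i = 0; i < orderIds.size(); i++) {
            in.append(i == 0 ? "?" : ",?");
        }
        try (PreparedStatement ps = con.prepareStatement(
                "SELECT id, status FROM orders WHERE id IN (" + in + ")")) {
            for (int i = 0; i < orderIds.size(); i++) {
                ps.setLong(i + 1, orderIds.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    result.put(rs.getLong("id"), rs.getString("status"));
                }
            }
        }
        return result;
    }
}
```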

First we need to understand whether a particular business transaction is slowing down or has a general performance problem, and whether this has an impact on the end user:

Do any of the business transactions violate a baseline or have a negative end-user experience?
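
As a rough sketch of what a baseline violation boils down to (the transaction names, response times and threshold factor below are made up for illustration): compare the current response time of each business transaction with a baseline derived from its past behavior and flag the ones that exceed it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BaselineCheck {

    // Returns the business transactions whose current response time exceeds their
    // historical baseline by the given factor (e.g. 1.5 means 50% slower than normal).
    static List<String> violations(Map<String, Double> baselineMs,
                                   Map<String, Double> currentMs,
                                   double factor) {
        List<String> violating = new ArrayList<>();
        for (Map.Entry<String, Double> entry : currentMs.entrySet()) {
            Double baseline = baselineMs.get(entry.getKey());
            if (baseline != null && entry.getValue() > baseline * factor) {
                violating.add(entry.getKey());
            }
        }
        return violating;
    }

    public static void main(String[] args) {
        Map<String, Double> baseline = new HashMap<>();
        baseline.put("/checkout", 250.0); // hypothetical baseline response times in ms
        baseline.put("/search", 120.0);

        Map<String, Double> current = new HashMap<>();
        current.put("/checkout", 900.0);  // clearly slower than its baseline
        current.put("/search", 130.0);    // within the normal range

        System.out.println(violations(baseline, current, 1.5)); // prints [/checkout]
    }
}
```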

Next we would isolate the high-level cause of the slowdown or performance issue. There are many ways of doing this, but it's always some kind of fault domain isolation:

Transaction Flow shows that we spend ~15% of the response time waiting for the Database

This Transaction Flow shows that the Business Backend is calling a Cassandra Database Cluster

This tells us if we spend time waiting on the database. We see that there is not much difference between a regular database and something like Apache Cassandra!
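
An APM tool does this breakdown automatically, but the underlying arithmetic is simple. Here is a hypothetical timing sketch (the queries and table names are made up) that shows what "percent of the response time spent waiting for the database" means:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DatabaseContribution {

    // Executes a query and returns the time spent waiting on the database, in nanoseconds.
    static long timedQuery(Connection con, String sql) throws SQLException {
        long start = System.nanoTime();
        try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // consume the result, as the application would
            }
        }
        return System.nanoTime() - start;
    }

    // Hypothetical business transaction: report which share of the total response
    // time was spent waiting on database calls.
    static void handleTransaction(Connection con) throws SQLException {
        long transactionStart = System.nanoTime();
        long databaseTime = 0;

        databaseTime += timedQuery(con, "SELECT id, status FROM orders WHERE status = 'OPEN'");
        // ... business logic between the calls ...
        databaseTime += timedQuery(con, "SELECT count(*) FROM order_items");

        long total = System.nanoTime() - transactionStart;
        System.out.printf("database contribution: %.1f%%%n", 100.0 * databaseTime / total);
    }
}
```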

What's important here is that if the database shows up as the main contributor, this does not mean that the database itself is at fault; it might just be the application's usage of it. So I would check the usage and access pattern next:

This shows the select statements executed within a particular transaction type

This shows all Cassandra database statements executed in a particular transaction, against all participating Cassandra servers

Here I might see that the reason for bad performance is that we execute too many statements per transaction, or that we read too much data. If that is the case, I would need to check the application logic itself and potentially get a developer to fix it. The developer would of course want to understand where, why and in which particular transaction the statements are executed.

This shows a single Transaction (PurePath) and the Cassandra Statements executed within it
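
Whether the symptom is "too many statements" or "too much data", a small hypothetical counter shows what such a transaction view is effectively tallying (the thresholds are invented for illustration):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class TransactionAccessStats {

    private final Map<String, Integer> executionsPerStatement = new HashMap<>();
    private long rowsRead;

    // Called by (hypothetical) instrumentation for every statement a transaction executes.
    void recordStatement(String statement, long rows) {
        Integer count = executionsPerStatement.get(statement);
        executionsPerStatement.put(statement, count == null ? 1 : count + 1);
        rowsRead += rows;
    }

    int totalStatements() {
        int total = 0;
        for (int count : executionsPerStatement.values()) {
            total += count;
        }
        return total;
    }

    // Made-up thresholds: too many statements or too much data read in one transaction.
    boolean looksSuspicious() {
        return totalStatements() > 50 || rowsRead > 10_000;
    }

    // Tells the developer which statements dominate within the transaction.
    Map<String, Integer> breakdown() {
        return Collections.unmodifiableMap(executionsPerStatement);
    }
}
```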

If, however, a particular statement is slow, we might very well have a database issue and I would talk to the DBA. The only difference in this process in the case of a NoSQL solution is that you often have a database cluster, so I would want to understand whether the problem is isolated to a particular node or not. And the DBA will want to understand whether my access pattern leads to a good distribution across that cluster or whether I am hammering away at a subset of the nodes.

This hotspot view shows that Cassandra Server Node3 consumes much more Wait and I/O time than the others
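
To make the distribution question concrete, here is a hedged sketch assuming the DataStax Java driver (the contact point, keyspace, table and query are invented for illustration): it tallies which Cassandra node acted as coordinator for each read, so a node that is hit far more often than the others - like Node3 above - stands out immediately.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

import java.util.HashMap;
import java.util.Map;

public class ClusterDistributionCheck {

    public static void main(String[] args) {
        // Assumed setup: contact point, keyspace and table are hypothetical.
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        try {
            Session session = cluster.connect("shop");
            Map<String, Integer> readsPerNode = new HashMap<>();

            for (int orderId = 0; orderId < 1000; orderId++) {
                ResultSet rs = session.execute(
                        "SELECT status FROM orders WHERE order_id = " + orderId);
                // The node that coordinated this read; with a poor partition key
                // the same few nodes show up again and again.
                Host coordinator = rs.getExecutionInfo().getQueriedHost();
                String node = coordinator.getAddress().toString();
                Integer count = readsPerNode.get(node);
                readsPerNode.put(node, count == null ? 1 : count + 1);
            }

            for (Map.Entry<String, Integer> entry : readsPerNode.entrySet()) {
                System.out.println(entry.getKey() + " served " + entry.getValue() + " reads");
            }
        } finally {
            cluster.close();
        }
    }
}
```

If such a tally is heavily skewed, the conversation with the DBA shifts from "the cluster is slow" to "my access pattern funnels most reads to the same nodes".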

The nice thing is that, all in all, my analysis does not differ between JDBC, ADO, Cassandra or any of the many other NoSQL solutions.

APM Solution Support
There is of course a caveat here: it requires some level of support from the APM solution of choice. Sometimes it might be enough to see the API calls on the NoSQL client in my response time breakdown. More often than not, though, a little more context is desired, such as which ColumnFamily is accessed, how many rows are read, or which database node in the cluster is serving a read. For this and the aforementioned reasons I argue that APM support for your chosen database or NoSQL solution is as important as the monitoring of the database itself.

Conclusion
I have spent considerable time tuning SQL statements and indexes, but in the end the best optimizations have always been those on the application and how the application uses the database. SQL tuning almost always adds complexity and is often a workaround for bad application or data structure design. In the NoSQL world, "SQL statement" tuning is for the most part a task of the past, but data structure design has retained its importance! At the same time, logic that traditionally resided in the database now lives in the application layer, making application design even more important than before. So while some things have shifted, from an application performance engineering perspective I have to say: nothing has really changed, it's still about the application. Now more than ever!

More Stories By Michael Kopp

Michael Kopp has over 12 years of experience as an architect and developer in the Enterprise Java space. Before coming to CompuwareAPM dynaTrace he was the Chief Architect at GoldenSource, a major player in the EDM space. In 2009 he joined dynaTrace as a technology strategist in the center of excellence. He specializes in application performance management in large-scale production environments, with a special focus on virtualized and cloud environments. His current focus is how to effectively leverage BigData solutions and how these technologies impact and change the application landscape.
