Developers Think Functionality

But less about scalability

Two weeks ago I co-hosted a webinar with one of our users – Bill Mar, Director of Engineering Services at SmithMicro Software. SmithMicro provides the backbone of our digital life by connecting different digital devices together. Bill works in the Wireless business unit on voice-related services, e.g. VoiceSMS or Visual Voicemail – services we have all become used to since we started running around with smartphones such as the iPhone or BlackBerry.

Bill talked about how SmithMicro had to move towards Proactive Performance Management as the company and the user base started to grow. In his presentation he made an interesting but bold statement: Developers Think Functionality – But Less About Scalability.

As I was a developer for many years (and still do a little coding on certain features at dynaTrace today), I had to think about this statement. At first I didn't know whether to agree with him from the perspective of my current role at dynaTrace or to be offended from the perspective of a developer who just likes to code new features. In the end I agreed with him – especially after listening to everything he had to say about his day-to-day challenges as Director of Engineering Services.

In the webinar Bill gave some great insight into what they did in order to become more proactive with performance management. He shared the recommendations and Best Practices that have worked for his team, told some great stories, and used some memorable analogies. The bold statement I mentioned at the beginning is just a teaser :-)

Problems came with growing business success

Business success is a great thing, and is what every company is designed to achieve. More active users mean more money spent on the products or services you sell. If you provide Software as a Service – as SmithMicro does – and you start with a rather small user base, you don't necessarily run into software-related issues right away. SmithMicro saw certain usage peaks during the year – for example during the holiday season or around New Year's, when people send their best wishes to friends and family using these digital services. With growing success, however, more volume-related issues bubbled up to the surface. It was rather easy to find the initial load-related problems by digesting log files and looking at exception stack traces. Even though this process took a certain amount of time, it was still fast enough to react to problems coming in from a rather small user base.

Problems happen faster if you drive faster

When driving 100 miles an hour you have much less time to react and avoid a fatal crash than when driving 10 miles an hour. The same is true for online business. If you handle 100 transactions an hour, you may lose the business of a hundred users if it takes you an hour to fix a problem. If you handle 100 transactions per second (TPS), you will lose a whole lot of money in that same hour. Bill faced this problem as SmithMicro reached 100 TPS: looking at log files and analyzing exception stack traces was no longer fast enough to react to problems and avoid losing business. There is a two-pronged approach to this problem:
a) don't let code with potential scalability issues end up in production, and
b) bring tools into production that allow Operations to react more proactively (an early alerting system) and that give developers all the information they need without having to analyze log files (a minimal sketch of such a check follows below).
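
As a minimal sketch of what such an early-alerting check might look like – the class name and threshold are made up for illustration, and this is not SmithMicro's or dynaTrace's implementation – one could watch the rolling average response time of recent transactions instead of waiting for errors to show up in log files:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Hypothetical early-warning check: alert when the rolling average response
    // time of the most recent transactions drifts above an assumed threshold.
    public class ResponseTimeAlert {
        private static final int WINDOW = 1000;          // last 1000 transactions
        private static final double THRESHOLD_MS = 500;  // assumed per-transaction SLA

        private final Deque<Long> recentMillis = new ArrayDeque<Long>();
        private long sum;

        public synchronized void record(long responseMillis) {
            recentMillis.addLast(responseMillis);
            sum += responseMillis;
            if (recentMillis.size() > WINDOW) {
                sum -= recentMillis.removeFirst();
            }
            double avg = (double) sum / recentMillis.size();
            if (recentMillis.size() == WINDOW && avg > THRESHOLD_MS) {
                // In a real system this would notify Operations; here we just print.
                System.out.printf("ALERT: avg response time %.0f ms over last %d transactions%n",
                        avg, WINDOW);
            }
        }
    }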

Developers need to understand their code and the real use case scenarios

Bill mentioned several interesting things on that topic and started with another great analogy: the plans used to build a house are not the same as the plans of the house as it was actually built. In order to have a clear understanding of what is actually going on in the application, it is important to have plans of the "real" architecture. It is hard and not always practical to maintain blueprints or class diagrams, as software is very dynamic – changes often happen because they have to happen, and nobody thinks about updating the documentation. A Best Practice therefore is that developers and architects need to understand the current architecture as it is – not as they think it should be.

SmithMicro uses dynaTrace Sequence Diagrams from Real-Life Transactions instead of manually maintained UML Diagrams

On the topic of scalability, Bill talked about having an early focus on things like memory allocation, performance and scalability of critical components. Coming back to his initial bold statement about developers focusing only on functionality, he made it clear that functional readiness doesn't necessarily mean Production Ready. With longer-running local tests that exercise real use-case scenarios, developers can easily identify problems like excessive memory consumption or non-performing code using simple load generators and profiling-like tools. Scalability is a key requirement, and using the real use cases to verify scalability is another Best Practice for proactive performance management.
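
As a minimal sketch of such a longer-running local test – with a made-up VoicemailService standing in for the real component, and no claim that this is how SmithMicro tests – one could repeatedly exercise a single use case and keep an eye on heap usage and call times:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    // Hypothetical longer-running local test: exercise one real use-case scenario
    // over and over and watch whether heap usage keeps climbing or call times degrade.
    public class LocalLoadTest {

        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            VoicemailService service = new VoicemailService(); // made-up component under test

            for (int i = 1; i <= 1_000_000; i++) {
                long start = System.nanoTime();
                service.fetchInbox("user-" + (i % 1000));      // assumed use case
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;

                if (i % 10_000 == 0) {
                    long usedMb = memory.getHeapMemoryUsage().getUsed() / (1024 * 1024);
                    System.out.printf("iteration %d: last call %d ms, heap used %d MB%n",
                            i, elapsedMs, usedMb);
                }
            }
        }

        // Minimal stub so the sketch compiles; replace with the real component.
        static class VoicemailService {
            java.util.List<String> fetchInbox(String userId) {
                return java.util.Collections.singletonList("message for " + userId);
            }
        }
    }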

SmithMicro looking at individual PurePaths captured under load to identify scalability issues and performance bottlenecks

Operations needs early indicators and an understanding of how the applications work

Not all problems can be avoided by being proactive in development. Another Best Practice from SmithMicro is therefore to give Operations everything they need to identify problems early on, and to help them understand what to do when problems appear on the horizon without having to call in engineering every time a dashboard indicates an issue.

Operations therefore needs early indicators such as trend changes in transaction response times, memory consumption, garbage collection activity, and the number and execution time of database queries. To capture this information, the right set of tools needs to be brought in – tools that are lightweight enough to avoid unnecessary overhead but that provide enough information for both Operations and developers to analyze the problems that occur. Traditional monitoring tools that only watch certain silos of the application stack – e.g. web server, app server, network, database – only help to identify the problematic region. For Operations to understand a problem and for developers to identify its root cause, end-to-end transactional tracing is needed, with the ability to view this data at a high level as well as in depth.
A high-level view provides Operations with enough data to identify performance trends and hotspots in their application infrastructure.
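
As a rough illustration of the kind of early indicators listed above – and explicitly not how dynaTrace or SmithMicro collect them – the standard JDK MXBeans already expose heap usage and garbage collection activity; a minimal sampling loop might look like this:

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    // Sketch of early indicators read from the standard JDK MXBeans: heap usage
    // and cumulative garbage collection activity. A real setup would ship these
    // values to a dashboard; here they simply go to stdout once a minute.
    public class EarlyIndicators {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
                System.out.printf("heap used: %d MB of %d MB%n",
                        heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

                for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                    System.out.printf("GC %s: %d collections, %d ms total%n",
                            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
                }
                Thread.sleep(60_000); // sample once a minute
            }
        }
    }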

High-Level Operations Memory Dashboard used to identify trends in Memory Allocations, Usage and Garbage Collection Activity

The In-Depth view on the same collected data provides developers with enough method and component-level data for problem analysis without having to digest log files and stack traces:

Low Level Database Dashboard shows Database Activity as well as individual SQL Statements and their Bind Variables
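
dynaTrace captures this level of detail automatically; purely for illustration – this is not dynaTrace's mechanism, and the table and column names are made up – here is a rough sketch of how query time and bind variables could be logged manually at the JDBC level:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Illustrative helper: time one SQL statement and log its text and bind
    // variable so the kind of detail shown in the low-level database dashboard
    // is available without digging through log files. Caller closes the resources.
    public class TimedQuery {
        public static ResultSet queryVoicemails(Connection con, String userId) throws SQLException {
            String sql = "SELECT id, received_at FROM voicemail WHERE user_id = ?"; // assumed schema
            PreparedStatement stmt = con.prepareStatement(sql);
            stmt.setString(1, userId);

            long start = System.nanoTime();
            ResultSet rs = stmt.executeQuery();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("SQL took %d ms: %s [bind 1 = %s]%n", elapsedMs, sql, userId);
            return rs;
        }
    }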

Developers tend to be curious and often try things they shouldn't: the goal for Bill is that Operations can do a better job of being proactive and not need to call in developers every time a dashboard shows RED. With such early indicators and a better understanding of the application and its dependencies on all the components involved, Operations can solve many production problems on their own. The problem they often ran into was that developers were rather "relaxed" when troubleshooting problems in production – often causing more problems than the ones they were working on.
As Bill said: "If you don't know it's gonna work – you shouldn't try it." To prevent this situation, it is important for SmithMicro to extract all the information developers need from the production system, helping them understand what is going on without having to "mess with the real world" (I am still not offended by those comments :-) )

Where is SmithMicro heading?

The overall goal for Bill and his team is to become more proactive when it comes to performance management. They want to enable Operations to become more self-sufficient by extending their knowledge of application internals and giving them early indicators of problems they can react to. They also want to make it easier for developers to understand what is really going on in their application – and to spread that knowledge across cross-functional teams.

Bill’s recommendations

At the end Bill gave his recommendations to all the rest of us out there.

  • Understand your use-case scenarios
    • What are your 5-15 main use case scenarios?
    • Model these use case scenarios and monitor them (see the sketch after this list)
    • By doing this you become proactive.
  • Developers
    • Understand how the application works and
    • Understand the real-life requirements that come from Operations
  • Operations
    • Understand the run-time behaviour of the application
    • Look at trending and early indicators
    • Have actionable data for developers
  • By following such a process you become more proactive, and ensure your Application is Ready for Production
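
As a minimal sketch of what modeling and monitoring the main use-case scenarios could look like – the class and the scenario names are hypothetical and not part of the webinar – each named business transaction could be counted and timed so its trend can be watched over time:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch of "model your main use-case scenarios and monitor them": count and
    // time each named business transaction (e.g. "send VoiceSMS", "fetch visual
    // voicemail") so per-scenario averages can be tracked as early indicators.
    public class UseCaseMonitor {
        private static final Map<String, AtomicLong> counts = new ConcurrentHashMap<String, AtomicLong>();
        private static final Map<String, AtomicLong> totalMillis = new ConcurrentHashMap<String, AtomicLong>();

        public static void record(String scenario, long elapsedMillis) {
            get(counts, scenario).incrementAndGet();
            get(totalMillis, scenario).addAndGet(elapsedMillis);
        }

        public static void report() {
            for (Map.Entry<String, AtomicLong> e : counts.entrySet()) {
                long count = e.getValue().get();
                long avg = get(totalMillis, e.getKey()).get() / Math.max(1, count);
                System.out.printf("%s: %d calls, avg %d ms%n", e.getKey(), count, avg);
            }
        }

        private static AtomicLong get(Map<String, AtomicLong> map, String key) {
            AtomicLong value = map.get(key);
            if (value == null) {
                value = new AtomicLong();
                AtomicLong existing = map.putIfAbsent(key, value);
                if (existing != null) {
                    value = existing;
                }
            }
            return value;
        }
    }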

Further Information

I really hope this summary of the webinar made you want to hear more and listen to the recorded version. Follow this link and listen to what Bill and I had to say about Proactive Performance Management. There is also other material you might be interested in, like The Practical Guide to Performance Management in Development (how we at dynaTrace do it internally), Best Practices from Zappos on Performance Management, and Alois's blogs in his Performance Almanac.


More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
