

What Customers Expect in a New-Generation APM (2.0) Solution

In the last blog, I discussed the challenges with an APM 1.0 solution. 


As an application owner or a member of the application support team, you want to:

  • Exceed service levels and avoid costly, reputation-damaging application failures through improved visibility into the end-user experience
  • Ensure reliable, high-performing applications by detecting problems faster and prioritizing issues based on service levels and impacted users
  • Improve time to market with new applications, features, and technologies, such as virtualization, acceleration, and cloud-based services


APM 2.0 products enable you to manage application performance by leading with real-user activity monitoring. The following are some of the top capabilities they provide to help you achieve your business objectives.

Visibility into real users and end-user-driven diagnostics

  • APM 2.0 solutions provide visibility into end-to-end application performance as experienced by real end users and help application support focus on the critical issues affecting those users.


The dashboard shown in Figure 1, for example, provides real-time visibility into application performance as experienced by users.
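To make the idea concrete, here is a minimal sketch of the kind of rollup behind such a dashboard: grouping real-user beacons by page and computing a 95th-percentile load time. The class and field names are made up for illustration and are not any vendor's API.

    // Hypothetical rollup of real-user beacons into per-page percentiles.
    // Class and field names are illustrative, not any vendor's API.
    import java.util.*;
    import java.util.stream.Collectors;

    public class RumRollup {
        record Beacon(String page, long loadTimeMs) {}

        // Nearest-rank 95th percentile of observed load times.
        static long p95(List<Long> timesMs) {
            List<Long> sorted = timesMs.stream().sorted().collect(Collectors.toList());
            int idx = (int) Math.ceil(0.95 * sorted.size()) - 1;
            return sorted.get(Math.max(idx, 0));
        }

        public static void main(String[] args) {
            List<Beacon> beacons = List.of(
                new Beacon("/checkout", 820), new Beacon("/checkout", 2400),
                new Beacon("/search", 310),   new Beacon("/search", 450));

            // Group samples by page, then report p95 and sample count per page.
            Map<String, List<Long>> byPage = beacons.stream()
                .collect(Collectors.groupingBy(Beacon::page,
                         Collectors.mapping(Beacon::loadTimeMs, Collectors.toList())));

            byPage.forEach((page, times) ->
                System.out.printf("%s p95=%d ms samples=%d%n", page, p95(times), times.size()));
        }
    }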





  • As an application owner, you probably care about which users are impacted, which pages they are navigating, and what kinds of errors they are getting. You want your APM product to improve MTTR by identifying what is causing the latency or failure, e.g., the network, a load balancer, an application delivery network (ADN) such as Akamai, SSL, or the application tier itself. Figure 2 shows a specific user session, the pages the user navigated, and identifies the application tier as the cause.


  • The “details” link in Figure 2 allows application support personnel to drill down further into which application tier is the culprit for the slow or failed transaction, in the context of the specific user. This allows application support personnel to trace an end-user request down to the line of code.

Ease of use and superior time-to-value

You want a product that is simple for your application support / operations team to use.

  • A modern APM solution does not require manual definition of instrumentation policies.
  • It should not require manual changes such as JavaScript injection to gain visibility into the end user.
  • APM 2.0 tools provide the ability to drill down from the end user to deep-dive diagnostics, and to drill up from deep-dive data to identify the impacted user and the transaction context, without manual correlation or jumping between consoles.
  • The agent install is typically a 5-10 minute process with modern APM deep-dive tools.
  • The APM 2.0 deep-dive solution automatically detects application servers, business transactions, frameworks, and so on (see the sketch after this list).
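As a rough illustration of why the install can be that quick, the sketch below shows the general shape of a Java deep-dive agent: it attaches with a single -javaagent JVM flag and, in this toy version, merely logs classes from a few well-known framework packages as they load. A real agent would rewrite bytecode at that point; the package list and jar name here are assumptions, not any vendor's implementation.

    // ToyApmAgent.java - packaged into a jar whose MANIFEST declares
    //   Premain-Class: ToyApmAgent
    // and attached with:  java -javaagent:toy-apm-agent.jar -jar yourapp.jar
    // This toy only logs framework classes as they load, to illustrate
    // "automatic detection"; it does not modify any bytecode.
    import java.lang.instrument.ClassFileTransformer;
    import java.lang.instrument.Instrumentation;
    import java.security.ProtectionDomain;

    public class ToyApmAgent {
        public static void premain(String agentArgs, Instrumentation inst) {
            inst.addTransformer(new ClassFileTransformer() {
                @Override
                public byte[] transform(ClassLoader loader, String className, Class<?> cls,
                                        ProtectionDomain pd, byte[] bytes) {
                    if (className != null &&
                        (className.startsWith("org/springframework/") ||
                         className.startsWith("org/hibernate/") ||
                         className.startsWith("javax/servlet/"))) {
                        System.out.println("[toy-apm] detected framework class: " + className);
                    }
                    return null; // null means "leave the class bytes unchanged"
                }
            });
        }
    }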


Figure 3 shows a specific user transaction request and its latencies by tier. It also shows the SQL statements executed and their latencies.
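For readers who want to picture where a per-SQL latency number comes from, here is a minimal, hand-rolled sketch that times a JDBC query and tags it with a business transaction name. It assumes an in-memory H2 database on the classpath purely for the demo; an APM agent captures this automatically through instrumentation rather than code changes.

    // Hand-rolled SQL timing for illustration only; an APM agent captures this
    // automatically. Assumes the H2 driver is on the classpath for the demo.
    import java.sql.*;

    public class SqlTiming {
        static void timedQuery(Connection conn, String txn, String sql) throws SQLException {
            long start = System.nanoTime();
            try (PreparedStatement ps = conn.prepareStatement(sql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) { /* consume rows */ }
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("[%s] %s took %d ms%n", txn, sql, elapsedMs);
            }
        }

        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
                timedQuery(conn, "checkout", "SELECT 1");
            }
        }
    }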




Suitable for production deployment

  • The real-user monitoring tool should be non-invasive and should not add overhead to application response time.
  • You should be able to deploy an always-on, deep-dive monitoring and diagnostics solution for your production enterprise and cloud-based applications (see the sketch after this list).
  • It should work in an agile environment without requiring new instrumentation policies to be configured for each application release.
  • It should scale to large production deployments of thousands of application servers.
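One common way to keep always-on, deep-dive collection affordable at that scale is to record full call-tree detail for only a sampled fraction of requests and lightweight timings for the rest. The sketch below shows the idea; the 1% rate is an arbitrary example, not a vendor default.

    import java.util.concurrent.ThreadLocalRandom;

    // Illustrative sampling gate: deep-dive only a fraction of requests so
    // "always-on" collection stays cheap. The rate shown is arbitrary.
    public class DeepDiveSampler {
        private final double deepDiveRate;

        public DeepDiveSampler(double deepDiveRate) {
            this.deepDiveRate = deepDiveRate;
        }

        // Decide per request whether to record a full call tree or just timings.
        public boolean shouldDeepDive() {
            return ThreadLocalRandom.current().nextDouble() < deepDiveRate;
        }

        public static void main(String[] args) {
            DeepDiveSampler sampler = new DeepDiveSampler(0.01); // deep-dive ~1% of requests
            int deep = 0;
            for (int i = 0; i < 100_000; i++) {
                if (sampler.shouldDeepDive()) deep++;
            }
            System.out.println("deep-dived " + deep + " of 100000 requests");
        }
    }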


Operations-ready product that enables DevOps collaboration

APM 1.0 products were originally built for developers, and hence they were not very intuitive for operations use. APM 2.0 products are operations-friendly. You would also expect some of them to enable DevOps collaboration through intelligent escalation to development.

  • Most application support personnel do not know which frameworks or application technologies an application uses. The majority of deep-dive tools on the market jump too quickly from a transaction view to the line of code, and thus provide little value to the operations team.


For example, Figure 4 shows the transaction breakdown by the specific technologies the transaction uses. It also provides baselines for the different tiers, along with system resource usage per tier, to support intelligent decisions. Figure 3 shows an application flow map for a specific transaction and the time spent in each SQL statement or remote web service call, without having to drill down to the line of code.



  • There are many instances where the operations team needs to escalate problems to developers. The tool should allow application support personnel to escalate to Tier 3/development for diagnostics by sending a direct link to the diagnostic instance. However, in many organizations developers do not have access to the production environment; as shown in Figure 5, the solution from BMC allows exporting the diagnostic call tree, with latencies, parameters, etc., in HTML format. A rough sketch of such an export follows.
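The sketch below shows the general idea of such an export: walking a call tree of method names and latencies and writing it out as nested HTML. It is a guess at the concept only; it is not BMC's actual export format, and the tree contents are made up.

    // Illustrative call-tree-to-HTML export; not BMC's actual format.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class CallTreeHtmlExport {
        record Node(String method, long latencyMs, List<Node> children) {}

        // Render one node and its children as a nested HTML list item.
        static String toHtml(Node node) {
            StringBuilder sb = new StringBuilder();
            sb.append("<li>").append(node.method())
              .append(" (").append(node.latencyMs()).append(" ms)");
            if (!node.children().isEmpty()) {
                sb.append("<ul>");
                node.children().forEach(child -> sb.append(toHtml(child)));
                sb.append("</ul>");
            }
            return sb.append("</li>").toString();
        }

        public static void main(String[] args) throws IOException {
            Node tree = new Node("CheckoutServlet.doPost", 930, List.of(
                new Node("OrderService.placeOrder", 870, List.of(
                    new Node("SELECT * FROM orders WHERE id = ?", 410, List.of())))));
            Files.writeString(Path.of("calltree.html"), "<ul>" + toHtml(tree) + "</ul>");
        }
    }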



Adaptive to virtualization and cloud environments

The new APM 2.0 products are purpose-built and architected for cloud and virtualized environments. 

  • The APM 2.0 product components and agents are designed to communicate over a firewall-friendly protocol, and that traffic can be encrypted and secured (see the sketch after this list).
  • They support virtualized and dynamic environments without generating a flood of false alerts.
  • They support modern cloud frameworks and Big Data platforms such as Hadoop.
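To illustrate the first point, here is a minimal sketch of an agent reporting to its collector over plain outbound HTTPS, which passes through most firewalls without opening inbound ports and gets TLS encryption for free. The endpoint URL and JSON payload are hypothetical.

    // Sketch of a firewall-friendly agent beacon: outbound HTTPS only.
    // The endpoint and payload shape are hypothetical.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class AgentBeacon {
        public static void main(String[] args) throws Exception {
            String payload = "{\"agent\":\"web-01\",\"avgResponseMs\":412}";
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://apm.example.com/ingest")) // hypothetical collector
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("collector replied: " + response.statusCode());
        }
    }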



An APM 2.0 solution provides the functionality you need to manage your applications, helping you exceed business expectations and increase customer loyalty. These tools help improve time to market, and they give you an understanding of how application performance affects user behavior and how that behavior impacts the bottom line. You can leverage an APM 2.0 solution such as BMC Application Performance Management to improve your application performance and meet your business objectives.


More Stories By Debu Panda

Debu Panda is a Director of Product Management at Oracle Corporation. He is lead author of the EJB 3 in Action (Manning Publications) and Middleware Management (Packt). He has more than 20 years of experience in the IT industry and has published numerous articles on enterprise Java technologies and has presented at many conferences. Debu maintains an active blog on enterprise Java at http://debupanda.blogspot.com.
