When Configuration Settings from Development Wreak Havoc in Production
By Asad Ali

As applications are promoted from the development environment to the CI or QA environment and then into production, it is very common for configuration settings to be changed along the way. For example, database connection pool sizes are typically lower in the development environment than in the QA/load-testing environment. The primary reason for these configuration differences is to tune application performance for each environment. Occasionally, however, application code is mistakenly promoted into production without these settings being changed, and that can wreak performance havoc in the production environment. This blog describes one such scenario.
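
Purely for illustration (the APP_ENV variable, file names, and values below are assumptions, not taken from the prospect's application), environment-specific pool sizing often boils down to a per-environment properties lookup:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class PoolConfig {

    /**
     * Loads the connection-pool size for the current environment.
     * The APP_ENV variable and the per-environment property files
     * (pool-dev.properties, pool-prod.properties) are illustrative only.
     */
    public static int maxPoolSize() throws IOException {
        String env = System.getenv().getOrDefault("APP_ENV", "dev");
        Properties props = new Properties();
        try (InputStream in = PoolConfig.class
                .getResourceAsStream("/pool-" + env + ".properties")) {
            if (in != null) {
                props.load(in);
            }
        }
        // e.g. pool.maxSize=5 in dev, pool.maxSize=50 in prod
        return Integer.parseInt(props.getProperty("pool.maxSize", "5"));
    }
}
```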

During their proof of concept, a prospect asked us to identify and resolve an issue they were observing in the production environment. They had just promoted a newer version of a critical application to production, and soon after the promotion they started to see a significant increase in response time when end users tried to log in to the web application. To diagnose the issue, the prospect injected our agents into the production JVMs that were exhibiting the problem. With Dynatrace agents injected, we observed both a high response time for login and that most of the time for the login request was spent in class loading.

Additionally, a breakdown of the response time showed that most of the time for the login web request (92%) was spent in synchronization.

The response time breakdown of the login request clearly shows that 92% of the time is spent in synchronization.

With the response time breakdown showing that the largest share of time was spent in synchronization, we took a thread dump on the JVM where the login request was being processed to gain insight into the thread-locking issue. The thread dump showed 67 threads that were blocked in the JVM.

67 Threads were blocked on the JVM.

Locking Hotspots view shows the breakdown of the blocked threads

Further analysis of the thread dump contents showed that most of the blocked threads were waiting for a resource (the CompoundClassLoader) held by a single running thread (Thread Id 5570678).

Thread Id 5570678 owned the monitor on which the other threads were blocked.

CompoundClassLoader is held by Thread 5570678.

All blocked threads are waiting on CompoundClassLoader.
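
The thread dump in this case came from Dynatrace, but the same blocked-thread/lock-owner pattern can also be surfaced with the standard java.lang.management API; the following is only a minimal sketch, not the tooling used here:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BlockedThreadReport {

    /** Prints every BLOCKED thread together with the lock and lock owner it is waiting on. */
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // true, true => include locked monitors and ownable synchronizers
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            if (info.getThreadState() == Thread.State.BLOCKED) {
                System.out.printf("%s is blocked on %s held by %s (id %d)%n",
                        info.getThreadName(),
                        info.getLockName(),       // e.g. the monitor of a class loader object
                        info.getLockOwnerName(),
                        info.getLockOwnerId());
            }
        }
    }
}
```

In this incident, the lock name would correspond to the CompoundClassLoader monitor and the lock owner to thread 5570678, matching what the screenshots above show.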

The stack trace for the running thread showed that it was trying to load a class from the file system. Examining the full thread dump showed that the threads on this JVM were creating a new Facelet every time a web request was received.

getFacelet being called every time a request is processed by the thread.

While it is perfectly normal for a JVM to spend some time loading classes when it is initially started, it is NOT normal (or good for performance) for class loading to continue even after the JVM has warmed up and most classes are already loaded.

The thread stack showed a call to the getFacelet(java.net.URL) method in the com.sun.facelets.impl.DefaultFaceletFactory class. A review of the source of this class showed that the method reloads and recompiles the Facelet, triggering class loading, whenever needsToBeRefreshed() returns true.

Code snippet of com.sun.facelets.impl.DefaultFaceletFactory class.
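
The screenshot of the snippet is not reproduced here. As a simplified sketch of the caching logic described above (not the verbatim Facelets source), the factory behaves roughly like this:

```java
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

/** Simplified sketch of the factory's caching path; the real class differs in detail. */
abstract class FaceletFactorySketch<F> {

    private final Map<String, F> facelets = new HashMap<>();

    public F getFacelet(URL url) {
        String key = url.toString();
        F facelet = facelets.get(key);
        // A cache miss, or needsToBeRefreshed() returning true, forces the view
        // to be re-parsed and re-compiled -- which is where the class loading
        // observed above comes from.
        if (facelet == null || needsToBeRefreshed(facelet)) {
            facelet = createFacelet(url);
            facelets.put(key, facelet);
        }
        return facelet;
    }

    /** Parses and compiles the Facelet at the given URL. */
    protected abstract F createFacelet(URL url);

    /** Decides whether a cached Facelet must be rebuilt (sketched after the next figure caption). */
    protected abstract boolean needsToBeRefreshed(F facelet);
}
```

The important part is the second branch of the if statement: when needsToBeRefreshed() always answers true, the cache is effectively disabled.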

Finally, the code for needsToBeRefreshed() clearly shows that it returns true if the refresh period is set to 0.

needsToBeRefreshed() method shows that if the refresh period is set to 0, the class is refreshed every time.
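
Again as a simplified sketch rather than the verbatim source, the decision essentially reduces to the configured refresh period (the real method also checks the resource's last-modified time for positive periods):

```java
/**
 * Simplified sketch of the refresh decision; the real method additionally
 * compares the Facelet file's last-modified time when the period is positive.
 */
final class RefreshCheckSketch {

    static boolean needsToBeRefreshed(long refreshPeriodMillis, long faceletCreateTimeMillis) {
        if (refreshPeriodMillis == 0) {
            return true;   // development setting: rebuild the Facelet on every request
        }
        if (refreshPeriodMillis < 0) {
            return false;  // e.g. -1: never rebuild once compiled
        }
        // Otherwise rebuild only after the configured period has elapsed.
        return System.currentTimeMillis() - faceletCreateTimeMillis > refreshPeriodMillis;
    }
}
```

With the period at 0 every request pays the full parse-and-compile cost, while -1 keeps the compiled view for the life of the JVM, which is exactly the trade-off described in the next paragraph.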

Facelets can effectively be precompiled by setting javax.faces.FACELETS_REFRESH_PERIOD to -1. However, once it is set to -1, JSF never re-compiles or re-parses the Facelets files and holds the entire SAX-compiled/parsed XML tree in memory.

During development, the REFRESH_PERIOD is typically set to 0 because it allows developers to keep editing Facelets files without having to restart the server. What happened at this prospect is that the application code was promoted into production with the REFRESH_PERIOD still set to 0, so every time a user tried to log in, the Facelets were forced to recompile, which in turn resulted in high response times.
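
One lightweight safeguard, shown here purely as an illustrative sketch (the APP_ENV variable and the fail-fast behavior are assumptions, not something the prospect had in place), is to refuse to start outside development when the refresh period is still 0:

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

/**
 * Illustrative startup check: refuses to boot outside development if the
 * Facelets refresh period is still set to the development value of 0.
 * The APP_ENV variable is an assumption for this sketch.
 */
@WebListener
public class RefreshPeriodGuard implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        String env = System.getenv().getOrDefault("APP_ENV", "dev");
        String period = sce.getServletContext()
                .getInitParameter("javax.faces.FACELETS_REFRESH_PERIOD");
        if (!"dev".equals(env) && "0".equals(period)) {
            throw new IllegalStateException(
                "javax.faces.FACELETS_REFRESH_PERIOD=0 (recompile on every request) "
                + "must not be used in the " + env + " environment");
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // nothing to clean up
    }
}
```

A continuous-integration job can enforce the same rule even earlier by validating the packaged deployment descriptor before the build is promoted, which is where the practice recommended in the conclusion comes in.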

Conclusion
Configuration settings that are enabled during application development are very useful because they reduce the number of times the application server has to be restarted to test code changes. However, as this example shows, it is very important to disable development-level settings as the code moves out of the development environment, because they can cause performance havoc in production. One best practice for eliminating such scenarios is to make environment-specific configuration changes part of the continuous-integration pipeline, with tools like Jenkins.

The post When configuration settings from development wreak havoc in production appeared first on about:performance.
