Reach Higher Scalability from Your Web Applications

Tuning vs. adding hardware

When the performance of a web application starts to degrade, IT managers often jump to the conclusion that more hardware is required. That extra infrastructure can be expensive and may not even solve the underlying problem. Let's talk about tuning as a path to higher scalability. Whether tuning is a science or an art is debatable, but we can all agree on the goal: alleviate bottlenecks so that the web application can scale to a higher workload. Tuning is usually more efficient and cost-effective than adding more hardware to your deployment. However, you need the right tools and expertise in place to tune an environment successfully.

Hardware servers are restricted to their physical resources (I/O, memory, CPU, etc.). With OS tuning, there are certain buffers and network configurations you can open up to increase overall capacity. Software servers are even more configurable because they manage their own thread pools, caches, memory, connection pools, and so on. All of these software resources ultimately run on a hardware server, but tuning software servers lets them take more, or less, advantage of the hardware's resources.
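
As a minimal sketch of the kind of software-server tunables described above, consider a bounded worker thread pool in Java. The class and the specific numbers here are hypothetical illustrations, not any particular server's configuration; real servers expose equivalent knobs through their config files.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TunableServerResources {
    public static void main(String[] args) {
        // Hypothetical tunables -- in a real server these come from config files.
        int coreThreads  = 50;   // worker threads kept alive at all times
        int maxThreads   = 200;  // ceiling on concurrent request processing
        int requestQueue = 500;  // requests buffered before rejection

        // A bounded worker pool: raising these values lets the software
        // consume more of the underlying hardware; lowering them protects it.
        ThreadPoolExecutor workers = new ThreadPoolExecutor(
                coreThreads, maxThreads,
                60, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(requestQueue));

        System.out.printf("worker pool: core=%d max=%d queue=%d%n",
                workers.getCorePoolSize(),
                workers.getMaximumPoolSize(),
                requestQueue);
        workers.shutdown();
    }
}
```

Each of these values trades hardware consumption against protection from overload, which is exactly what tuning adjusts.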

Tuning can allow or restrict throughput to an environment. Imagine a funnel with the large opening on top and the small opening on the bottom. A properly tuned environment has more processing capacity on the front end (the large end of the funnel) and less dedicated processing at the back end. The reason: the front end of a deployment (the web server) typically does lightweight processing, while the back end (the database) does CPU-intensive work against shared resources. You want to process as many requests as possible on the front end without flooding the back end.
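
Here is a minimal Java sketch of the funnel idea, under hypothetical sizing: many front-end worker threads are allowed, but a semaphore caps how many requests may touch the shared back end at once.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class FunnelDemo {
    // Wide top of the funnel: many lightweight front-end workers.
    static final ExecutorService FRONT_END = Executors.newFixedThreadPool(100);

    // Narrow bottom: only a few requests may reach the shared back end at once.
    static final Semaphore BACK_END_SLOTS = new Semaphore(10);

    static void handleRequest() {
        // Lightweight front-end work (parsing, templating) happens freely here.
        try {
            BACK_END_SLOTS.acquire();   // wait for a back-end slot
            try {
                Thread.sleep(5);        // stand-in for a database call
            } finally {
                BACK_END_SLOTS.release();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            FRONT_END.submit(FunnelDemo::handleRequest);
        }
        FRONT_END.shutdown();
    }
}
```

The 100:10 ratio is the funnel: front-end capacity absorbs bursts while the semaphore keeps the database from being flooded.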

Most people don't know where to begin in the tuning process, so here are some tips to point you in the right direction. First, you will need a load testing solution that generates a realistic load, with embedded monitoring for your entire deployment. For starters, don't change a single thing. Simply identify and document every configurable setting for each software server in the environment. Be especially aware of the settings that affect throughput by giving the server more or less processing power: for example, a web server's worker threads, or a database or application server's memory, threads, pools, and buffers. Once you have all the settings documented, start executing your load tests and monitoring all the configurable resources. Run the tests up to the point of degradation; don't try to identify saturation points while the test is running, as that is far too complicated. Once degradation occurs, stop the test and graph your monitors. Identify the first major bottleneck, make one change to alleviate it, and rerun your test.
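
To make the "generate load and record response times" step concrete, here is a toy load generator in Java using the JDK's built-in HttpClient (Java 11+). It is a sketch only, not a substitute for a full load testing product with embedded monitoring; the target URL and user counts are made-up assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical target -- substitute your own environment.
        URI target = URI.create("http://localhost:8080/");
        int virtualUsers = 20;
        int requestsPerUser = 50;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(target).GET().build();
        List<Long> latenciesMs = new CopyOnWriteArrayList<>();

        // Each thread plays one virtual user issuing sequential requests.
        ExecutorService users = Executors.newFixedThreadPool(virtualUsers);
        for (int u = 0; u < virtualUsers; u++) {
            users.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    long start = System.nanoTime();
                    try {
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                        latenciesMs.add((System.nanoTime() - start) / 1_000_000);
                    } catch (Exception e) {
                        // a real test would count errors separately
                    }
                }
            });
        }
        users.shutdown();
        users.awaitTermination(5, TimeUnit.MINUTES);

        latenciesMs.stream().mapToLong(Long::longValue).average()
                .ifPresent(avg -> System.out.printf(
                        "avg response: %.1f ms over %d requests%n",
                        avg, latenciesMs.size()));
    }
}
```

Ramp the user count up across runs and watch where the average latency starts to climb; that is the degradation point the article says to graph and diagnose offline.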

Keep to a methodical approach and change only one variable at a time. It also helps greatly if the tool has a built-in comparison engine that visually shows how that one change affected the scalability of your test. Use the load test to validate every change. When you are tuning, you essentially need to watch two important statistics: throughput and response time. This is where the art comes in. You need to tune the environment for maximum capacity without allocating more "work" than the underlying hardware can handle. If you make this mistake, and we all do, you'll see response times increase. Remember, infrastructure has limits; nothing is infinite. Tune until your workload is using most of the hardware resources while still delivering acceptable response times.
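
As a hedged illustration of those two statistics, this small Java sketch derives throughput and a 90th-percentile response time from a list of recorded latencies; the sample numbers are invented for the example.

```java
import java.util.Arrays;

public class TuningStats {
    public static void main(String[] args) {
        // Hypothetical sample: request latencies (ms) from a 10-second window.
        long[] latenciesMs = {42, 38, 51, 47, 250, 40, 44, 39, 310, 45};
        double windowSeconds = 10.0;

        // Throughput: completed requests per second in the window.
        double throughput = latenciesMs.length / windowSeconds;

        // 90th-percentile response time: sort and index at the 90% mark.
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        long p90 = sorted[(int) Math.ceil(0.9 * sorted.length) - 1];

        System.out.printf("throughput: %.1f req/s, p90 response: %d ms%n",
                throughput, p90);
        // Tune until throughput rises without the percentile response
        // time creeping past your acceptable threshold.
    }
}
```

A change that raises throughput while the percentile response time stays flat is a win; one that raises both is the over-allocation mistake described above.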

Tuning can save time, save money, and even save the planet (greener deployments use less electricity). The costs associated with more hardware are not just dollars; they also include administration and maintenance. Each new piece of equipment is another potential point of failure (we've all seen machines go down for myriad reasons). Tuning is typically the best first approach to addressing any performance degradation. The expertise is in knowing where to look.

More Stories By Rebecca Clinard

Rebecca Clinard is a Senior Performance Engineer at Neotys, a provider of load testing software for Web applications. Previously, she worked as a web application performance engineer for Bowstreet, Fidelity Investments, Bottomline Technologies, and Timberland, companies in industries spanning retail, financial services, insurance, and manufacturing. Her expertise lies in creating realistic load tests and performance-tuning multi-tier deployments. She has been orchestrating and conducting performance tests since 2001. Clinard graduated from the University of New Hampshire with a BS and also holds a UNIX certificate from Worcester Polytechnic Institute.
