Dynamic Scaling in Windows Azure Revisited

Third-party tool support for dynamic scaling in Windows Azure

Auto Scaling Windows Azure
My last article, comparing the dynamic scaling features of Windows Azure and Amazon EC2, mentioned that both EC2 and Azure provide an auto scaling capability. While EC2 provides a backbone and framework for auto scaling, Azure provides an API that can be extended, and we are already seeing several third-party providers delivering auto scaling tools for Azure.

One such third-party provider, Paraleap Technologies, has recently released a product called AzureWatch that provides a SaaS-based approach to scaling Windows Azure compute roles. Some observations about the product follow; the company also offers a free 14-day trial, so the points below can be verified in a live environment.

The following aspects of the product were observed based on the technical documentation available from the vendor.

SaaS-Based Solution
The core of AzureWatch's data collection, aggregation, and decision-making process is available as a SaaS-based solution in the form of the AzureWatch Service. The AzureWatch Service aggregates and analyzes performance metrics and matches these metrics against user-defined rules on a regular, configurable basis. When a rule produces a "hit," a scaling action occurs.

However, some glue or controlling components are installed on the on-premises systems in the form of the AzureWatch Monitor and Control Panel. The Monitor is responsible for sending raw metrics to the SaaS-based systems and for executing scaling actions. The Control Panel is a simple but powerful configuration and monitoring utility that allows you to configure custom rules and to monitor your instances.

This approach is useful because much of the overhead of storing and maintaining the metrics data is kept away from the enterprise; only a lightweight component, in the form of the AzureWatch Monitor and Control Panel, needs to be installed on-premises.

The following diagram, courtesy of the vendor, explains the solution.
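
To make the division of labor concrete, here is a minimal Python sketch of what an on-premises monitor loop of this kind might look like. AzureWatch's actual wire protocol is not described in the available documentation, so the endpoint URLs, payload fields, and function names below are hypothetical and purely illustrative.

```python
# Hypothetical sketch of an on-premises monitor: push raw metrics to the
# SaaS service, then execute any scaling actions the service has decided on.
# The base URL, routes, and payload fields are invented for illustration.
import time
import requests

SAAS_BASE = "https://api.example-azurewatch.test"  # hypothetical endpoint
ROLE_NAME = "WebRole1"

def collect_raw_metrics():
    # A real monitor would read Windows performance counters for each
    # role instance; a constant sample stands in for that here.
    return [{"instance": "WebRole1_IN_0", "cpu_percent": 72.5}]

def apply_scaling_action(action):
    # A real monitor would call the Windows Azure Service Management API
    # to change the role's instance count; here we just log the decision.
    print(f"scale {action['role']} to {action['target_instances']} instances")

def run_monitor():
    while True:
        # Raw metrics go up to the SaaS side, where aggregation,
        # rule matching, and the scaling decision take place.
        requests.post(f"{SAAS_BASE}/metrics/{ROLE_NAME}",
                      json=collect_raw_metrics())
        # Any decisions come back down and are executed locally.
        for action in requests.get(f"{SAAS_BASE}/actions/{ROLE_NAME}").json():
            apply_scaling_action(action)
        time.sleep(60)  # the evaluation interval is configurable
```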

Rules Engine-Based Interface
As we have seen, auto scaling is handled by proactive monitoring: the AzureWatch Monitor gathers the raw metrics, the AzureWatch Service performs the analysis, and the scaling action is taken based on rules configured using an easy-to-use GUI tool.

For each of the roles in your Azure subscription, AzureWatch provides simple predefined rules that can be tailored further. The two sample rules offered rely upon calculating a 60-minute average CPU usage across all instances within a role. The Rule Edit screen is simple yet powerful: you can specify what formula needs to be evaluated, what happens when the evaluation returns TRUE, and what time of day evaluation should be restricted to.
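
To illustrate the shape of such a rule, here is a hedged Python sketch of a formula evaluated against a 60-minute CPU average, with an action on TRUE and a time-of-day restriction. The field names, thresholds, and action labels are invented for illustration and are not AzureWatch's actual rule schema.

```python
# Illustrative rule: formula over aggregated metrics, an action on TRUE,
# and a time-of-day window. All names and values are hypothetical.
from datetime import datetime, time
from statistics import mean

rule = {
    "formula": lambda m: m["avg_cpu_60min"] > 80,  # evaluated each cycle
    "action": "scale_up_by_1",                     # what happens on TRUE
    "active_from": time(8, 0),                     # restrict evaluation
    "active_until": time(20, 0),                   # to business hours
}

def avg_cpu_60min(samples):
    # samples: per-instance CPU readings gathered over the last 60 minutes
    return mean(s["cpu_percent"] for s in samples)

def evaluate(rule, samples, now=None):
    now = (now or datetime.now()).time()
    if not (rule["active_from"] <= now <= rule["active_until"]):
        return None  # outside the rule's time-of-day window
    metrics = {"avg_cpu_60min": avg_cpu_60min(samples)}
    return rule["action"] if rule["formula"](metrics) else None

samples = [{"cpu_percent": 85.0}, {"cpu_percent": 91.2}]  # two instances
print(evaluate(rule, samples))  # -> "scale_up_by_1" inside the window
```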

Dashboards & Reports
The success of a monitoring tool is measured by the quality of its dashboards and reports, as metrics data in raw form is very difficult to interpret. Dashboards in AzureWatch provide the following information.

  • Instance Count
  • Instance History
  • Metrics Display based on Windows Counters

Proactive Monitoring
Like traditional data center monitoring tools, AzureWatch has built-in notification capabilities, so emails are sent when scaling conditions occur. AzureWatch can track active, unresponsive, and other instance counts for you, and you can create rules that trigger either scaling actions or notification emails based upon conditions that rely on those instance counts.
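
As a rough illustration, the sketch below shows how an instance-count rule might either send a notification email or return a scaling action. The thresholds, addresses, and field names are assumptions for the example, not AzureWatch's actual behavior.

```python
# Illustrative instance-count rule: notify on unresponsive instances,
# scale up when the active count falls below a floor. Values are invented.
import smtplib
from email.message import EmailMessage

def notify(subject, body, to="ops@example.com"):
    msg = EmailMessage()
    msg["Subject"], msg["To"], msg["From"] = subject, to, "watch@example.com"
    msg.set_content(body)
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)

def check_instance_counts(counts):
    # counts: {"active": int, "unresponsive": int}
    if counts["unresponsive"] > 0:
        notify("Unresponsive instances",
               f"{counts['unresponsive']} instance(s) not responding")
    if counts["active"] < 2:
        return "scale_up_by_1"  # keep at least two active instances
    return None

print(check_instance_counts({"active": 1, "unresponsive": 0}))
# -> "scale_up_by_1"
```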

Nice to Have
Currently the metrics watch service needs to be watched carefully, and metrics can become stale if the service stops for some reason. It would be nice to have more safeguards against metrics becoming stale.
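
One simple safeguard of that kind would be a freshness check that rejects aggregates whose newest sample is too old. The following Python sketch is purely illustrative and is not a feature of the product.

```python
# Illustrative freshness guard: treat an aggregate as unusable when its
# most recent sample is older than a threshold. Names are hypothetical.
from datetime import datetime, timedelta

MAX_AGE = timedelta(minutes=5)

def metrics_are_fresh(samples, now=None):
    # samples: list of {"timestamp": datetime, ...} readings
    if not samples:
        return False
    now = now or datetime.utcnow()
    newest = max(s["timestamp"] for s in samples)
    return (now - newest) <= MAX_AGE
```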

If new packages are installed, they may be missed from monitoring if the setup instructions are not followed.

An option to set the metrics manually for special cases would provide more control, similar to the way Oracle and other databases handle stale statistics.

Summary
Overall, AzureWatch and similar third-party tools will make cloud deployments truly fruitful, because they improve a core tenet of cloud-based deployment: elasticity and dynamic scaling.

More Stories By Srinivasan Sundara Rajan

Highly passionate about utilizing digital technologies to enable the next-generation enterprise. Believes in enterprise transformation through the natives (Cloud Native and Mobile Native).
