Monitoring Web Applications Using Hyperic

Monitoring enables us to anticipate the issues

Monitoring applications is an important aspect of the IT industry. Organizations invest heavily in setting up and maintaining their IT infrastructure, whether cloud-based or physical. To ensure maximum performance across the business, problems have to be identified and resolved before they affect users. Monitoring lets us anticipate issues and resolve them before they surface as problems for users, but that requires visibility into the IT infrastructure so that the origin of an issue can be identified. For example, by monitoring an application proactively we can determine whether its poor performance stems from a problem at the resource level, at the application level, or elsewhere, and take action before it starts impacting users. This gives us more control over and confidence in the application, and hence helps us meet service quality requirements.

Performance issues can arise from problems at any level - machine, application, or transaction. We should be able to monitor all of these levels so that the exact point of origin can be located. Most current solutions provide monitoring at only one level; for example, some cloud-based infrastructure solutions monitor at the machine level only. But the problem need not be at the machine level: it could lie at the application level (an increased thread count, a growing number of active connections to scarce resources, and so on) or at the transaction level (response time, request rate, etc.). Monitoring across the different levels therefore gives us the visibility to know where the problem lies. Once the origin is known, we can take corrective action before it impacts the user experience.

We need a solution that lets us dig out the actual problem, and that is what this article deals with. We are going to see how to monitor different metrics (CPU utilization, thread count per minute, response time) at different levels, using Hyperic as our monitoring tool. Hyperic is systems monitoring, server monitoring, and IT management software based on a server-agent model. We will also take a look at the basic architecture of Hyperic.

Use-case scenario - Problem statement

Consider a web application that becomes popular, so the workload on it increases. The application's performance degrades, which results in slow-loading web pages. There can be many reasons for the degradation: CPU utilization exceeding a certain level, poor application logic, too few resources in the infrastructure to sustain the load, and many more. As discussed, the issue could be at any level. Availability and performance define the end user's experience, and this makes it necessary to monitor applications at different levels so that any unexpected behaviour can be avoided by taking corrective measures in time.

Deep dive into the solution
One of the most common complaints heard from end users is ‘the website responds very slowly'. Poor application performance can drive users away from a web application and leaves a bad impression on them. To find the reason for poor performance, we have to measure metrics at different levels to dig out the actual issue. In this article we will programmatically monitor CPU utilization at the machine level, the used thread count per minute at the application level, and response time at the transaction level.
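Before bringing Hyperic into the picture, it helps to be concrete about what an application-level metric such as thread count actually is. The short standalone Java sketch below is purely illustrative (it is not part of Hyperic or HQAPI); it prints the live thread count of the JVM it runs in using the standard ThreadMXBean API, which is the kind of figure an application-level monitor samples over time:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Illustrative only: prints the live and peak thread counts of the current JVM.
// An application-level monitor samples this kind of value periodically.
public class ThreadCountSample {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Peak threads: " + threads.getPeakThreadCount());
    }
}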

Monitoring requires a reliable tool that can track the metrics periodically and keep the application administrators updated. There are plenty of monitoring tools on the market with varying features; some of the most commonly used are Hyperic, Nagios, Cacti, Ganglia, etc. Most of them present the metric values visually, which helps administrators monitor and analyze any unexpected behaviour of the application.

Hyperic is one of the leading monitoring tools in this arena. It comes in two flavors - open source (under a GPL license) and an enterprise edition (commercial license) - and is based on a server-agent model. The following is a short description of Hyperic's key components:

  1. HQ Agent: The HQ agent is responsible for collecting data on the machine on which it is installed and sending it to the server. Each machine that is to be monitored should have an HQ agent running.
  2. HQ Server: The HQ server consolidates the data sent by the agents installed on different machines and persists it in the Hyperic database. The HQ server maintains the inventory, which keeps the information about all the resources and their monitored values.
  3. HQ Portal: The HQ portal is a graphical user interface that gives complete information about the resources being monitored. Users can add and remove resources, perform control actions, and set alarms from this portal. It is highly customizable, with provision for choosing which metrics are collected and changing the collection intervals.
  4. HQ Web Services API: Hyperic has a Java-based API called HQAPI that allows resources and their metrics to be accessed programmatically.
  5. HQ Plugins: Hyperic has its own set of resource plugins for collecting metrics and performing other operations. It is also possible to build a new plugin to support additional functionality.

Hyperic manages a large number of metrics. In a virtual environment, the Hyperic agents are installed on the virtual machines. We will use HQAPI to access the resources.

Consider a web application deployed on Apache Tomcat as the application server. We will deploy the well-known JPetStore web portal on the application server and then monitor it.

Deploy the web application on the application server
Copy the jpetstore application into ${Apache_Tomcat_Installation_dir}/webapps and start Tomcat. Then try accessing http://localhost:8080/jpetstore; if the JPetStore home page appears, the application has been deployed successfully.
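The same check can be done from code. The small sketch below simply requests the home page and reports the HTTP status; it assumes the default Tomcat port 8080 and the jpetstore context path used above:

import java.net.HttpURLConnection;
import java.net.URL;

// Minimal deployment check: request the JPetStore home page and report the HTTP status code.
public class DeploymentCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/jpetstore");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode();
        System.out.println("HTTP status: " + status
                + (status == 200 ? " - application deployed successfully" : " - check the deployment"));
        conn.disconnect();
    }
}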

Resources are modelled in a hierarchical fashion in the Hyperic UI console, and they can be accessed programmatically in the same hierarchy using the HQ Java API. The topmost resource is the platform, and beneath it are the running services that are allowed to be monitored on that machine. Let's see how to monitor different metrics at different levels using HQAPI.
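As a starting point, the sketch below shows the general shape such HQAPI code takes: connect to the HQ Server, walk the platforms at the top of the resource hierarchy, and read back the data points of a CPU-related metric for the last hour. The class and method names used here (HQApi, ResourceApi, MetricApi, MetricDataApi, getPlatformResources, getMetrics, getData) and the connection parameters are assumptions based on the open-source HQAPI client and should be verified against the HQAPI javadocs for the Hyperic version in use:

import java.util.List;

import org.hyperic.hq.hqapi1.HQApi;
import org.hyperic.hq.hqapi1.MetricApi;
import org.hyperic.hq.hqapi1.MetricDataApi;
import org.hyperic.hq.hqapi1.ResourceApi;
import org.hyperic.hq.hqapi1.types.DataPoint;
import org.hyperic.hq.hqapi1.types.Metric;
import org.hyperic.hq.hqapi1.types.Resource;

// Hedged sketch of reading a metric through HQAPI; verify names against the HQAPI javadocs.
public class HypericMetricReader {
    public static void main(String[] args) throws Exception {
        // Connect to the HQ Server; host, port, SSL flag, user and password are example values.
        HQApi api = new HQApi("localhost", 7080, false, "hqadmin", "hqadmin");

        ResourceApi resourceApi = api.getResourceApi();
        MetricApi metricApi = api.getMetricApi();
        MetricDataApi dataApi = api.getMetricDataApi();

        // Platforms sit at the top of the Hyperic resource hierarchy.
        List<Resource> platforms = resourceApi.getPlatformResources(false, true).getResource();

        long end = System.currentTimeMillis();
        long start = end - 60L * 60L * 1000L; // last hour

        for (Resource platform : platforms) {
            // List the metrics enabled on this platform and pick CPU-related ones.
            for (Metric metric : metricApi.getMetrics(platform).getMetric()) {
                if (metric.getName().contains("CPU")) {
                    for (DataPoint dp : dataApi.getData(metric, start, end)
                                               .getMetricData().getDataPoint()) {
                        System.out.println(platform.getName() + " | " + metric.getName()
                                + " @ " + dp.getTimestamp() + " = " + dp.getValue());
                    }
                }
            }
        }
    }
}

The same pattern would extend to the other levels discussed earlier: locate the Tomcat server resource beneath the platform for an application-level metric such as thread count, or a service resource of the JPetStore application for a transaction-level metric such as response time, then pick the metric of interest and read its data points.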

More Stories By Akansha Jain

Akansha works as a Technology Analyst at SETLabs, the R&D division of Infosys Technologies Ltd. She has close to four years of experience in the development of cloud computing, Java and Java EE applications, Eclipse Plugin Architecture, Software Factory, Web 2.0, etc. Her earlier publications are available on devx.com.
