|By Sebastian Kruk||
|March 19, 2013 11:00 AM EDT||
When the operations team gets an alert about potential performance problems that users might be experiencing, it is usually either the infrastructure or the actual application that is causing those problems. Things get interesting when neither the ISP nor the application provider is willing to admit fault. Can we tell who is to blame? Could it be that it is neither the ISP nor the application provider?
The IT department of our customer, SerciaFood, a food production company from Sercia (names changed for commercial reasons), received complaints about the performance of one of its applications. The IT department suspected network problems while the local ISP stood firmly behind its infrastructure and blamed the solution provider.
It Is Not Our Infrastructure
The SerciaFood IT team recently tested a new application before rolling it into production. During the tests the team complained about the performance of that application; the most likely cause of the poor performance was attributed to network problems.
SerciaNet, a big name not only in Sercia but also worldwide, was delivering the network infrastructure for SerciaFood. The ISP began to monitor the network with manual traces and other techniques; the company could not, however, provide any strong evidence that it was not their issue.
The operations team at SerciaFood was appointed to look into the problem using a real user monitoring tool.
Its first observation was that network performance, i.e., the percentage of traffic that did not encounter network-related issues (Server Loss Rate, Client RTT and Errors), varied between the regions where SerciaFood services were used.
Figure 1 shows a report where both the network and application performance metrics for EMEA are good. EMEA is the most active region on the report since it is where the core business operations of SerciaFood are focused. More distant regions reported performance problems; the second most active, "third-party," region reported high Client RTT and Server Loss Rate. Client RTT is the time it takes a SYN packet sent by the server to travel from the APM probe to the client and back again. Server Loss Rate is the percentage of the total packets sent from a server that were lost and had to be retransmitted.
Figure 1: Overview of KPIs across all monitored regions
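Both KPIs reduce to simple packet accounting. The Python sketch below is a minimal illustration; the sample data, field names and values are assumptions chosen for demonstration, not the actual DCRUM data model or the measurements behind Figure 1.

```python
# Hypothetical per-region packet samples (field names are illustrative,
# not the DCRUM schema).
samples = {
    "EMEA":        {"packets_sent": 100_000, "packets_retransmitted": 120,   "rtt_ms": [35, 40, 38]},
    "Third-party": {"packets_sent": 100_000, "packets_retransmitted": 4_800, "rtt_ms": [210, 450, 380]},
}

def server_loss_rate(region: dict) -> float:
    """Percentage of server-sent packets that were lost and retransmitted."""
    return 100.0 * region["packets_retransmitted"] / region["packets_sent"]

def avg_client_rtt(region: dict) -> float:
    """Mean round-trip time (probe -> client -> probe), in milliseconds."""
    return sum(region["rtt_ms"]) / len(region["rtt_ms"])

for name, data in samples.items():
    print(f"{name}: loss={server_loss_rate(data):.2f}%  RTT={avg_client_rtt(data):.0f} ms")
```

With numbers like these, the third-party region's high loss rate and RTT stand out immediately against EMEA, which is exactly the contrast the overview report surfaces.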
How Is the Network Performance in EMEA?
The operations team decided to first confirm what was indicated in Figure 1: that the key business region, EMEA, was not affected by network problems.
Figure 2 shows a report with all areas monitored within the EMEA region. According to this report, performance is consistently good, with about 2.5 sec of operation time and no network-related problems (100% network performance) for all areas within the EMEA region.
Figure 2: Performance across all areas appears consistent with operation time at around 2.5 sec for all operations. Network Performance is good across all areas.
A drill-down into one of the user sites (Switzerland) shows that the operation time is spent almost entirely on the server side and that the network performance is good there as well (see Figure 3).
Figure 3: Operation time is spent almost entirely on the server. Network performance is good at this location.
Another drill-down, to the report with transactions executed at that site (see Figure 4), shows that although server time varies between transactions, the network time remains consistently below 400 ms. The differences in server time between transactions are a result of their different computational complexity. For example, responding to Query is likely to be more demanding than responding to Get File.
Figure 4: Server time varies across transactions between 0.5 and 4 sec; network time is consistently below 400 ms
The operations team decided to further investigate two transactions: one that should be heavy on the network (Get File) and one that might be heavy on the server side (Query). The former mostly delivers files to the client application, while the latter requires more computational power on the server to execute the query. The performance of the former is good, with an almost even split between server and network time (see Figure 5), which does not indicate any network-related problems. The operation time for the latter is spent almost exclusively on the server, with negligible network impact (see Figure 6).
Figure 5: Performance is good for the Get File transaction, which by its nature would be more impacted by network time
Figure 6: Performance is poor for the Query transaction with time spent almost exclusively on the server
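The reasoning above, splitting each transaction's operation time into server and network components and labeling it by where the time goes, can be sketched as follows. The timings and the 40% threshold are illustrative assumptions, not values taken from the SerciaFood reports.

```python
# Hypothetical per-transaction timings in seconds (illustrative only).
transactions = {
    "Get File": {"server_s": 0.45, "network_s": 0.38},  # near-even split
    "Query":    {"server_s": 3.90, "network_s": 0.15},  # server-dominated
}

def classify(t: dict, network_share_threshold: float = 0.4) -> str:
    """Label a transaction network-bound when the network's share of the
    total operation time exceeds the threshold; otherwise server-bound."""
    total = t["server_s"] + t["network_s"]
    return "network-bound" if t["network_s"] / total >= network_share_threshold else "server-bound"

for name, t in transactions.items():
    print(name, "->", classify(t))
```

A transaction that is server-bound everywhere (like Query here) points away from the network as the culprit, which is the conclusion the team drew for EMEA.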
The operations team concluded, based on the analyzed traffic in the EMEA region, that at least in that region the performance was good and that it was not affected by network infrastructure delivered by SerciaNet.
Who Is Really Affected?
The question remained: why were some users reporting performance problems? Starting from the overview report (see Figure 1), the operations team decided to drill down into the "third-party" region, the one with the lowest network performance and the highest server loss rate.
This region reported poor network performance, below 50%, and a significant network contribution to the operation time (see Figure 7).
Figure 7: Network performance is degraded at third-party locations
Figure 8 shows the report with a list of transactions for the affected user site. Although server and network time varies between transactions, the application performance for all transactions is low, down to 0% for Get File and Query transactions.
Figure 8: Server and network time varies between transactions likely due to the nature of the transactions
Further analysis of the Get File operation across different users shows a significant contribution from network time (see Figure 9). The network time for both operations is inconsistent; it took four times longer to deliver the results of the Query operation to the second user than to the first one (see Figure 10). This might indicate that the users represented in this report connect to the SerciaFood applications through different ISPs.
Figure 9: Performance is inconsistent for the Get File transaction with network time being the main contributor
Figure 10: Performance is improved for the Query transaction but network time again showing inconsistency
Based on that analysis the operations team could determine that some users did in fact experience performance problems caused by network issues. Further investigation revealed that those users who were experiencing poor performance were not connecting to the SerciaFood application using the SerciaNet infrastructure but were instead working remotely through VPN using various ISPs.
When operating a service accessed by users from various locations, it is important to remember that the end-user experience may vary, sometimes significantly. In the case of SerciaFood, its most active users were in the EMEA region, which ran on the SerciaNet infrastructure. However, the second most active users were connecting to the SerciaFood services via VPN. Since these users relied on general Internet connections, their experience was affected by poor network quality. Different users connected through different ISPs; as a result, the network performance in the third-party region was inconsistent.
Using Compuware dynaTrace Data Center Real User Monitoring (DCRUM), the operations team was able to provide evidence, which SerciaNet could not gather otherwise, that the problems were caused neither by the SerciaNet infrastructure nor by the application itself. They were, in fact, experienced only by remote users connecting via VPN, who were negatively impacted by their ISPs' network performance problems.