|By Sebastian Kruk||
|March 19, 2013 11:00 AM EDT||
When the operations team gets an alert about potential performance problems that users might be experiencing, it is usually either the infrastructure or the actual application that is causing those problems. Things get interesting when neither the ISP nor the application provider is willing to admit fault. Can we tell who is to blame? Could it be that it is neither the ISP nor the application provider?
The IT department of our customer, SerciaFood, a food production company from Sercia (names changed for commercial reasons), received complaints about the performance of one of its applications. The IT department suspected network problems while the local ISP stood firmly behind its infrastructure and blamed the solution provider.
It Is Not Our Infrastructure
The SerciaFood IT team recently tested a new application before rolling it into production. During the tests the team complained about the performance of that application; the most likely cause of the poor performance was attributed to network problems.
SerciaNet, a big name not only in Sercia but also worldwide, was delivering the network infrastructure for SerciaFood. The ISP began to monitor the network with manual traces and other techniques; the company could not, however, provide any strong evidence that it was not their issue.
The operations team at SerciaFood was appointed to investigate the problem using a real user monitoring tool.
Its first observation was that network performance, i.e., the percentage of traffic that did not encounter network-related issues (Server Loss Rate, Client RTT and Errors), varied between the regions where SerciaFood services were used.
Figure 1 shows a report where both network and application performance metrics for EMEA are good. EMEA is the most active region on the report, since it is where the core business operations of SerciaFood are focused. Other, more distant regions reported performance problems; the second most active, "third-party" region reported high Client RTT and Server Loss Rate. Client RTT is the time it takes a packet sent by the server to travel from the APM probe to the client and back again. Server Loss Rate is the percentage of total packets sent by the server that were lost and had to be retransmitted.
Figure 1: Overview of KPIs across all monitored regions
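To make the two metrics concrete, here is a minimal sketch of how they could be computed from per-connection counters and timestamps. This is an illustration with a hypothetical data model, not how DCRUM actually derives them:

```python
# Hypothetical sketch of the two KPIs described above; the inputs
# (packet counters, probe/client timestamps) are assumed, not DCRUM's model.

def server_loss_rate(packets_sent: int, packets_retransmitted: int) -> float:
    """Percentage of server-sent packets that were lost and retransmitted."""
    if packets_sent == 0:
        return 0.0
    return 100.0 * packets_retransmitted / packets_sent

def client_rtt_ms(probe_send_ts: float, client_ack_ts: float) -> float:
    """Round trip from the probe (near the server) to the client and back, in ms."""
    return (client_ack_ts - probe_send_ts) * 1000.0

print(server_loss_rate(2000, 37))      # -> 1.85 (% retransmitted)
print(client_rtt_ms(12.000, 12.180))   # ~180 ms round trip
```

A high Server Loss Rate combined with a high Client RTT, as seen in the third-party region, points at the path between probe and client rather than at the server.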
How Is the Network Performance in EMEA?
The operations team decided first to confirm what Figure 1 indicated: the key business region, EMEA, was not affected by network problems.
Figure 2 shows a report with all areas monitored within the EMEA region. According to this report, performance is consistently good, with about 2.5 sec of operation time and no network-related problems (100% network performance) for all areas within the EMEA region.
Figure 2: Performance across all areas appears consistent with operation time at around 2.5 sec for all operations. Network Performance is good across all areas.
After a drill-down to one of the user sites (Switzerland), the report shows that operation time is spent almost entirely on the server side and that network performance is good there too (see Figure 3).
Figure 3: Operation time is spent almost entirely on the server. Network performance is good at this location.
Another drill down to the report with transactions executed at that site (see Figure 4) shows that although server time varies between transactions, the network time remains consistently below 400 ms. The differences in server time between transactions are a result of the different computational complexity between these transactions. For example, responding to Query is likely to be more demanding than responding to Get File.
Figure 4: Server time varies across transactions between 0.5 and 4 sec; network time is consistently below 400 ms
The operations team decided to further investigate two transactions: one expected to be network-heavy (Get File) and one expected to be server-heavy (Query). The former merely delivers files to the client application, while the latter requires more computational power on the server to execute the query. The performance of the former is good, with an almost even split between server and network time (see Figure 5), which does not indicate any network-related problems. The operation time for the latter is spent almost exclusively on the server, with negligible network impact (see Figure 6).
Figure 5: Performance is good for the Get File transaction, which by its nature would be more impacted by network time
Figure 6: Performance is poor for the Query transaction with time spent almost exclusively on the server
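The server-time vs. network-time reasoning above can be sketched as a simple classification rule. The threshold and the sample numbers are assumptions chosen to mirror Figures 5 and 6, not values from the tool:

```python
# Hypothetical sketch: label a transaction by its dominant time component,
# mirroring the Get File / Query comparison above. Threshold is arbitrary.

def classify(server_time_ms: float, network_time_ms: float, ratio: float = 3.0) -> str:
    """Return which component dominates the operation time."""
    if server_time_ms >= ratio * network_time_ms:
        return "server-bound"
    if network_time_ms >= ratio * server_time_ms:
        return "network-bound"
    return "balanced"

# Roughly the shapes seen in Figures 5 and 6:
print(classify(300, 280))    # -> balanced      (Get File-like, even split)
print(classify(3500, 200))   # -> server-bound  (Query-like)
```

A transaction that stays "balanced" or "server-bound" on a healthy network is exactly what exonerates the infrastructure in the EMEA region.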
The operations team concluded, based on the analyzed traffic in the EMEA region, that at least in that region the performance was good and that it was not affected by network infrastructure delivered by SerciaNet.
Who Is Really Affected?
The question remained: why were some users reporting performance problems? From the overview report (see Figure 1), the operations team decided to drill down into the third-party region, which had the lowest network performance and the highest server loss rate.
This region reported poor network performance, below 50%, and a significant network contribution to operation time (see Figure 7).
Figure 7: Network performance is degraded at third-party locations
Figure 8 shows the report with a list of transactions for the affected user site. Although server and network time vary between transactions, application performance for all transactions is low, down to 0% for the Get File and Query transactions.
Figure 8: Server and network time varies between transactions likely due to the nature of the transactions
Further analysis of the Get File operation across different users shows a significant network-time contribution (see Figure 9). Network time for both operations is inconsistent; delivering the results of the Query operation took four times longer for the second user than for the first (see Figure 10). This may indicate that the users represented in this report connect to the SerciaFood applications through different ISPs.
Figure 9: Performance is inconsistent for the Get File transaction with network time being the main contributor
Figure 10: Performance is improved for the Query transaction but network time again showing inconsistency
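The kind of spread seen in Figure 10 can be surfaced automatically by comparing each user's network time against the group median. This is an illustrative sketch; the user names, sample values, and the 2x cutoff are all assumptions:

```python
# Hypothetical sketch: flag users whose network time deviates strongly from
# the group median, like the ~4x spread between users seen in Figure 10.
from statistics import median

def flag_outliers(network_times_ms, factor=2.0):
    """Return (user, time) pairs whose network time exceeds factor * median."""
    med = median(t for _, t in network_times_ms)
    return [(u, t) for u, t in network_times_ms if t > factor * med]

samples = [("user-a", 400), ("user-b", 1600), ("user-c", 450), ("user-d", 380)]
print(flag_outliers(samples))   # -> [('user-b', 1600)]
```

Users flagged this way are good candidates for a follow-up question: are they on the managed network, or coming in over VPN through a different ISP?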
Based on that analysis, the operations team could determine that some users did in fact experience performance problems caused by network issues. Further investigation revealed that the users experiencing poor performance were not connecting to the SerciaFood application over the SerciaNet infrastructure; they were working remotely through VPN, using various ISPs.
When operating a service accessed by users from various locations, it is important to remember that the end-user experience may vary, sometimes significantly. In the case of SerciaFood, its most active users came from the EMEA region, which ran on the SerciaNet infrastructure. The second most active users, however, connected to the SerciaFood services via VPN. Since these users relied on general Internet connectivity, their experience was affected by poor network quality. Different users connected through different ISPs; as a result, network performance in the third-party region was inconsistent.
Using Compuware dynaTrace Data Center Real User Monitoring (DCRUM), the operations team was able to provide evidence, which SerciaNet could not gather otherwise, that the problems were caused neither by the SerciaNet infrastructure nor by the application itself. They were, in fact, experienced only by remote users connecting via VPN, who were negatively impacted by their ISPs' network performance problems.