By Sebastian Kruk
March 19, 2013 11:00 AM EDT
When the operations team gets an alert about potential performance problems that users might be experiencing, it is usually either the infrastructure or the actual application that is causing those problems. Things get interesting when neither the ISP nor the application provider is willing to admit fault. Can we tell who is to blame? Could it be that it is neither the ISP nor the application provider?
The IT department of our customer, SerciaFood, a food production company from Sercia (names changed for commercial reasons), received complaints about the performance of one of its applications. The IT department suspected network problems while the local ISP stood firmly behind its infrastructure and blamed the solution provider.
It Is Not Our Infrastructure
The SerciaFood IT team recently tested a new application before rolling it into production. During the tests the team complained about the application's performance; the most likely cause was thought to be network problems.
SerciaNet, a big name not only in Sercia but also worldwide, was delivering the network infrastructure for SerciaFood. The ISP began to monitor the network with manual traces and other techniques; the company could not, however, provide any strong evidence that the problem was not on its side.
The operations team at SerciaFood was appointed to look into the problem using a real user monitoring tool.
Its first observation was that network performance, i.e., the percentage of traffic that did not encounter network-related issues (Server Loss Rate, Client RTT and Errors), varied between the regions where SerciaFood services were used.
Figure 1 shows a report where both network and application performance metrics for EMEA are good. EMEA is the most active region on the report since it is where the core business operations of SerciaFood are focused. Other, more distant regions reported performance problems; the second most active region, "third-party", reported high Client RTT and a high Server Loss Rate. Client RTT is the time it takes a SYN packet sent by the server to travel from the APM probe to the client and back again. Server Loss Rate is the percentage of all packets sent by the server that were lost and had to be retransmitted.
Figure 1: Overview of KPIs across all monitored regions
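For readers who want to map these KPIs to packet-level data, the sketch below shows how Client RTT, Server Loss Rate and the network performance percentage could be derived. The data model, function names and sample values are hypothetical illustrations, not the monitoring tool's actual API.

```python
# Minimal sketch (hypothetical data model, not the DCRUM API) of how the
# network KPIs discussed above could be derived from packet-level records.
from dataclasses import dataclass

@dataclass
class ServerPacket:
    sent_at: float          # timestamp when the probe saw the server packet (s)
    acked_at: float         # timestamp when the client's ACK came back (s)
    retransmitted: bool     # True if this packet had to be resent

def client_rtt(handshake_sent: float, handshake_acked: float) -> float:
    """Client RTT: time for the server's handshake packet to reach the
    client (as seen at the probe) and for the acknowledgement to return."""
    return handshake_acked - handshake_sent

def server_loss_rate(packets: list[ServerPacket]) -> float:
    """Server Loss Rate: share of server packets that were lost and retransmitted."""
    if not packets:
        return 0.0
    lost = sum(1 for p in packets if p.retransmitted)
    return 100.0 * lost / len(packets)

def network_performance(operations_with_issues: int, total_operations: int) -> float:
    """Network performance: percentage of traffic with no network-related issues."""
    if total_operations == 0:
        return 100.0
    return 100.0 * (total_operations - operations_with_issues) / total_operations

# Example: 3 of 120 server packets retransmitted -> 2.5% Server Loss Rate
print(server_loss_rate([ServerPacket(0.0, 0.05, i < 3) for i in range(120)]))
```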
How Is the Network Performance in EMEA?
The operations team decided to first confirm what was indicated in Figure 1: the key business region, EMEA, was not affected by network problems.
Figure 2 shows a report with all areas monitored within the EMEA region. According to this report the performance is consistently good, with about 2.5 sec of operation time and no network-related problems (100% network performance) for all areas within the EMEA region.
Figure 2: Performance across all areas appears consistent with operation time at around 2.5 sec for all operations. Network Performance is good across all areas.
After a drill-down to one of the user sites (Switzerland), the report shows that the operation time is spent almost entirely on the server side and that the network performance is good too (see Figure 3).
Figure 3: Operation time is spent almost entirely on the server. Network performance is good at this location.
Another drill-down, to the report listing transactions executed at that site (see Figure 4), shows that although server time varies between transactions, the network time remains consistently below 400 ms. The differences in server time are a result of the transactions' different computational complexity. For example, responding to Query is likely to be more demanding than responding to Get File.
Figure 4: Server time varies across transactions between 0.5 and 4 sec; network time is consistently below 400 ms
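The server/network split behind a report like Figure 4 can be illustrated with a short aggregation over per-operation measurements. The record layout and the sample values below are invented for the example and are not taken from the actual report.

```python
# Hypothetical per-operation measurements: (transaction name, server time s, network time s).
from collections import defaultdict

measurements = [
    ("Query",    3.8, 0.35),
    ("Query",    3.5, 0.30),
    ("Get File", 0.4, 0.38),
    ("Get File", 0.6, 0.32),
]

def per_transaction_breakdown(rows):
    """Average server and network time per transaction, as in a Figure 4-style report."""
    totals = defaultdict(lambda: [0.0, 0.0, 0])   # server sum, network sum, count
    for name, server_t, network_t in rows:
        t = totals[name]
        t[0] += server_t
        t[1] += network_t
        t[2] += 1
    return {
        name: {"server_avg": s / n, "network_avg": w / n, "operation_avg": (s + w) / n}
        for name, (s, w, n) in totals.items()
    }

for name, kpis in per_transaction_breakdown(measurements).items():
    print(f"{name:<9} server={kpis['server_avg']:.2f}s "
          f"network={kpis['network_avg']:.2f}s total={kpis['operation_avg']:.2f}s")
```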
The operations team decided to further investigate two transactions: one that should be heavy on the network (Get File) and one that might be heavy on the server side (Query). The former was mostly responsible for delivering files to the client application, while the latter required more computational power on the server to execute the query. The performance of the former is good, with an almost even split between server and network time (see Figure 5), which does not indicate any network-related problems. The operation time for the latter is spent almost exclusively on the server, with negligible network impact (see Figure 6).
Figure 5: Performance is good for the Get File transaction, which by its nature would be more impacted by network time
Figure 6: Performance is poor for the Query transaction with time spent almost exclusively on the server
The operations team concluded, based on the analyzed traffic in the EMEA region, that at least in that region the performance was good and was not affected by the network infrastructure delivered by SerciaNet.
Who Is Really Affected?
The question remained: why were some users reporting performance problems? From the overview report (see Figure 1) the operations team decided to drill down into the third-party region, the one with the lowest network performance and the highest Server Loss Rate.
This region reported poor network performance, below 50%, and a significant network contribution to the operation time (see Figure 7).
Figure 7: Network performance is degraded at third-party locations
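One simple way to choose which region to drill into is to rank regions by network performance and flag any that fall below a threshold. The sketch below uses made-up KPI values loosely modeled on Figure 1; the region list and numbers are illustrative only.

```python
# Hypothetical region-level KPIs loosely modeled on Figure 1 (values are illustrative).
regions = {
    "EMEA":        {"network_performance": 100.0, "server_loss_rate": 0.1},
    "third-party": {"network_performance": 47.0,  "server_loss_rate": 4.2},
    "APAC":        {"network_performance": 88.0,  "server_loss_rate": 1.1},
}

NETWORK_PERFORMANCE_THRESHOLD = 90.0  # percent of traffic free of network issues

def regions_to_investigate(kpis, threshold=NETWORK_PERFORMANCE_THRESHOLD):
    """Return regions below the network-performance threshold, worst first."""
    degraded = [(name, m) for name, m in kpis.items()
                if m["network_performance"] < threshold]
    return sorted(degraded, key=lambda item: item[1]["network_performance"])

for name, m in regions_to_investigate(regions):
    print(f"{name}: {m['network_performance']}% network performance, "
          f"{m['server_loss_rate']}% server loss rate")
```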
Figure 8 shows the report with a list of transactions for the affected user site. Although server and network time vary between transactions, the application performance is low for all of them, down to 0% for the Get File and Query transactions.
Figure 8: Server and network time varies between transactions likely due to the nature of the transactions
Further analysis of the Get File operation across different users shows a significant contribution from network time (see Figure 9). The network time for both operations is inconsistent; it took four times longer to deliver the results of the Query operation to the second user than to the first (see Figure 10). This might indicate that the users represented in this report connect to the SerciaFood applications through different ISPs.
Figure 9: Performance is inconsistent for the Get File transaction with network time being the main contributor
Figure 10: Performance is improved for the Query transaction but network time again showing inconsistency
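The per-user comparison can be expressed as a small consistency check: for each operation, compare every user's network time with the fastest user's and flag large ratios. The timings below are invented, but a 4x gap like the one in Figure 10 would be caught by such a check.

```python
# Hypothetical per-user network times (seconds) for one operation; values are invented.
query_network_times = {
    "user_1": 0.9,
    "user_2": 3.7,   # roughly 4x slower than user_1, as in the Figure 10 scenario
    "user_3": 1.1,
}

def inconsistent_users(network_times, ratio_threshold=3.0):
    """Flag users whose network time exceeds the fastest user's by ratio_threshold or more."""
    fastest = min(network_times.values())
    return {user: round(t / fastest, 1) for user, t in network_times.items()
            if t / fastest >= ratio_threshold}

# user_2 is flagged (~4.1x), suggesting a different, slower network path (e.g. another ISP).
print(inconsistent_users(query_network_times))
```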
Based on that analysis the operations team could determine that some users did in fact experience performance problems caused by network issues. Further investigation revealed that those users who were experiencing poor performance were not connecting to the SerciaFood application using the SerciaNet infrastructure but were instead working remotely through VPN using various ISPs.
When operating a service accessed by users from various locations it is important to remember that the end user experience may vary, sometimes significantly. In the case of SerciaFood, its most active users were in the EMEA region, which ran on the SerciaNet infrastructure. The second most active group of users, however, connected to the SerciaFood services via VPN. Since these users relied on general Internet connectivity, their experience was affected by poor network quality. Different users connected through different ISPs; as a result, the network performance in the third-party region was inconsistent.
Using Compuware dynaTrace Data Center Real User Monitoring (DCRUM), the operations team was able to show evidence, which SerciaNet could not gather otherwise, that the problems were caused neither by the SerciaNet infrastructure nor by the application itself. They were, in fact, experienced only by remote users connecting via VPN, who were negatively impacted by their ISPs' network performance problems.