Application Performance Doesn’t Have to Be a Cloud Detractor

The transition to the cloud offers IT teams an excellent opportunity to emerge as protectors of their organizations

Introduction: The Cloud as a Point of IT and Business Disconnect
For years, the benefits of moving to the cloud - including lower costs, flexibility and faster time-to-market - have been espoused. But among IT professionals there remains widespread reticence about migrating mission-critical applications, due in large part to performance concerns. "Can the cloud really deliver the same level of speed and reliability as physical servers?" IT asks. Business managers often downplay these worries, saying that the potential business benefits are simply too significant to ignore.

Therein lies a major conflict. Regardless, as numerous surveys - including recent research from IDC - demonstrate, cloud adoption is moving forward at a rapid pace. Clearly, the cloud is where we're headed. IT must learn to accept the cloud as part of how services are delivered today, rather than some exotic and potentially dangerous new technology. The good news is that IT's concerns about application performance are not insurmountable and can be eased with specific approaches.

This article will discuss factors leading to IT's wariness of the cloud. It will also highlight recent survey findings that show just how concerned IT professionals actually are, even as organizations move to the cloud en masse. Finally, the article offers several recommendations to help assuage IT's concerns and minimize risks as cloud adoption continues at a rapid clip.

Why Is IT Uneasy?
It would seem that cloud computing and high performance should go hand-in-hand. Theoretically, performance for cloud-based applications should match that of applications hosted on physical servers, assuming the configuration is right. However, in the real world, many factors can impact the performance of cloud-based applications, including limited bandwidth, disk space, memory and CPU cycles. Cloud customers often have little visibility into their cloud service providers' capacity management decisions, leaving IT professionals with a sense of distrust.

IT's concerns are exacerbated by the fact that when major cloud services do fail, they tend to fail spectacularly - and the media flock to the news, further undermining IT's confidence. As an example, this past summer Apple's iCloud service, which connects iPhones, iPads and other Apple devices to key services, went down for more than six hours, capturing headlines around the world. While Apple claimed the outage impacted less than one percent of iCloud customers, the sheer size of that user base - some 300 million users - meant approximately three million people were cut off from those services for the duration. Shortly thereafter, an Amazon EC2 outage rocked Instagram, Vine, Netflix and several other major customers, inflicting unplanned downtime across all of them and igniting a frenzy of negative press.

In an attempt to ease IT's worries, several cloud service providers have begun offering "premium" performance features along with cloud instances. For example, Amazon EC2 offers provisioned IOPS (input/output operations per second), letting customers pay for more predictable disk performance. Other cloud service providers are also marketing ways to configure their platforms for different performance thresholds. The challenge is that few companies can afford premium features for every cloud-based node and service. Without visibility into end-user performance on the other side of the cloud, it is nearly impossible to identify where users are actually suffering, never mind where these premium features could be applied for maximum ROI.
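As a rough illustration of how such premium features are requested, the Python sketch below uses the boto3 AWS SDK to create an EBS volume with provisioned IOPS (the "io1" volume type). The region, availability zone, size and IOPS figures are placeholders, not recommendations.

    import boto3

    # Minimal sketch: request an EBS volume with provisioned IOPS ("io1").
    # Region, availability zone, size and IOPS are illustrative placeholders.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    volume = ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=100,          # GiB
        VolumeType="io1",  # provisioned-IOPS SSD
        Iops=1000,         # requested I/O operations per second
    )
    print(volume["VolumeId"], volume["State"])

The API call itself is the easy part; the hard part is knowing which volumes and services actually warrant the extra spend, which is where end-user visibility comes in.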

Finally, an awareness of their growing - and often precarious - reliance on cloud services forces IT to confront its own vulnerability. Enterprise use of cloud technology grew 90 percent between early 2012 and mid-2013, according to Verizon's recent "State of the Enterprise Cloud Report." Another important trend worth noting is that businesses are hosting less and less of what gets delivered on their websites. Instead, they're relying on a growing number of externally hosted (third-party) web elements, such as ad servers and social media plug-ins, to enrich their web properties. This often means a company becomes a cloud customer indirectly, without even knowing it.

Recent Survey Results Demonstrate IT's Wariness
Recently, Research in Action (on behalf of Compuware) conducted a survey of 468 CIOs and other senior IT professionals from around the world, which determined cloud computing to be the top IT investment priority. No surprises there, as clearly these professionals are being driven by the promised benefits of greater agility, flexibility and time-to-value.

What is surprising is that 79 percent of these professionals expressed concern over the hidden costs of cloud computing, with poor end-user experience resonating as the biggest management worry. According to the survey, the four leading concerns around cloud migration are:

  • Performance Bottlenecks (64%): Respondents expect cloud-hosted resources and e-commerce applications to hit performance bottlenecks under heavy usage.
  • Poor End-User Experience (64%): End users may be left dissatisfied by slow cloud application performance, particularly during traffic peaks.
  • Reduced Brand Perception (51%): Customer loyalty may erode as a result of poor experiences and poor cloud performance.
  • Loss of Revenue (44%): Companies may lose revenue as a result of poor performance, reduced availability or slow technical troubleshooting.

Ironically, these responses come at a time when the cloud is increasingly being used to support mission-critical applications like e-commerce. More than 80 percent of the professionals surveyed are either already using cloud-based e-commerce platforms or are planning to do so within the next year. It's evident that even as cloud adoption marches forward, a layer of trepidation remains, at least among IT staffs.

Business managers believe the efficiency benefits of the cloud are simply too significant to ignore. But IT's primary concern - application performance - is also mission-critical, and perhaps more visceral and tangible. After all, a major service outage is a blatant, clear-cut event, while efficiency gains or losses are often subtle and harder to quantify. Ultimately, it's IT that takes the blame when business services don't work as planned.

It used to be that issues like security and cost dominated the list of cloud concerns. But application performance is climbing that list as users grow more demanding. For the average user, a response of 0.1 seconds feels instantaneous - similar to what they experience with a Google search. As response times increase, interactions slow and dissatisfaction rises. The impact of a slowdown can be devastating: Amazon has calculated that a page load slowdown of just one second could cost it $1.6 billion in sales each year, and Google found that slowing search response times by just four-tenths of a second would reduce the number of searches by eight million per day - a sizeable amount.
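A simple back-of-the-envelope calculation makes the stakes concrete. The figures below are hypothetical placeholders rather than the Amazon or Google numbers cited above; the point is only to show how quickly a small, sustained latency penalty turns into real money.

    # Back-of-the-envelope estimate of revenue at risk from added page latency.
    # Every input is a hypothetical placeholder - substitute your own traffic,
    # conversion and order-value data.
    annual_site_revenue = 500_000_000   # dollars transacted through the site per year
    conversion_drop_per_second = 0.07   # assumed fraction of conversions lost per extra second
    added_latency_seconds = 1.0         # sustained slowdown being evaluated

    revenue_at_risk = annual_site_revenue * conversion_drop_per_second * added_latency_seconds
    print(f"Estimated annual revenue at risk: ${revenue_at_risk:,.0f}")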

Getting the Performance You Need from the Cloud
As more and more companies begin or extend their journey to the cloud, there are concrete steps IT can take to increase its comfort level. These include:

1. Don't Be Afraid to Experiment: Cloud computing offers businesses the opportunity to leverage computing resources they might not otherwise have the expertise or wherewithal to employ. But it can be intimidating to move critical operations out of one's own hands. That's where free trials come in. A number of cloud computing vendors offer free test runs that let companies figure out how cloud services would meld with their current operations.

Getting the most out of a trial period takes some planning and effort, and this includes making certain to measure the cloud service provider's performance. Unfortunately, most cloud service providers today don't measure and provide performance statistics as part of these trial periods, so it's incumbent upon prospective customers to do so. It's often best to experiment in the cloud with a non-critical system, such as a sales support application that doesn't have a huge impact on customers, should performance degrade. Organizations should also be sure to measure performance for as broad a cross-section of users as possible.
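Because few providers hand over performance statistics during a trial, even a lightweight, self-run measurement beats flying blind. The Python sketch below assumes a hypothetical trial endpoint and the third-party requests library; it samples response times and reports the median and 95th percentile.

    import statistics
    import time

    import requests  # third-party HTTP client

    TRIAL_URL = "https://trial-app.example.com/health"  # hypothetical trial endpoint
    SAMPLES = 50

    timings = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        requests.get(TRIAL_URL, timeout=10)
        timings.append(time.perf_counter() - start)
        time.sleep(1)  # space the samples out rather than hammering the service

    print(f"median response: {statistics.median(timings):.3f}s")
    print(f"95th percentile: {statistics.quantiles(timings, n=20)[-1]:.3f}s")

Run from a few different user locations, the same handful of lines starts to give the broad cross-section of users recommended above.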

2. Insist on Performance-Focused SLAs: Inherent cloud attributes like on-demand resource provisioning and scalability are designed to increase confidence in the usability of applications and data hosted in the cloud. But a common mistake is to interpret availability guarantees as performance guarantees. Availability shows that a cloud service provider's servers are up and running - but that's about it. Service-level agreements (SLAs) based on availability say nothing about the user experience, which can be significantly impacted by the cloud - for example, when an organization's "neighbor" in the cloud experiences an unexpected spike in traffic. Yet, despite the mission-critical nature of many cloud applications, the survey cited above found that 73 percent of companies are still relying on outdated measures like availability alone to track and manage application performance.
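The gap between availability and performance is easy to demonstrate. In the hypothetical check results below, the service clears a 90 percent availability bar even though a large share of its successful responses are far too slow for users.

    # Hypothetical synthetic-check results: (HTTP status, response time in seconds).
    checks = [(200, 0.31), (200, 0.28), (200, 2.90), (200, 3.40), (500, 0.10),
              (200, 0.33), (200, 2.75), (200, 0.29), (200, 3.10), (200, 0.30)]

    availability = sum(1 for status, _ in checks if status < 500) / len(checks)
    too_slow = sum(1 for status, t in checks if status < 500 and t > 1.0) / len(checks)

    print(f"availability: {availability:.0%}")               # 90% - the SLA looks fine
    print(f"successful but slower than 1s: {too_slow:.0%}")  # 40% - users disagree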

The fact is that most traditional monitoring tools simply don't work in the cloud. Effectively monitoring and managing modern cloud-based applications and services requires a new approach based on more granular user metrics such as response time and page rendering time. This approach must be based on an understanding of the true user interaction "on the other side" of the cloud. It must enable cloud customers to directly measure the performance of their cloud service providers and validate SLAs. With this type of approach, cloud customers can be better assured that application performance issues will not undercut the benefits of moving to the cloud. In addition, an understanding of true end-user experiences across key geographies can help companies identify the most strategic opportunities for applying premium performance features, as discussed above.
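One practical consequence is that user-level timings should be sliced by geography (or other user segments) before deciding where premium features would pay off. A minimal sketch, assuming hypothetical real-user measurements gathered elsewhere:

    from collections import defaultdict
    from statistics import median

    # Hypothetical real-user measurements: (user region, page render time in seconds).
    rum_samples = [("us-east", 1.1), ("us-east", 1.3), ("eu-west", 3.8),
                   ("eu-west", 4.1), ("ap-south", 2.2), ("ap-south", 2.0)]

    by_region = defaultdict(list)
    for region, seconds in rum_samples:
        by_region[region].append(seconds)

    # The slowest regions are the first candidates for premium capacity or a CDN.
    for region, timings in sorted(by_region.items(), key=lambda kv: median(kv[1]), reverse=True):
        print(f"{region}: median page render {median(timings):.1f}s")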

3. Utilize Industry Resources: There are resources available to help companies determine whether the source of a performance problem lies with them or with a cloud service provider, as well as the likely impact on customers. As an example, Compuware's Outage Analyzer is a free performance analytics service that tracks Internet web service outages, including cloud service outages, in real time around the world. It provides instant insight into the performance of thousands of cloud services and the resulting impact on the websites they serve. Resources like this may not prevent cloud service outages, but they can help companies pinpoint the source of performance problems so they can get in front of them more confidently and efficiently.

Conclusion: Cloud Computing Is the "New Normal"
Like it or not, cloud computing is here to stay, and its adoption will only accelerate further in the years to come. In many ways, the move to the cloud is reminiscent of the adoption of Linux. At one time, IT administrators had significant concerns about Linux, including its scalability and reliability. But sure enough, businesses continued their adoption of Linux, propelled largely by the promise of lower costs and greater efficiencies. Today, Linux is a well-integrated component of corporate data centers worldwide.

In reality, neither IT nor the business is wrong when it comes to their strong opinions on adopting the cloud for mission-critical applications. Ultimately, both sides share the same goal: to maximize the company's revenues and profits. It's just that the two teams approach the problem differently: IT emphasizes application performance as a means of driving productivity and conversions, while business leaders look to increase cash flow, seek the greatest return on capital investments and lower operating expenses.

The move to the cloud can be a very good thing for today's enterprises. It's also a good thing to be cloud-wary, and this is where the business will ultimately depend on IT to be vigilant. By paying due attention to performance issues, the transition to the cloud offers IT teams an excellent opportunity to emerge as protectors of their organizations, thus maximizing return on cloud investments.

More Stories By Ronald Miller

For almost a decade Ronald Miller has served in product marketing roles in the enterprise software, mobile, and high-technology industries. In his current role managing Dynatrace's go-to-market efforts, he is dedicated to helping Dynatrace customers get the most performance and ROI from their applications. In his spare time he enjoys being an amateur judge of the best BBQ in Austin, Texas. You can tweet him at @RonaldMiller.
