Application Performance Doesn’t Have to Be a Cloud Detractor

The transition to the cloud offers IT teams an excellent opportunity to emerge as protectors of their organizations

Introduction: The Cloud as a Point of IT and Business Disconnect
For years, the benefits of moving to the cloud - including lower costs, greater flexibility and faster time-to-market - have been espoused. But among IT professionals, there remains widespread reticence about migrating mission-critical applications, due in large part to performance concerns. "Can the cloud really deliver the same speed and reliability as physical servers?" IT asks. Business managers often minimize or downplay these worries, arguing that the potential business benefits are simply too significant to ignore.

Therein lies a major conflict. Regardless, as numerous surveys like this one from IDC demonstrate, cloud adoption is moving forward at a rapid pace. Clearly, the cloud is where we're headed. IT must learn to accept the cloud as just part of how services are delivered today, rather than some exotic and potentially dangerous new technology. The good news is IT's concerns about application performance are not insurmountable and can actually be eased with specific approaches.

This article will discuss factors leading to IT's wariness of the cloud. It will also highlight recent survey findings that show just how concerned IT professionals actually are, even as organizations move to the cloud en masse. Finally, the article offers several recommendations to help assuage IT's concerns and minimize risks as cloud adoption continues at a rapid clip.

Why Is IT Uneasy?
It would seem that cloud computing and high performance should go hand-in-hand. Theoretically, performance for cloud-based applications should match that of applications hosted on physical servers, assuming the configuration is right. However, in the real world, many factors can impact the performance of cloud-based applications, including limited bandwidth, disk space, memory and CPU cycles. Cloud customers often have little visibility into their cloud service providers' capacity management decisions, leaving IT professionals with a sense of distrust.

IT's concerns are exacerbated by the fact that when major cloud services do fail, they tend to fail spectacularly, and the media flock to the news, further undermining IT's confidence. As an example, this past summer Apple's iCloud service, which connects iPhones, iPads and other Apple devices to key services, went down for more than six hours, capturing headlines around the world. While Apple claimed that the outage affected less than one percent of iCloud customers, against a user base of some 300 million that still translated to roughly three million users being cut off from services. Shortly thereafter, an Amazon EC2 outage rocked Instagram, Vine, Netflix and several other major customers of the service, inflicting unplanned downtime across all of them and igniting a frenzy of negative press.

In an attempt to ease IT's worries, several cloud service providers have begun offering "premium" performance features along with cloud instances. For example, Amazon EC2 now offers its customers provisioned IOPS (input/output operations per second) to guarantee disk performance. Other cloud service providers are also marketing ways to configure their platforms for different performance thresholds. The challenge is that few companies can afford premium features for every cloud-based node and service. Without visibility into how the cloud affects end-user performance, it is nearly impossible to identify poor end-user experiences, let alone determine where these premium features could be applied for maximum ROI.

Finally, awareness of their growing - and often precarious - reliance on cloud services forces IT to confront its own vulnerability. Enterprise use of cloud technology grew 90 percent between early 2012 and mid-2013, according to Verizon's recent "State of the Enterprise Cloud Report." Another trend worth noting is that businesses are hosting less and less of what gets delivered on their websites. Instead, they rely on a growing number of externally hosted (third-party) web elements, such as ad servers and social media plug-ins, to enrich their web properties. This often makes a company a cloud customer indirectly, without its even knowing it.

Recent Survey Results Demonstrate IT's Wariness
Recently, Research in Action (on behalf of Compuware) conducted a survey of 468 CIOs and other senior IT professionals from around the world, which determined cloud computing to be the top IT investment priority. No surprises there, as clearly these professionals are being driven by the promised benefits of greater agility, flexibility and time-to-value.

What is surprising is the fact that 79 percent of these professionals expressed concern over the hidden costs of cloud computing, with poor end-user experience resonating as the biggest management worry. According to the survey, here are the four leading concerns with cloud migration:

  • Performance Bottlenecks: (64%) Respondents believe that cloud-hosted resources and e-commerce applications will suffer poor performance due to bottlenecks under heavy application usage.
  • Poor End-User Experience: (64%) End users may be left dissatisfied with cloud performance during periods of heavy application traffic.
  • Reduced Brand Perception: (51%) Customer loyalty may be greatly reduced due to poor experience and poor cloud performance.
  • Loss of Revenue: (44%) Companies may lose revenues as a result of poor performance, reduced availability or slow technical troubleshooting services.

Ironically, these responses come at a time when the cloud is increasingly being used to support mission-critical applications like e-commerce. More than 80 percent of the professionals surveyed are either already using cloud-based e-commerce platforms or are planning to do so within the next year. It's evident that even as cloud adoption marches forward, a layer of trepidation remains, at least among IT staffs.

Business managers believe the efficiency benefits of the cloud are just too mission-critical to ignore. But IT's primary concern - application performance - is also mission-critical, and perhaps a bit more visceral and tangible. After all, a major service outage is a blatant, clear-cut scenario while efficiency gains or losses are often more subtle and less quantifiable. Ultimately, it's IT that takes the blame when business services don't work exactly as planned.

It used to be that issues like security and cost dominated the list of cloud concerns. But application performance is increasingly making headway as users grow more demanding. For the average user, 0.1 seconds is an instantaneous, acceptable response, similar to what they experience with a Google search. As response times increase, interactions begin to slow and dissatisfaction rises. The impact of a slowdown can be devastating: Amazon has calculated that a page load slowdown of just one second could cost it $1.6 billion in sales each year. In addition, Google found that slowing search response times by just four-tenths of a second would reduce the number of searches by eight million per day - a sizeable amount.
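As a rough sanity check on the figures above, the cited loss rates can be treated as linear in the size of the slowdown. The sketch below does only that arithmetic; the 250 ms regression is a hypothetical scenario, and a strictly linear cost model is itself an assumption:

```python
# Illustrative arithmetic for the latency figures cited above.
# The rates ($1.6B per second, 8M searches per 0.4s) come from the article;
# the 250 ms regression and linear scaling are hypothetical assumptions.

AMAZON_COST_PER_SECOND = 1.6e9        # annual sales at risk per 1s of page-load slowdown
GOOGLE_SEARCH_LOSS_RATE = 8e6 / 0.4   # searches lost per day, per second of slowdown

def amazon_annual_loss(slowdown_s):
    """Estimated annual sales at risk for a given page-load slowdown."""
    return AMAZON_COST_PER_SECOND * slowdown_s

def google_daily_search_loss(slowdown_s):
    """Estimated daily searches lost for a given response-time slowdown."""
    return GOOGLE_SEARCH_LOSS_RATE * slowdown_s

# A hypothetical 250 ms regression:
print(f"Amazon: ${amazon_annual_loss(0.25)/1e9:.2f}B/year")              # $0.40B/year
print(f"Google: {google_daily_search_loss(0.25)/1e6:.0f}M searches/day") # 5M searches/day
```

Even a regression of a fraction of a second, by these published rates, implies losses measured in hundreds of millions of dollars or millions of user interactions.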

Getting the Performance You Need from the Cloud
As more and more companies begin or extend their journey to the cloud, there are things IT can do to increase their comfort level. These include:

1. Don't Be Afraid to Experiment: Cloud computing offers businesses the opportunity to leverage computing resources they might not otherwise have the expertise or wherewithal to employ. But it can be intimidating to move critical operations out of one's own hands. That's where free trials come in. A number of cloud computing vendors offer free test runs that let companies figure out how cloud services would meld with their current operations.

Getting the most out of a trial period takes some planning and effort, and this includes making certain to measure the cloud service provider's performance. Unfortunately, most cloud service providers today don't measure and provide performance statistics as part of these trial periods, so it's incumbent upon prospective customers to do so. It's often best to experiment in the cloud with a non-critical system, such as a sales support application that doesn't have a huge impact on customers, should performance degrade. Organizations should also be sure to measure performance for as broad a cross-section of users as possible.
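The kind of measurement described above can start very simply. The sketch below samples wall-clock response times for a trial endpoint using only the Python standard library; the URL, sample count and timeout are placeholders for your own environment, not values any provider prescribes:

```python
# Minimal sketch of sampling a cloud service's response time during a
# free trial. The URL and parameters below are placeholder assumptions.
import time
import urllib.request

def sample_response_times(url, n=5, timeout=10):
    """Fetch the URL n times and return wall-clock latencies in seconds.
    Failed requests are recorded as infinity so they are not silently lost."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                resp.read()
        except OSError:
            latencies.append(float("inf"))
            continue
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    times = sample_response_times("https://example.com/")
    ok = [t for t in times if t != float("inf")]
    if ok:
        print(f"min={min(ok):.3f}s max={max(ok):.3f}s avg={sum(ok)/len(ok):.3f}s")
```

Running a probe like this from several geographies, on a schedule, is the crude beginning of the broad cross-section of user measurements recommended above.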

2. Insist on Performance-Focused SLAs: Inherent cloud attributes like on-demand resource provisioning and scalability are designed to increase confidence in the usability of applications and data hosted in the cloud. But the most common mistake is interpreting availability guarantees as performance guarantees in a cloud computing environment. Availability shows that a cloud service provider's servers are up and running - but that's about it. Service-level agreements (SLAs) based on availability say nothing about the user experience, which can be significantly impacted by the cloud - for example, when an organization's "neighbor" in the cloud experiences an unexpected spike in traffic. Yet, despite the mission-critical nature of many cloud applications, our survey found that 73 percent of companies are still using outdated methods like availability measurements to track and manage application performance.

The fact is that most traditional monitoring tools simply don't work in the cloud. Effectively monitoring and managing modern cloud-based applications and services requires a new approach based on more granular user metrics such as response time and page rendering time. This approach must be based on an understanding of the true user interaction "on the other side" of the cloud. It must enable cloud customers to directly measure the performance of their cloud service providers and validate SLAs. With this type of approach, cloud customers can be better assured that application performance issues will not undercut the benefits of moving to the cloud. In addition, an understanding of true end-user experiences across key geographies can help companies identify the most strategic opportunities for applying premium performance features, as discussed above.
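As a minimal illustration of what validating a performance-focused SLA looks like in practice, the sketch below checks measured response times against an assumed 95th-percentile threshold of 2.0 seconds; both the threshold and the sample data are hypothetical, not terms from any real provider's contract:

```python
# Sketch of validating a performance-based SLA from user-measured
# response times. The p95 <= 2.0s contract term is an assumption.
import math

def percentile(samples, p):
    """Nearest-rank percentile of a non-empty list of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def sla_met(samples, p=95, threshold_s=2.0):
    """True if the p-th percentile response time is within the threshold."""
    return percentile(samples, p) <= threshold_s

# Hypothetical page response times measured from real user sessions (seconds)
measured = [0.8, 1.1, 0.9, 1.4, 3.2, 1.0, 1.2, 0.7, 1.3, 1.1]
print(f"p95 = {percentile(measured, 95):.1f}s, SLA met: {sla_met(measured)}")
# p95 = 3.2s, SLA met: False
```

Note that the average of these samples looks healthy; only a percentile view exposes the outlier that an availability-only SLA would never surface.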

3. Utilize Industry Resources: There are resources available to help companies better assess if the source of a performance problem lies with them or with a cloud service provider, as well as the likely performance impact on customers. As an example, Compuware's Outage Analyzer is a free new generation performance analytics solution that tracks Internet web service outages, including cloud service outages, in real-time around the world. Outage Analyzer provides instant insight into the performance of thousands of cloud services and the resulting impact on the websites they service. Resources like this may not prevent cloud service outages from happening, but they can help companies better understand the source of performance problems so they can get in front of them more confidently and efficiently.

Conclusion: Cloud Computing Is the "New Normal"
Like it or not, cloud computing is here to stay, and its adoption will only accelerate further in the years to come. In many ways, the move to the cloud is reminiscent of the adoption of Linux. At one time, IT administrators had significant concerns about Linux, including its scalability and reliability. But sure enough, businesses continued their adoption of Linux, propelled largely by the promise of lower costs and greater efficiencies. Today, Linux is a well-integrated component of corporate data centers worldwide.

In reality, neither IT nor the business is wrong when it comes to their strong opinions on adopting the cloud for mission-critical applications. Ultimately, both sides share the same goal: maximizing the company's revenues and profits. The two teams simply approach the problem differently: IT emphasizes application performance as a means of driving productivity and conversions, while business leaders look to increase cash flow, seek the greatest return on capital investments and lower operating expenses.

The move to the cloud can be a very good thing for today's enterprises. It's also a good thing to be cloud-wary, and this is where the business will ultimately depend on IT to be vigilant. By paying due attention to performance issues, the transition to the cloud offers IT teams an excellent opportunity to emerge as protectors of their organizations, thus maximizing return on cloud investments.

More Stories By Ronald Miller

For almost a decade, Ronald Miller has served in product marketing roles in the enterprise software, mobile and high-technology industries. In his current role managing Dynatrace’s go-to-market efforts, he is dedicated to helping Dynatrace customers get the most performance and ROI from their applications. In his spare time he enjoys being an amateur judge of the best BBQ in Austin, Texas. You can tweet him at @RonaldMiller.

