How to Compare Hosting Companies’ Speed & Reliability

In an 'always-on' world, there’s never a time when someone isn't surfing through your website

What do you look for when you choose a Web hosting provider? These days, it can be difficult to compare services, whether you're talking about supported languages, databases or bandwidth. You might be tempted to pick the cheapest provider and plan in hopes of saving a few dollars. However, you should not overlook the importance of speed and reliability.

Take a small business, for example - if its website is down or under-performing, the host is actually hurting the business. Even if the hosting were $1 per month, losing $100 in revenue because of unreliable performance means the customer loses $101. For that kind of money, the business could have afforded to grow into a dedicated server.

For the past year, my Hosting Performance Monitoring team at GoDaddy has scrutinized our environment and made changes to ensure we are offering best-in-breed Web hosting. A reliable, stable and fast platform is the most important thing you can offer your customers. But that means more than just "Is the site up?" or "Is it fast?" You need to ask, "Is it up and fast all the time?"

In an "always-on" world, there's never a time when someone isn't surfing through your Website. Therefore, you want to double check that your Web hosting provider is fast and has great uptime, consistently. Outlined below is our tried and true performance and reliability measurement method, and a sample of results from studies that we've conducted.

The Hosting Reliability Measurement Method

1. Identify Your Providers
Make a list of the hosting companies you want to compare. It can be two for a head-to-head comparison, or hundreds to get an understanding of the entire industry.

2. Get Accounts
Purchase a Web hosting account from each company. Depending on the provider, this step can be daunting - some companies' sites make it incredibly difficult. For example, certain providers asked us to email or fax in our driver's license or credit card to buy an account.

3. Deploy the WordPress Sites
Load a cloned WordPress site to each hosting provider. This site should be completely single-sourced, which means the site only loads from its own resources, i.e. it doesn't reference any third-party scripts, images, etc. This makes the tests purely about the server's performance.

4. Audit
After setting up all the sites, make sure they're identical before you start testing. One way to do this is through webpagetest.org. Run each of your sites through it, compare the Bytes In and Requests numbers, and make sure there's only one domain listed in the Domains tab. The sketch below shows one way to automate that single-domain check.
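If you'd rather script this audit than eyeball it, here's a minimal sketch, assuming Python 3 with only the standard library; the URL and helper names are illustrative, not part of any tool mentioned above. It fetches a page, resolves every script, img and link reference, and reports anything served from a foreign domain:

```python
# Minimal single-source audit: flag resources loaded from any domain
# other than the site's own. Python 3 standard library only; the URL
# below is a placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class ResourceCollector(HTMLParser):
    """Collects src/href URLs from script, img, and link tags."""
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("script", "img") and attrs.get("src"):
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("href"):
            self.resources.append(attrs["href"])

def off_domain_resources(page_url):
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    collector = ResourceCollector()
    collector.feed(html)
    site = urlparse(page_url).netloc
    # Resolve relative references, then keep only foreign domains.
    absolute = (urljoin(page_url, ref) for ref in collector.resources)
    return [u for u in absolute if urlparse(u).netloc != site]

if __name__ == "__main__":
    leaks = off_domain_resources("http://example.com/")  # placeholder
    print(leaks or "No third-party resources - site is single-sourced.")
```

An empty result here should line up with a single entry in webpagetest.org's Domains tab.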

5. Measure
There are plenty of measurement techniques, and you can pick any combination you see fit. The two we used in the study below are Pingdom, which checks a site's response time and uptime from the outside on a fixed schedule, and Gomez, which measures full-page load times from backbone test nodes.

Once you have all of the data on file, you can share it with the world! If you want to extend your observations over the longer term, you can use the APIs from the testing services to pull fresh data automatically on a daily, weekly or monthly basis. If you'd rather roll your own, a bare-bones monitor is sketched below.
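For illustration, here is a do-it-yourself monitor, a sketch only, not how Pingdom or Gomez work internally; the URL and all parameter values are placeholders. It polls a site on a fixed interval and reports uptime percentage and average response time, the same two numbers used in the comparison below:

```python
# Bare-bones availability monitor: poll a URL on a fixed interval,
# then report uptime percentage and average response time.
import time
from urllib.error import URLError
from urllib.request import urlopen

def poll(url, timeout=10):
    """One check: return (reachable, elapsed_seconds)."""
    start = time.monotonic()
    try:
        with urlopen(url, timeout=timeout) as resp:
            ok = resp.status < 400  # 4xx/5xx raise HTTPError anyway
    except (URLError, OSError):
        ok = False
    return ok, time.monotonic() - start

def monitor(url, checks=5, interval_seconds=60):
    results = []
    for _ in range(checks):
        results.append(poll(url))
        time.sleep(interval_seconds)
    up = [elapsed for ok, elapsed in results if ok]
    uptime_pct = 100.0 * len(up) / len(results)
    avg_response = sum(up) / len(up) if up else float("nan")
    return uptime_pct, avg_response

if __name__ == "__main__":
    # Placeholder URL; a real study polls for weeks, not minutes.
    print(monitor("http://example.com/", checks=5, interval_seconds=60))
```

A hosted service is still preferable for a published study, since it checks from many locations and keeps an independent record, but a script like this is enough to sanity-check a provider before you commit.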

Here's a sample output from our own trials using two of the techniques we've outlined here. It compares GoDaddy cPanel to six of its closest competitors from January 1, 2014 through March 1, 2014. The competitors listed here (A through F) are real competitors.

[Charts: Pingdom Response Time* and Pingdom Downtime*, Jan 1, 2014 through Mar 1, 2014]

*Disclaimer: based on one site per product. It is not necessarily representative of the provider's product as a whole.

Analysis: In 60 days, 99.9% uptime means roughly 90 minutes of downtime. Anything more is unacceptable and likely violates the provider's uptime guarantee.
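The conversion is straightforward arithmetic; this quick sketch reproduces the ~90-minute figure (86.4 minutes, to be exact):

```python
# Downtime allowed by a given uptime percentage over a window of days.
def allowed_downtime_minutes(uptime_pct, days):
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

print(allowed_downtime_minutes(99.9, 60))   # ~86.4 minutes, roughly 90
print(allowed_downtime_minutes(99.99, 60))  # ~8.6 minutes
```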

Competitors C and D had respectable response-time averages at 0.804 seconds and 0.857 seconds, but should be disqualified for falling below 99.9% uptime, with 5,853 minutes (that's 4d 1h 33m) and 199 minutes of downtime, respectively. No matter how fast a host is, this much downtime should not be tolerated.

GoDaddy cPanel performed exceptionally well, with the lowest response times and better than 99.9% uptime. GoDaddy cPanel is the clear performance winner in this 60-day study.

Gomez Results

[Chart: Gomez full-page load times*, Jan 1, 2014 through Mar 1, 2014]

*Disclaimer: based on one site per product.  It is not necessarily representative of the provider's product as a whole.

Analysis: This is full-page load time; in this case, 15 page objects (i.e., CSS, JS, images) totaling approximately 750 KB. Gomez test nodes sit on high-bandwidth connections at the edge of their networks in top-tier data centers. This is not the typical home user on Wi-Fi sitting 100 feet away through four walls; these connections are fast. Gomez nodes still have latency to contend with, however, so whichever Gomez node location is fastest for a provider is very likely the node closest to that provider.

GoDaddy cPanel averaged under 1.0s, at 0.743s, throughout the 60-day period. Competitor D was closest at 1.001s.

Conclusion
There are very few, if any, Web hosting performance studies available to help consumers make the right choice. We urge industry review analysts to adopt the method described in this article because we believe it provides a comprehensive view of how hosting companies perform. It's straightforward, too. Just set up a cloned WordPress site on a few different hosts, and then use a tool like Pingdom to monitor performance.

If we can get trusted, third-party sources to publish information like this on a continual basis, customers will have all of the information they need to make informed decisions.

More Stories By David Koopman

David Koopman is Principal Engineer, Hosting Infrastructure Performance Engineering Team at GoDaddy. He leads the Hosting Performance Team at GoDaddy and is responsible for measuring and monitoring performance of thousands of servers, hosting millions of websites. His team works closely with product and infrastructure teams to ensure consistent performance across the GoDaddy web hosting product line.

Since joining GoDaddy in 2002 as a software developer, David has helped transform the company’s Web-based email product into a multi-million account operation. During his tenure at GoDaddy, he has held several development positions including Dedicated and VPS Hosting Development Manager, Architect, Sr. VP of Product Development, Chief Technology Officer, Chief Scientist and Principal Engineer.

Prior to joining GoDaddy, David was the Technical Director of The Web Mark, a medical Internet services company. He attained a BS in Computer Science from Southwest Missouri State University and is a certified Project Management Professional (PMP).

When not developing new product ideas at GoDaddy, David enjoys spending time with his family, skiing, off-roading and boating.


