
Why Obama Administration Should Have Paid More Attention to Load Testing

What needs to be understood here is that it’s important to test early and often

October 1, 2013, was the most anticipated date for the Obama administration since the president's re-election. It was to be the day every American would have access to health care through one centralized website. Yet according to at least one report, only six people enrolled in Obamacare on the first day. Shortly afterward, the entire website crashed, along with its supporting infrastructure.

The massive crash happened because HealthCare.gov received over 14.6 million unique views within the first 10 days of launch, a volume neither the Obama administration nor its testers were prepared for.

The website should have been able to handle tens of thousands of people at once, but in a trial run before the launch, a mere 500 users caused it to crash. In testimony before the U.S. Congress, the contractors responsible for HealthCare.gov said they didn't have enough time to fully test the website. The failure to properly load test the site well before the October 1 launch date led to one of the worst federal website debacles of all time.

What Went Wrong
The HealthCare.gov website was designed to give Americans a simple, one-stop shop for health insurance, but as we all know, it didn't turn out that way.

The site was built by 55 contractors and is considered one of the most complex software projects ever undertaken for the federal government, which might be where the problems started.

According to Louis Woodhill, a contributor to Forbes magazine, the Obamacare website is comparable to the Soviet Union. "In their effort to build an IT system to implement Obamacare, the U.S. Department of Health and Human Services was trying to do the same thing as the USSR's Gosplan agency: elicit coordinated, purposeful action from a collection of entities that don't know each other, don't trust each other, have conflicting objectives, and face diverging incentives."

Mixing contractors wasn't the only issue; the Obama administration went on to make a series of rookie mistakes that led to the website's demise.

Incorrectly Assessing User Behavior. First, the administrators in charge of the website decided in late September to exclude the feature that would let people shop for health plans before registering for an online account. This led to a bottleneck, because far more people than expected had to complete registration before they could even browse plans.

Broken Systems Integration. Second, the registration process itself was flawed. The consumer was supposed to enter basic account information, a security question, and so on, but the communication between the systems responsible for storing this information wasn't working properly. As a result, thousands of users were unable to create an account.

Rebuilding Components from Scratch When Proven Systems Were Available. Last, the Data Services Hub, a proven identity service available to the government for consumer applications, was surprisingly not used to its full extent. Instead, the website builders created new software systems meant to do exactly the same thing. An article in Mashable argues that if HealthCare.gov had fully leveraged the Data Hub, the launch wouldn't have been such a mess.

Whatever weight you give each of these missteps and rookie mistakes, what is clear is that HealthCare.gov was overwhelmed by the number of visitors hitting a single site.

Why the Government Should Have Made Load Testing a Priority
It seems those responsible for deploying the site didn't really appreciate the importance of load testing, which is especially surprising when you consider that the website had already failed a pre-launch load test miserably. Of course, politics came into play: the deadline for the website was non-negotiable. But with all the red flags warning of failure, load testing should have played a much more critical role, and here's why:

Prioritization of Problems and Fixes
A big issue with HealthCare.gov was that the contractors claimed they didn't have enough time and felt extreme pressure to roll out the website before it was properly tested. If load testing had happened earlier in development, testers could have identified the parts of the website that were not working properly.

The major pain point in the entire HealthCare.gov website was the registration process that millions of Americans attempted to complete. Had the team load tested the site months ahead of the launch, they could have identified the root causes of performance issues and determined whether they lay in the application code or in the app servers and infrastructure components.
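
To make that concrete, here is a minimal load-test sketch in Python. The /register URL is a hypothetical placeholder rather than anything from the real project, and the script uses only the standard library; a serious effort would use a dedicated load-testing tool, but the principle is the same: ramp up concurrent users in steps and watch where latency and error rates spike.

import concurrent.futures
import statistics
import time
import urllib.request

# Hypothetical registration endpoint, used purely for illustration.
BASE_URL = "https://example.com/register"

def one_request(url):
    """Issue a single GET and return (elapsed_seconds, succeeded)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = 200 <= resp.status < 400
    except OSError:  # URLError, HTTPError, and socket timeouts are all OSError subclasses
        ok = False
    return time.perf_counter() - start, ok

def run_step(concurrency, requests_per_step=100):
    """Fire a fixed number of requests at a given concurrency and report latency/error stats."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, [BASE_URL] * requests_per_step))
    latencies = [elapsed for elapsed, _ in results]
    errors = sum(1 for _, ok in results if not ok)
    print(f"{concurrency:>4} concurrent users: "
          f"median={statistics.median(latencies):.2f}s "
          f"max={max(latencies):.2f}s "
          f"errors={errors}/{len(results)}")

if __name__ == "__main__":
    # Ramp the load step by step; the step where latency or errors jump
    # is where the root-cause investigation starts.
    for users in (10, 50, 100, 250, 500):
        run_step(users)

The step at which response times or error counts jump shows whether the site can even approach its expected load, and gives testers a concrete starting point for deciding whether the bottleneck sits in the application code or in the servers and infrastructure underneath it.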

Earlier Identification of Issues

[Chart: relative cost of fixing a defect at each stage of development]

This chart illustrates how much it costs the paying client to fix a bug at each stage of development. A bug found in the operation stage can cost more than 150 times as much as one caught in the requirements stage.

Had the testers broken their work down into smaller test cases, the administration might have taken the time to listen and understand that these smaller bugs needed to be fixed before the public launch.
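
As a sketch of what "smaller test cases" might have looked like, the following Python checks give each stage of the sign-up flow its own pass/fail signal. The host and endpoint paths are hypothetical, not taken from the actual project; the point is simply that a failure in one small check names a specific component instead of a vague "the site is broken."

import unittest
import urllib.error
import urllib.request

# Hypothetical host and paths, used purely for illustration.
BASE = "https://example.com"

def get_status(path):
    """Return the HTTP status code for a GET on the given path."""
    try:
        with urllib.request.urlopen(BASE + path, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

class SignupFlowSmokeTests(unittest.TestCase):
    def test_registration_page_loads(self):
        self.assertEqual(get_status("/register"), 200)

    def test_security_question_service_responds(self):
        self.assertEqual(get_status("/account/security-questions"), 200)

    def test_plan_browsing_works_without_an_account(self):
        self.assertEqual(get_status("/plans"), 200)

if __name__ == "__main__":
    unittest.main()

Run regularly, a suite like this produces a short, readable list of which pieces are healthy and which are not, which is far easier for decision-makers to act on than a single failed end-to-end run.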

Decisions Made from Intelligence on the Ground
We know the tension between testers and business owners can be pretty intense. The funders of the website want it up and running right away, but testers want to properly identify errors and have enough time to fix the issues that arise.

The administration decided to completely ignore the classic project management triangle.

The only way to increase the scope of a project without changing the due date would be to add more resources. Since the administration was rigid on all three sides of the triangle, the quality of the website suffered.

It's no wonder this website failed. The dynamics between the testers and heads of HealthCare.gov were strained, and it appeared the Obama administration chose to ignore testers who knew the website was not ready.

HealthCare.gov Today
The HealthCare.gov website isn't out of the woods just yet. According to The Washington Post, over 22,000 people have flagged errors the system made when they were signing up for a new federally mandated health care plan and are trying to get them corrected.

Apparently, federal workers aren't able to access consumer data manually. "An unknown number of customers who are trying to get help through less formal means - by calling the health care marketplace directly - are told that HealthCare.gov's computer system isn't yet allowing federal workers to go into enrollment records and change them."

What needs to be understood here is that it's important to test early and often. Had tests been conducted throughout the entire development of the website, the Obama administration could have avoided such an embarrassing and reputation-tarnishing event.

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.


