Brace yourself, HTTP/2 is coming

HTTP/2: Changes, Challenges and Considerations for Load & Performance Testers

Approved this February by the Internet Engineering Task Force (IETF), HTTP/2 is the first major update to HTTP since 1999, when HTTP/1.1 was standardized. Designed with performance in mind, one of the biggest goals of HTTP/2 is to decrease latency while maintaining a high level of compatibility with HTTP/1.1. Though not all testing activities will be impacted by the new protocol, it's important for testers to be aware of the changes moving forward.

That being said, in this post we'd like to highlight some HTTP/2 changes, challenges and considerations as they relate to load and performance testing.

HTTP/2 Changes
Because of its compatibility with HTTP/1.1, HTTP/2 will not change the way existing websites or applications function. In fact, not only will your application code and HTTP APIs continue to work properly, but your applications will also likely perform better and consume fewer client- and server-side resources. But what does this mean for performance testers?

As we mentioned previously, the implementation of HTTP/2 will not affect all testing activities. Automated software testing, for example, won't be impacted as long as the testing solution supports HTTP/2; in other words, if the browser supports the new protocol, there won't be an issue.

From the moment a team begins to engage in protocol-based testing, however, HTTP/2 becomes a challenge. Testers will need a tool that records and supports the new protocol. Performance engineers will still have to give recommendations in terms of server configuration and architecture, which may be tricky unless there is a comprehensive understanding of each web layer (proxies, web server, caching server, load balancer, etc.) impacted by HTTP/2 implementation.
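
A quick way to see what a tool is up against is to check which protocol a given server actually negotiates. The sketch below is not from the article; it assumes Python with the httpx library installed with its HTTP/2 extra (pip install "httpx[http2]"), and the URL is a placeholder.

```python
# Minimal sketch: check which HTTP version a server negotiates with the client.
# Assumes: pip install "httpx[http2]"; the URL below is a placeholder.
import httpx

def negotiated_protocol(url: str) -> str:
    # http2=True only *offers* HTTP/2 during the TLS handshake (via ALPN);
    # the server can still fall back to HTTP/1.1.
    with httpx.Client(http2=True) as client:
        response = client.get(url)
    return response.http_version  # "HTTP/2" or "HTTP/1.1"

if __name__ == "__main__":
    print(negotiated_protocol("https://example.com"))
```

A recording or load tool that can't make, and report, this distinction will struggle once HTTP/2 enters the picture.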

While we'll have to wait for the first real benchmarks to gain a clear understanding of the technical behavior of each module, new modules are already available in beta releases, so now is the time to experiment with them and learn how to adjust their settings.

HTTP/2 Challenges
Keep in mind, although HTTP/2 is built to decrease latency in order to improve page load speed in web browsers, this protocol will not guarantee good performance. It is in no way an excuse to skip load and performance testing.

As HTTP/2 becomes more prevalent, one of the largest challenges testing teams will face is the doubling up of test cases. Moving forward, most systems will support both HTTP/1.1 and HTTP/2. For performance testers, this means creating extra scenarios: teams will need to generate load from users with browsers that do not support HTTP/2 as well as from users whose browsers do. These tests will be more complex and will result in more work for load and performance testers, since teams will have to validate that their project's web layers support the load of both protocols. Some companies will deploy HTTP gateways to avoid the extra management burden of running both protocols, but in those cases the gateways themselves will need to be tested to minimize the latency they introduce.
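
As a rough illustration of what those extra scenarios look like, here is a hedged sketch, again assuming Python and httpx[http2] rather than any particular load-testing product, that drives two small user populations, one pinned to HTTP/1.1 and one offering HTTP/2, against a placeholder target.

```python
# Sketch only: a mixed HTTP/1.1 + HTTP/2 load, not a replacement for a real load tool.
# Assumes: pip install "httpx[http2]"; TARGET is a placeholder.
import asyncio
import httpx

TARGET = "https://example.com"

async def virtual_user(http2_enabled: bool, iterations: int) -> list[float]:
    timings = []
    async with httpx.AsyncClient(http2=http2_enabled) as client:
        for _ in range(iterations):
            response = await client.get(TARGET)
            # response.elapsed is populated once the response has been read and closed.
            timings.append(response.elapsed.total_seconds())
    return timings

async def run_mixed_load(h1_users: int = 10, h2_users: int = 10) -> None:
    tasks = [virtual_user(False, 5) for _ in range(h1_users)]
    tasks += [virtual_user(True, 5) for _ in range(h2_users)]
    per_user = await asyncio.gather(*tasks)
    timings = [t for user_timings in per_user for t in user_timings]
    print(f"{len(timings)} requests, mean response time {sum(timings) / len(timings):.3f}s")

if __name__ == "__main__":
    asyncio.run(run_mixed_load())
```

In practice the two populations would be reported separately, so that each protocol's behavior can be validated against its own baseline.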

Of course, HTTP/2 also comes with a cost. Companies will need to bring on high-level experts who can upgrade web servers, vendors will need to update their own equipment, and all of this will need to be tested. Load and performance testers will need to understand the new protocol and each vendor's modules to properly configure the architecture. There will be a tricky, expensive learning curve, but as the first engineers get hands-on with HTTP/2, their knowledge and expertise should help flatten it.

HTTP/2 Considerations
Because so little is known about the widespread effects HTTP/2 may eventually have, there are still several considerations and questions floating about that, when addressed, may change the game for load and performance testers.

Timing is a huge concern. How long will it take before every browser supports the new protocol? Sources assert that HTTP/1.1 will be around for at least another decade, and most servers and clients will have to support both the HTTP/1.1 and HTTP/2 standards. This means twice as many test cases, resulting in a more time-consuming, expensive and labor-intensive testing process.
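
The doubling is easy to picture with a parameterized test. The snippet below is only an illustration under assumed tooling (pytest plus httpx[http2]); the URL and the assertion are placeholders.

```python
# Illustration of the "twice as many test cases" point: the same check runs once
# per protocol. Assumes pytest and "httpx[http2]"; BASE_URL is a placeholder.
import httpx
import pytest

BASE_URL = "https://example.com"

@pytest.mark.parametrize("http2", [False, True], ids=["http1.1", "http2"])
def test_homepage_responds(http2):
    with httpx.Client(http2=http2) as client:
        response = client.get(BASE_URL)
    assert response.status_code == 200
```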

Before the testing process even begins, however, testers will need to determine an ideal response time with HTTP/2. But how will we measure the response time to download resources pushed by the server? Historically, we've measured response time as the time between the client sending a request to the server and receiving the requested data. With HTTP/2 server push, this logic won't make sense anymore.
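
For reference, the "historical" measurement the paragraph describes can be sketched as follows (Python with httpx[http2] assumed; this is not the article's methodology). It times a single request/response exchange, which is exactly the model that server push breaks: pushed resources arrive without a matching client request to start the clock.

```python
# Sketch of the classic request-scoped response time: the clock starts when the
# request is issued and stops when the full body has been received. Pushed
# resources have no corresponding client request, so this measurement cannot
# account for them.
import time
import httpx

def classic_response_time(client: httpx.Client, url: str) -> float:
    start = time.perf_counter()
    client.get(url)  # request sent, full response body read
    return time.perf_counter() - start

if __name__ == "__main__":
    with httpx.Client(http2=True) as client:
        print(f"{classic_response_time(client, 'https://example.com'):.3f}s")
```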

Regardless, at some point, new benchmarks and limits must be set in order to establish a consistent, industry-wide standard. Already, big Internet players like Microsoft and Twitter have implemented the protocol and they, along with other early adopters, will begin to shape and mold these standards.

HTTP/2: No Pain, No Gain
Though it may cause load and performance testing teams some grief in terms of cost and workload, HTTP/2 will ultimately make our applications faster, simpler and more robust by offering new opportunities to optimize apps and improve performance.

Moving forward, testers should work with application architects and developers to understand when their own application will support HTTP/2, because that support will need to be tested. As browser support for HTTP/2 becomes more widespread, testers will also need to know which browsers have been optimized for the protocol in order to include them in their test plans.

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
