What’s Killing Software Testers on Halloween? By @Parasoft | @CloudExpo

On Halloween day, let's take a quick look at some of the top things that are killing software testers...

Accelerated Release Cycles

In response to today's demand for speed and "Continuous Everything," the software delivery conveyor belt keeps moving faster and faster. Considering that software testing has long been a thorn in the side of the software delivery process, it's unreasonable to expect that simply speeding up an already-troubled quality process will achieve the desired results. (I Love Lucy fans: just think of Lucy and Ethel at the candy factory, struggling to keep pace as the conveyor belt spits out chocolates faster and faster.)

If there's never enough time allotted for testing, it's probably a sign that your organization needs to reassess its culture around building and testing software. In most organizations, quality software is clearly the intention, yet the prevailing culture yields trade-off decisions that significantly increase the risk of releasing faulty software to the market.

Poor Quality Code From Development

Testers are hired to perform advanced testing, not to chase defects stemming from simple development mistakes that could (and should) have been caught during implementation. If the development team consistently applies development testing practices such as unit testing, static analysis, and peer code review to ensure that code meets expectations before it progresses to QA, it reduces the number of avoidable defects that QA has to spend time identifying, reporting, and later re-verifying. This not only increases the team's overall velocity, but also allows testers to concentrate on the already-daunting task of developing and executing their test plans within the very limited time available.
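
As a minimal illustration of catching simple mistakes during implementation, here is a unit test sketch using Python's built-in unittest framework. The apply_discount function and its rules are hypothetical, invented for this example; the point is that boundary and error cases get checked by the developer before the code ever reaches QA.

    import unittest


    def apply_discount(price, percent):
        """Return the price after a percentage discount (hypothetical example)."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)


    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200.00, 25), 150.00)

        def test_boundary_values(self):
            # Boundary cases are where simple implementation mistakes tend to hide.
            self.assertEqual(apply_discount(99.99, 0), 99.99)
            self.assertEqual(apply_discount(99.99, 100), 0.00)

        def test_rejects_invalid_percent(self):
            with self.assertRaises(ValueError):
                apply_discount(50.00, 120)


    if __name__ == "__main__":
        unittest.main()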

Realistic Test Data

Access to realistic test data significantly improves the effectiveness of a test suite. Good test data and test data management practices increase coverage as well as reduce risk. However, developing or accessing test data can be a considerable challenge in terms of time, effort, and compliance. Copying production data can be risky (and potentially illegal). Asking database administrators to provide the necessary data is typically fraught with delays. Moreover, burdening dev or QA with this task pushes team members beyond their core competencies, potentially delaying other aspects of the project for what might be imprecise or incomplete results.
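
One lower-risk alternative some teams use is synthetic test data. Below is a minimal sketch, using only Python's standard library, that fabricates customer-like records with realistic shapes but no production values. The field names and value ranges are hypothetical placeholders; a real project would derive them from its own schema, compliance rules, and coverage goals.

    import csv
    import random
    import uuid

    # Hypothetical schema: adjust field names and ranges to match your own system.
    FIRST_NAMES = ["Ada", "Grace", "Alan", "Linus", "Margaret"]
    PLANS = ["free", "standard", "premium"]


    def make_customer():
        """Build one synthetic customer record with no ties to production data."""
        return {
            "customer_id": str(uuid.uuid4()),
            "first_name": random.choice(FIRST_NAMES),
            "plan": random.choice(PLANS),
            "monthly_spend": round(random.uniform(0, 500), 2),
            "active": random.random() < 0.8,  # roughly 80% of accounts active
        }


    if __name__ == "__main__":
        random.seed(42)  # a fixed seed keeps test failures reproducible
        rows = [make_customer() for _ in range(100)]
        with open("synthetic_customers.csv", "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=rows[0].keys())
            writer.writeheader()
            writer.writerows(rows)
        print("wrote", len(rows), "synthetic customer records")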

Some organizations have found that simulation technologies such as service virtualization reduce the horror of test data management.

Access to a Complete Test Environment

When an application depends on multiple other systems, a complete and realistic test environment is nearly impossible to stage. Developers, QA testers, and performance engineers commonly face:

  • Systems that are impractical or too complex for test labs
  • Divisional and political boundaries that limit access to resources
  • Inaccessible third-party/partner systems and services
  • Scheduling constraints that limit testing to inadequate, inconvenient windows
  • Missing/unstable components
  • Evolving development environments

Attempting to resolve test environment access constraints by building out a staged test environment or virtual test lab can be extraordinarily expensive. In many situations it is technically impossible: for example, when the dependent application is a third-party system, a complex system (such as a mainframe) hosted by another division, or an application beyond the "geo-political" boundaries of the group executing the tests. Even when building a "complete" test environment is feasible, configuring and maintaining all the dependent applications carries a high ongoing operational cost.

The unfortunate impact: testers can't test. Recent studies show that 64% of testers currently spend little to no time creating automated tests and only 50% of expected test plans are completed due to test environment access constraints.

If you're trying to escape this tester-killer, service virtualization might provide a safe refuge.
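
To make that concrete, here is a bare-bones sketch of the idea behind service virtualization: a stand-in HTTP service that mimics an unavailable or off-limits dependency so testing can continue. It uses only Python's standard library, and the endpoint path, port, and canned payload are hypothetical; commercial service virtualization tools layer recording, data-driven responses, and performance shaping on top of this basic pattern.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical canned response standing in for an unreachable partner service.
    CANNED_QUOTE = {"quote_id": "Q-1001", "premium": 123.45, "currency": "USD"}


    class VirtualPartnerService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve a predictable response for the endpoint the application under
            # test expects; anything else gets a 404, as a real API might return.
            if self.path == "/partner/quote":
                body = json.dumps(CANNED_QUOTE).encode("utf-8")
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)


    if __name__ == "__main__":
        # Point the application under test at http://localhost:8080 instead of
        # the real partner system (for example, via a configuration setting or
        # an environment variable) and testing can proceed without the dependency.
        HTTPServer(("localhost", 8080), VirtualPartnerService).serve_forever()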

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Content Strategist/Writer at Tricentis, writes about software testing and the SDLC, specializing in continuous testing, functional/API testing, DevOps, Agile, and service virtualization. She has written articles for publications including SD Times, Stickyminds, InfoQ, ComputerWorld, IEEE Computer, and Dr. Dobb's Journal. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
