SOA World - Approaching SOA Testing

Even testing needs testing

So, does testing change with SOA? You bet it does. Unless you act now, you may find yourself behind the curve as SOA becomes systemic to enterprise architecture and we add more complexity to reach an agile and reusable state.

If you're willing to take the risk, the return on your SOA investment will come back threefold...that is, if it is a well-tested SOA. An untested SOA could cost you millions.

Truth be told, testing SOAs is a complex, distributed computing problem. You have to learn how to isolate, check, and integrate, assuring that things work at the service, persistence, and process layers. The foundation of SOA testing is selecting the right tool for the job, having a well-thought-out plan, and sparing no expense in testing cycles; otherwise you risk that your SOA will lay an egg and have no credibility.

Organizations are beginning to roll out their first instances of SOA, typically as smaller projects. While many are working fine, some are not living up to expectations due to quality issues that could have been prevented with adequate testing. You need to take these lessons, hard-learned by others, and make sure that testing is on your priority list if you're diving into SOA.

How Do You Test Architecture?
The answer is, you don't. Instead, you learn how to break the architecture down into its component parts, working from the most primitive to the most sophisticated, testing each component and then the integration of the holistic architecture. In other words, you have to divide the architecture into domains, such as services, security, and governance, and test each domain using whatever approach and tools are indicated. If this sounds complex, it is. Indeed, the notion of SOA is one of loosely coupled, complex interdependence, and so the approach to testing must follow the same patterns.

Before we can properly approach SOA testing, it's best to first understand the nature of the concept of SOA, and its component parts. There are many other references about the notion of SOA, so I won't dwell on it here. However, it's the foundation of the approaches and techniques you'll employ to test this architecture. SOA, simply put, is best defined thus:

SOA is a strategic framework of technology that allows all interested systems, inside and outside of an organization, to expose and access well-defined services, and information bound to those services, that may be further abstracted to orchestration layers and composite applications for solution development.

The primary benefits of a SOA, and so the objectives of a test plan, include:

  • Reuse of services, or the ability to leverage application behavior from application to application without a significant amount of re-coding or integration.
  • Agility, or the ability to change business processes on top of existing services and information flows, quickly, and as needed to support a changing business.
  • Monitoring, or the ability to monitor points of information and points of service, in real-time, to determine the well-being of an enterprise or trading community, and the ability to change or adjust processes for the benefit of the organization in real-time.
  • Extended reach, or the ability to expose certain enterprise processes to other external entities for the purpose of inter-enterprise collaboration or shared processes.
What is unique about a SOA is that it's as much a strategy as a set of technologies, and it's really more of a journey than a destination. Moreover, it's a notion that is not dependent on any specific technology or standard, such as Web Services, but really requires many different types of technologies and standards for a complete SOA. All of these must be tested.

Figure 1 represents a model of the SOA components and how they're interrelated. What's key here is that those creating the test plan have both a macro understanding of how all the components work together and an understanding of how each component exists unto itself, along with the best approach to testing those components.

You can group the testing domains for SOA into these major categories:

  • Service-Level Testing
  • Security-Level Testing
  • Orchestration-Level Testing
  • Governance-Level Testing
  • Integration-Level Testing
I'm going to focus more on service-level testing, since it's critical to SOA. Note that the categories or domains you choose to test in your architecture may differ due to the specific requirements of your project. Moreover, there are other areas that need attention as well, including quality assurance for the code, performance testing, and auditing.

Service-Level Testing
In the world of SOA, services are the building blocks, and are found at the lowest level of the stack. Services become the base of a SOA, and while some are abstracted existing "legacy services," others are new and built for specific purposes. Moving up the stack, we then find composite services, or services made up of other services, and all services abstract up into the business process or orchestration layer, which provides the agile nature of a SOA since you can create and change solutions using a configuration metaphor. It's also noteworthy that, while most of the services tested in SOAs will be Web Service-based, it's still acceptable to build SOAs using services that leverage other enabling technologies such as CORBA, J2EE, and even proprietary approaches.
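
To make that layering concrete, here is a minimal sketch in Java of a composite service built from two base services. All of the names (CreditCheckService, AddressVerificationService, LoanApprovalService) and the cutoff score are hypothetical illustrations, not part of any particular SOA product; in a real SOA the base services would be exposed as Web Services rather than local interfaces.

    // Hypothetical base services -- in a real SOA these would be exposed
    // as Web Services (or CORBA/J2EE components), not local interfaces.
    interface CreditCheckService {
        int creditScore(String customerId);
    }

    interface AddressVerificationService {
        boolean addressIsValid(String customerId);
    }

    // A composite service: its behavior is entirely a function of the
    // base services it delegates to.
    class LoanApprovalService {
        private final CreditCheckService credit;
        private final AddressVerificationService address;

        LoanApprovalService(CreditCheckService credit,
                            AddressVerificationService address) {
            this.credit = credit;
            this.address = address;
        }

        boolean approve(String customerId) {
            // 650 is an invented cutoff, purely for illustration.
            return credit.creditScore(customerId) >= 650
                && address.addressIsValid(customerId);
        }
    }

Because the composite adds little logic of its own, testing it is mostly a matter of testing the base services first, then their integration.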

When testing services, you need to keep the following in mind:
Services are not complete applications or systems; they are a small part of an application, and must be tested accordingly. Nor are they subsystems; they are small parts of subsystems as well. So you need to test them with a high degree of independence, meaning that the services are able to function properly both by themselves and as part of a cohesive system. Indeed, services are more analogous to traditional application functions in terms of design, and in how they are leveraged to form solutions, fine- or coarse-grained.

The best approach to testing services is to list the use cases for those services. At that point you can design testing approaches for each service, including test harnesses or the use of SOA testing tools (discussed later). You also have to consider any services that the service may employ, and test the result holistically as a single logical service. In some cases you may be testing a service that calls a service, which calls yet another service, where some of the services are developed and managed in house and some exist on remote systems that you don't control. All use cases and configurations must be considered, as in the harness sketched below.
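
As a sketch of what such a harness might look like, the JUnit tests below exercise two use cases of the hypothetical LoanApprovalService from the earlier example, substituting hand-rolled stubs for the downstream services it calls so that no remote system is needed. The stub values and customer ID are invented for illustration.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class LoanApprovalServiceTest {

        // Hand-rolled stubs stand in for the remote services, so each
        // use case can be exercised without any network dependency.
        private static CreditCheckService fixedScore(final int score) {
            return new CreditCheckService() {
                public int creditScore(String customerId) { return score; }
            };
        }

        private static AddressVerificationService fixedAddress(final boolean valid) {
            return new AddressVerificationService() {
                public boolean addressIsValid(String customerId) { return valid; }
            };
        }

        @Test
        public void approvesWhenScoreIsHighAndAddressIsValid() {
            LoanApprovalService service =
                new LoanApprovalService(fixedScore(700), fixedAddress(true));
            assertTrue(service.approve("customer-42"));
        }

        @Test
        public void rejectsWhenDownstreamCreditServiceReportsLowScore() {
            LoanApprovalService service =
                new LoanApprovalService(fixedScore(500), fixedAddress(true));
            assertFalse(service.approve("customer-42"));
        }
    }

The same pattern scales to the service-calls-a-service case: stub whatever you don't control, and test the chain you do control as one logical unit.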

Services should be tested with a high degree of autonomy. They should execute without dependencies, if at all possible, and be tested as independent units of code using a single design pattern that can fit into other systems employing many design patterns. While all services can't be all things to all containers, it's important to spend time understanding their foreseeable use and make sure those uses are built into the test cases.

Services should have the appropriate granularity. Don't make them too fine-grained or too coarse-grained; focus on the correct granularity for the purpose and use of the SOA. Here the issues related to testing are more along the lines of performance than anything else. Too finely grained, and services have a tendency to bog down due to the communications overhead required when dealing with so many services. Too coarsely grained, and they don't provide the proper autonomic value to support their reuse. You have to work with the service designer on this one, and a simple simulation like the one below can make the trade-off visible.
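
A rough way to see the performance side of the granularity trade-off is to simulate per-call overhead. The sketch below, with an invented 5 ms round-trip cost and invented customer data, compares three fine-grained calls against one coarse-grained call that returns the same information.

    // A toy illustration of communications overhead, assuming ~5 ms of
    // network round-trip cost per service call (an invented figure).
    public class GranularityDemo {
        static final long ROUND_TRIP_MS = 5;

        // Simulate crossing the network to a remote service.
        static String remoteCall(String result) throws InterruptedException {
            Thread.sleep(ROUND_TRIP_MS);
            return result;
        }

        public static void main(String[] args) throws InterruptedException {
            long start = System.nanoTime();
            // Fine-grained: three separate round trips for one logical query.
            String name = remoteCall("Ada Lovelace");
            String address = remoteCall("12 St James Square");
            String phone = remoteCall("555-0100");
            long fineMs = (System.nanoTime() - start) / 1_000_000;

            start = System.nanoTime();
            // Coarse-grained: one round trip returning the whole record.
            String record = remoteCall("Ada Lovelace|12 St James Square|555-0100");
            long coarseMs = (System.nanoTime() - start) / 1_000_000;

            System.out.println("Fine-grained (3 calls):  " + fineMs + " ms");
            System.out.println("Coarse-grained (1 call): " + coarseMs + " ms");
        }
    }

Multiply that per-call overhead by thousands of service invocations in a composite process, and the cost of getting granularity wrong becomes obvious.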


More Stories By David Linthicum

David Linthicum is the Chief Cloud Strategy Officer at Deloitte Consulting, and was just named the #1 cloud influencer via a recent major report by Apollo Research. He is a cloud computing thought leader, executive, consultant, author, and speaker. He has been a CTO five times for both public and private companies, and a CEO two times in the last 25 years.

Few individuals are true giants of cloud computing, but David's achievements, reputation, and stellar leadership have earned him a lofty position within the industry. It's not just that he is a top thought leader in the cloud computing universe, but he is often the visionary that the wider media invites to offer its readers, listeners, and viewers a peek inside the technology that is reshaping businesses every day.

With more than 13 books on computing, more than 5,000 published articles, more than 500 conference presentations and numerous appearances on radio and TV programs, he has spent the last 20 years leading, showing, and teaching businesses how to use resources more productively and innovate constantly. He has expanded the vision of both startups and established corporations as to what is possible and achievable.

David is a Gigaom research analyst and writes prolifically for InfoWorld as a cloud computing blogger. He is also a contributor to "IEEE Cloud Computing" and TechTarget's SearchCloud and SearchAWS, and is quoted in major business publications including Forbes, BusinessWeek, The Wall Street Journal, and the LA Times. David has appeared on NPR several times as a computing industry commentator, and does a weekly podcast on cloud computing.
