SOA Pattern of the Week (#4): Service Normalization

The "SOA Pattern of the Week" article series is comprised of original content and insights

Like data normalization, the Service Normalization pattern is intent on reducing redundancy and waste in order to avoid the governance burden associated with having to maintain and synchronize similar or duplicate bodies of service logic."

You can see it introduces the Pattern on our publisher page.

When designing data architectures, you can easily end up with different databases, or even different database tables, containing the same or similar data. This has been the root of many well-documented data maintenance and quality issues, which helped establish data normalization as a widely accepted data modeling best practice. On a fundamental level, the aim of data normalization is to reduce data redundancy to whatever extent possible. This forces any application that needs to use a specific type of data to access it in one location. Therefore, by eliminating data redundancy, data normalization also promotes data reuse.
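To make the analogy concrete, here is a minimal Java sketch (the class and field names are hypothetical, invented for this illustration rather than drawn from any particular system) contrasting an order record that duplicates its customer's shipping address with a normalized design that stores the address once and references it by key:

import java.util.HashMap;
import java.util.Map;

// Denormalized: every order carries its own copy of the shipping address,
// so a change of address must be repeated in each record.
class DenormalizedOrder {
    String orderId;
    String customerName;
    String shippingAddress; // duplicated across all of this customer's orders
}

// Normalized: the address is stored once and each order references it by
// customer ID, so every application reads the single authoritative copy.
class NormalizedOrders {
    private final Map<String, String> addressByCustomerId = new HashMap<>();
    private final Map<String, String> customerIdByOrderId = new HashMap<>();

    void recordOrder(String orderId, String customerId) {
        customerIdByOrderId.put(orderId, customerId);
    }

    void setAddress(String customerId, String address) {
        addressByCustomerId.put(customerId, address); // one update, visible everywhere
    }

    String shippingAddressFor(String orderId) {
        return addressByCustomerId.get(customerIdByOrderId.get(orderId));
    }
}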

Reusability is, of course, also a primary goal of service-orientation; so much so that one of its eight design principles (the Service Reusability principle) is dedicated solely to enabling this quality in services. Service Normalization is one of many patterns that support service reusability, but its goals go beyond that. Like data normalization, the Service Normalization pattern aims to reduce redundancy and waste in order to avoid the governance burden of having to maintain and synchronize similar or duplicate bodies of service logic.

To accomplish this, Service Normalization essentially draws lines in the sand that establish the boundaries of services so that they do not overlap. Unlike data normalization, Service Normalization is not limited to data. Its primary concern is the normalization of functional service boundaries. Therefore, you will usually find yourself applying this pattern during the service modeling stages, when services are first conceptualized.
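As a rough illustration (the contracts and types below are hypothetical, not taken from any real inventory), normalized functional boundaries mean that each capability is owned by exactly one service contract:

// Placeholder message types for the sketch.
class Invoice { }
class Customer { }

// Each capability appears in exactly one contract; the boundaries do not overlap.
interface InvoiceService {
    Invoice getInvoice(String invoiceId);    // invoice logic is owned here only
    void submitInvoice(Invoice invoice);
}

interface CustomerService {
    Customer getCustomer(String customerId); // customer logic is owned here only
    void updateCustomer(Customer customer);
    // Deliberately absent: a getInvoicesForCustomer(...) capability, which would
    // duplicate logic already owned by InvoiceService and denormalize the inventory.
}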

One of the most important aspects of understanding the practice of normalizing services is the actual scope or boundary within which the normalization effort is carried out. As we explained in the previous installment in this series, the Domain Inventory pattern enables you to establish multiple collections of independently standardized and governed services within the same IT enterprise. These service inventories (sometimes referred to as “continents of services”) correspond to domains that still allow you to achieve service-orientation goals to a meaningful extent.

A service inventory blueprint is also defined during the analysis and modeling stages, and the boundary of a given blueprint typically determines the scope at which Service Normalization is applied. This means that you are allowed to have overlapping service boundaries and redundant service logic, as long as the overlap occurs across domain service inventories, not within a given service inventory.

The rules established by Service Normalization make their way into service modeling processes and overall service delivery methodologies. Avoiding functional overlap becomes a constant consideration and often forms the basis of a dedicated process step (especially for modeling processes that are carried out iteratively). It is also one of those considerations that needs to be tracked and coordinated when you have different teams working in parallel to model services for the same service inventory.
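One way to picture that dedicated process step (a hypothetical sketch, not a prescribed tool) is a simple registry that flags any capability claimed by more than one service within the same inventory:

import java.util.HashMap;
import java.util.Map;

// Hypothetical overlap check for a service inventory blueprint: each
// capability may be claimed by at most one service.
class InventoryBlueprint {
    private final Map<String, String> ownerByCapability = new HashMap<>();

    // Returns null if the claim is new, or the name of the service that
    // already owns the capability (a functional overlap to resolve).
    String claim(String capability, String serviceName) {
        return ownerByCapability.putIfAbsent(capability, serviceName);
    }

    public static void main(String[] args) {
        InventoryBlueprint blueprint = new InventoryBlueprint();
        blueprint.claim("GetCustomerRecord", "CustomerService");
        String conflict = blueprint.claim("GetCustomerRecord", "MarketingService");
        if (conflict != null) {
            System.out.println("Overlap detected: capability already owned by " + conflict);
        }
    }
}

Parallel modeling teams could run this kind of check against a shared blueprint before committing new service candidates, which is exactly the coordination described above.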

Yet despite best efforts, functional overlap can still happen. Something may get missed within the service inventory blueprint, and services with similar capabilities are then inadvertently built. Or there may be hard constraints that prevent this pattern from being fully applied, such as when different services need to encapsulate legacy systems that themselves cannot be normalized. In this case, there may be embedded or entrenched logic that unavoidably leads to some degree of redundancy. Then, of course, there is the performance issue. You may run into a situation where delivering fully normalized services would impose unreasonable runtime latency, and the only way out is to intentionally design some measure of denormalization into the services.
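As a sketch of that trade-off (the service and event hook below are hypothetical), a service might keep a redundant local copy of data owned elsewhere, accepting the synchronization burden in exchange for lower runtime latency:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Deliberate denormalization: this service keeps a redundant local copy of
// customer names (data owned by a separate customer service) so that order
// lookups avoid a remote call, at the cost of keeping the copy synchronized.
class OrderLookupService {
    private final Map<String, String> customerNameCache = new ConcurrentHashMap<>();

    // Invoked by an update event (or a scheduled refresh) from the owning service.
    void onCustomerUpdated(String customerId, String customerName) {
        customerNameCache.put(customerId, customerName);
    }

    String describeOrder(String orderId, String customerId) {
        // Read the redundant copy locally for speed; it may be briefly stale.
        String name = customerNameCache.getOrDefault(customerId, "unknown customer");
        return "Order " + orderId + " placed by " + name;
    }
}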

While you can add a real-world twist and interpret this pattern as “Within a given service inventory, no two service boundaries can overlap, and if they do, there had better be a darn good reason for it!”, the point is that the overarching objective of Service Normalization is to establish a solid foundation in support of the many goals of service-orientation.

The SOA Pattern of the Week series comprises original content and insights provided to you courtesy of the authors and contributors of the SOAPatterns.org community site and the book “SOA Design Patterns” (Erl et al., ISBN: 0136135161, Prentice Hall, 2009), the latest title in the “Prentice Hall Service-Oriented Computing Series from Thomas Erl” (www.soabooks.com).

More Stories By Thomas Erl

Thomas Erl is a best-selling IT author and founder of Arcitura Education Inc., a global provider of vendor-neutral educational services and certification that encompasses the Cloud Certified Professional (CCP) and SOA Certified Professional (SOACP) programs from CloudSchool.com™ and SOASchool.com® respectively. Thomas has been the world's top-selling service technology author for nearly a decade and is the series editor of the Prentice Hall Service Technology Series from Thomas Erl, as well as the editor of the Service Technology Magazine. With over 175,000 copies in print worldwide, his eight published books have become international bestsellers and have been formally endorsed by senior members of many major IT organizations and academic institutions. To learn more, visit: www.thomaserl.com

More Stories By Herbjorn Wilhelmsen

Herbjörn Wilhelmsen is an Architect and Senior Consultant at Objectware in Stockholm, Sweden. His main focus areas include service-oriented architecture, Web services, and business architecture. Herbjörn has many years of industry experience working as a developer, development manager, architect, and teacher in several fields, including telecommunications, marketing, the payments industry, health care, and public services. He is active as an author in the Prentice Hall Service-Oriented Computing Series from Thomas Erl and has contributed design patterns to SOAPatterns.org. He leads the Business-to-IT group in the Swedish chapter of the International Association of Software Architects, which is performing a comparative study of a number of business architecture methodologies. Herbjörn holds a Bachelor of Science degree from Stockholm University.
