Model First, Service-Enable Next

The next few years will be less about new application development, and more about existing application composition and reuse.

Reducing the cost of IT management is one of the primary pressures for most organizations. One of the most common ways to reduce such costs is to enable the reuse of applications that developers have already created and configured for the enterprise. In the past decade, especially in the past 3-5 years, companies have spent millions of dollars on enterprise software applications of all sorts: CRM, ERP, and other operational applications. The next few years will be less about new application development, and more about existing application integration and reuse.

The Service-oriented approach helps solve the challenge of reuse by imposing a design methodology that promotes the use of self-describing, published, loosely coupled, and dynamically bound components rather than static, tightly coupled components. Reuse then becomes a matter of publishing available Web Services and designing the Services themselves so that they are not inadvertently tightly coupled. However, many companies believe that Services are nothing more than a standards-based API--basically just SOAP calls sent over HTTP.

Fundamentally, this belief is a far too limiting way of thinking about Services in the long term. Rather than thinking about Services as a "universal API," enterprises can realize the greatest ROI by thinking about application and system functionality as loosely coupled, abstracted components. In order to realize this vision, however, companies must get a better understanding of how to "componentize" their enterprises in a Services context.
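To make the distinction concrete, the following sketch (Java is used only for illustration; every name in it is a hypothetical assumption, not something from this article) contrasts a consumer that is statically bound to a concrete class with one that depends only on a published service contract and is bound to an implementation at runtime.

// Hypothetical sketch: static, tight coupling versus a published, dynamically bound contract.
// All names here are illustrative assumptions, not part of the original article.

// The published, self-describing contract a consumer binds to.
interface CreditCheckService {
    boolean isCreditworthy(String customerId, double amount);
}

// Tightly coupled style: the consumer hard-codes a concrete client at build time.
class TightlyCoupledOrderDesk {
    private final AcmeCreditCheckClient client = new AcmeCreditCheckClient(); // static binding
    boolean approve(String customerId, double amount) {
        return client.checkCredit(customerId, amount);
    }
}

// Service-oriented style: the consumer knows only the contract; the binding is resolved at runtime.
class ServiceOrientedOrderDesk {
    private final CreditCheckService creditCheck;
    ServiceOrientedOrderDesk(ServiceRegistry registry) {
        // Dynamic binding: the registry (UDDI, JNDI, a configuration file) decides which
        // implementation (SOAP endpoint, local object, packaged-app adapter) fulfills the contract.
        this.creditCheck = registry.lookup(CreditCheckService.class);
    }
    boolean approve(String customerId, double amount) {
        return creditCheck.isCreditworthy(customerId, amount);
    }
}

// Minimal stand-ins so the sketch is self-contained.
interface ServiceRegistry {
    <T> T lookup(Class<T> contract);
}
class AcmeCreditCheckClient {
    boolean checkCredit(String customerId, double amount) { return amount < 10000; }
}

The point of the second style is not the registry mechanism itself but that nothing in the consumer changes when the implementation behind the contract does.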

Enterprises must model the various constituent parts of the enterprise and how they interact in order to gain a fundamental understanding of them. In many ways, business modeling means taking design methodologies appropriate for object-oriented computing and recursively applying them to various business functions. In the Services context, therefore, coarse-grained business processes consist of loosely coupled business capabilities that contain the more fine-grained software objects.
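As a rough, hypothetical sketch of that granularity hierarchy (all names below are illustrative assumptions): a coarse-grained business process is exposed through a single contract, it composes loosely coupled capability contracts, and the fine-grained objects live behind the capability implementations.

// Hypothetical granularity sketch; names are illustrative, not from the article.

// Coarse-grained: a business process exposed as a single Service contract.
interface OrderFulfillmentProcess {
    String fulfill(String customerId, String sku, int quantity);
}

// Mid-grained: loosely coupled business capabilities the process composes.
interface InventoryCapability {
    boolean reserve(String sku, int quantity);
}
interface ShippingCapability {
    String schedule(String customerId, String sku, int quantity);
}

// The process orchestrates capabilities strictly through their contracts.
class OrderFulfillment implements OrderFulfillmentProcess {
    private final InventoryCapability inventory;
    private final ShippingCapability shipping;

    OrderFulfillment(InventoryCapability inventory, ShippingCapability shipping) {
        this.inventory = inventory;
        this.shipping = shipping;
    }

    public String fulfill(String customerId, String sku, int quantity) {
        if (!inventory.reserve(sku, quantity)) {
            return "BACKORDERED";
        }
        return shipping.schedule(customerId, sku, quantity);
    }
}

// Fine-grained: ordinary software objects hidden behind a capability implementation.
class WarehouseInventory implements InventoryCapability {
    private final java.util.Map<String, Integer> stock = new java.util.HashMap<String, Integer>();
    public boolean reserve(String sku, int quantity) {
        Integer onHand = stock.get(sku);
        int available = (onHand == null) ? 0 : onHand;
        if (available < quantity) return false;
        stock.put(sku, available - quantity);
        return true;
    }
}

Swapping WarehouseInventory for a wrapper around a packaged ERP module would not disturb the process, which is exactly the kind of reuse the modeling exercise is meant to surface.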

Business modeling in Service-oriented architectures achieves a number of objectives:

  • Helping to identify the level of granularity of systems for reuse.
  • Providing requirements for business logic components and subsystems, and their interaction with other subsystems as well as external businesses.
  • Identifying areas where developers can combine and separate components for greater flexibility and reuse.
  • Developing systems that can evolve over time.

In order to meet these objectives, businesses must avoid modeling components using complex, nonstandard interfaces. Modeling in a Service-oriented architecture will then enable reuse and the other goals mentioned above. Just like Services themselves, business models shouldn't make any assumptions about the underlying architecture or framework that developers will use to produce the solutions. Rather, the enterprise should model the components and their interactions in order to enumerate the requirements, but without imposing any restrictions on how to meet those requirements. This level of abstraction is very much the same as with the Service-oriented architecture: allow Services to fulfill needs while abstracting the actual way in which businesses implement them. Without this approach, businesses must build components in a custom fashion, making each component unique and non-reusable.
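Read that way, a business model can be captured as nothing more than contracts and the data that crosses them. The sketch below is one hedged way to picture it (all names are hypothetical): the modeled requirement is the CustomerLookup contract, and whether a packaged CRM adapter or an in-house object fulfills it is deliberately left open.

// Hypothetical modeling sketch: the contract enumerates requirements only.
// Nothing here names a middleware platform, transport, or vendor product.

// The data that crosses the modeled interaction.
final class CustomerRecord {
    final String customerId;
    final String name;
    CustomerRecord(String customerId, String name) {
        this.customerId = customerId;
        this.name = name;
    }
}

// The modeled requirement: what the component must do, not how.
interface CustomerLookup {
    CustomerRecord findCustomer(String customerId);
}

// Two interchangeable fulfillments of the same modeled requirement.
// Consumers of CustomerLookup cannot tell them apart, which is the point.
class PackagedCrmAdapter implements CustomerLookup {
    public CustomerRecord findCustomer(String customerId) {
        // An assumed call into a packaged CRM application would go here.
        return new CustomerRecord(customerId, "Resolved via packaged CRM");
    }
}

class InHouseCustomerStore implements CustomerLookup {
    private final java.util.Map<String, String> names = new java.util.HashMap<String, String>();
    public CustomerRecord findCustomer(String customerId) {
        String name = names.get(customerId);
        return new CustomerRecord(customerId, name == null ? "Unknown" : name);
    }
}

The model imposes the requirement; the architecture decides later how to meet it.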

ZapThink believes that what will become increasingly important is not the middleware platform itself, but the thought processes that go into deciding how to create Web Services and Service-oriented integration solutions. Business modeling and identifying the granularity of Web Services will become as important as the business logic contained within the Web Services themselves, since Service components that are too coarse-grained will be just as difficult to reuse as tightly coupled object components. Since it's easier to model processes that are under one's control, the first major Services implementations are internally focused integration efforts.


Ron Schmelzer is founder and senior analyst of ZapThink. A well-known expert in the field of XML and XML-based standards and initiatives, Ron has been featured in and written for periodicals and has spoken on the subject of XML at numerous industry conferences.
