The Two Faces of Microservices


Microservices (μServices) are a fascinating evolution of the Distributed Object Computing (DOC) paradigm. DOC originally attempted to simplify the development of complex distributed applications by applying object-oriented design principles to disparate components operating across networked infrastructure. In this model, DOC “hid” the complexity of making this work from the developer, regardless of deployment architecture, through the use of complex frameworks such as the Common Object Request Broker Architecture (CORBA) and the Distributed Component Object Model (DCOM).

Eventually, these approaches waned in popularity: the distribution frameworks were clumsy, and the separation of responsibilities between development and operations did not deliver on its promise. That is, developers still needed to understand too much about how the entire application behaved in distributed mode to troubleshoot problems, and the implementation was too developer-centric for operations to fulfill that role.

One aspect of this early architecture that did succeed, however, was the Remote Procedure Call (RPC). The RPC represented a way to invoke functionality inside another application across a network using familiar programming-language constructs, such as passing parameters and receiving a result. With the emergence of declarative syntaxes such as XML, and later JSON, marshalling—the packaging of the data to and from the RPC—became simpler, and specialized brokers were replaced by generic transports such as HTTP and asynchronous messaging. This gave rise to the era of Web Services and Service-Oriented Architecture (SOA).
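To make that marshalling shift concrete, here is a minimal sketch of an RPC-style call carried over a generic transport (HTTP) with JSON as the marshalling format, using Java’s built-in HttpClient. The endpoint URL, method name and parameters are hypothetical, purely for illustration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RpcSketch {
    public static void main(String[] args) throws Exception {
        // Marshal the call's parameters as JSON rather than a broker-specific wire format.
        String payload = "{\"method\": \"getTaxRate\", \"params\": {\"region\": \"US\"}}";

        // A generic transport (HTTP) stands in for the specialized broker (CORBA/DCOM).
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://example.com/rpc")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // The result comes back as JSON for the caller to unmarshal.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```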

To make a long story short, Web Services were extremely popular, but SOA required too much investment in software infrastructure to be realized on a massive scale. Web Services were eventually rebranded as the Application Programming Interface (API)—there is really no architectural difference between a Web Service and an API—and JSON became the primary marshalling scheme for Web-based APIs.

Apologies for the long-winded history lesson, but it is important to understand μServices in context. As this history shows, the further we moved away from object-oriented principles toward a more straightforward client/server paradigm, the greater the adoption. The primary reason is that architecture takes time, serves longer-term goals, and requires skilled individuals who can often be expensive. With the growing need for immediacy driven by the expanding digital universe, many business leaders came to see these as luxuries where speed was essential.

Needless to say, there was an immediate benefit: rapid growth of new business capabilities and insight into petabytes of data that was previously untouchable. Version 1.0 was a smashing success. Then came the need for 2.0. Uh-oh! In the race to deliver fast, what was ignored was the sustainability of the software. Inherent technical debt quickly became the inhibitor to delivering 2.0 enhancements at the same speed at which 1.0 was developed. For example, instead of recognizing that three applications all implemented similar logic and developing it once as a configurable component, it was developed three times, each version specific to a single application.

Having dropped architecture and object-oriented design in favor of speed, organizations found themselves with a vacuum: a need for a way to keep the speedy implementation mechanics while still taking advantage of the object-oriented design paradigm. The answer is μServices.

While Martin Fowler and others have done a great job explaining the “what” and “how” of μServices, for me the big realization was the “why” (described herein). Without the “why,” it’s too easy to get entangled in the differentiation between μServices and the aforementioned Web Services. For me, the “why” provided ample guidelines for describing the difference between a μService, an API and a SOA service.

For simplicity, I’ll review the tenets of OO here and describe their applicability to μServices (a sketch illustrating these tenets follows the list):

  • Information hiding – the internal representation of data is not exposed externally; it is accessed only through behaviors on the object.
  • Polymorphism – a consumer can treat a subtype of an object identically to the parent. In the case of μServices, any service that implements a particular interface can be consumed in identical fashion.
  • Inheritance – the ability of one object to inherit from another and override one or more behaviors. In the case of μServices, we can create a new service that delegates some or all behavior to another service.
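Here is a rough sketch of how these tenets translate into service form; the interface and class names are hypothetical, and plain Java classes stand in for independently deployed services:

```java
// Information hiding: consumers see only this behavioral contract,
// never the internal data representation behind it.
interface TaxCalculator {
    double calculateTax(String region, double amount);
}

// One concrete implementation; its rate data is invisible to callers.
class UsTaxCalculator implements TaxCalculator {
    public double calculateTax(String region, double amount) {
        return amount * 0.07; // illustrative rate; real rates would live in a table
    }
}

// "Inheritance" in μService terms: a new service that overrides some
// behavior while delegating the rest to an existing service.
class AuditedTaxCalculator implements TaxCalculator {
    private final TaxCalculator delegate;

    AuditedTaxCalculator(TaxCalculator delegate) {
        this.delegate = delegate;
    }

    public double calculateTax(String region, double amount) {
        double tax = delegate.calculateTax(region, amount); // delegation
        System.out.println("audit: " + region + " -> " + tax);
        return tax;
    }
}

// Polymorphism: the consumer treats any implementation of the interface identically.
public class TenetsSketch {
    public static void main(String[] args) {
        TaxCalculator calc = new AuditedTaxCalculator(new UsTaxCalculator());
        System.out.println(calc.calculateTax("US", 100.0));
    }
}
```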

The interesting thing about these tenets as a basis for μServices, and subsequently the basis for the title of this article, is that satisfying them does not necessitate complete redevelopment. Indeed, in many cases, existing functionality can be refactored out of the 1.0 software and packaged up using container technology, delivering the same benefit as having developed the 2.0 version from scratch as a μService.

Let’s revisit our earlier dilemma: similar logic developed three different times in three different applications. For purposes of this blog, let’s assume it is a tax calculation, written once each for the US, Canada and Europe, with each implementation backed by a table in a database. It would not take much work to take these different implementations and put them into a single μService with a single REST interface, using a GET operation that takes the region and the necessary inputs on the query string. That new μService could then be packaged up inside a Docker container with its own Nginx web server and a MySQL database holding the required tax tables for each region. In fact, this entire process could probably be accomplished, tested and deployed in the span of a week. Now we can create four new applications that all leverage the same tax calculation logic without writing it four more times.
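As a sketch of what that single REST interface might look like, the following uses the JDK’s built-in HTTP server; the path, parameter names and rates are hypothetical stand-ins for the real tax tables:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.HashMap;
import java.util.Map;

public class TaxService {
    // Stand-in for the per-region tax tables that would live in MySQL.
    private static final Map<String, Double> RATES =
            Map.of("us", 0.07, "canada", 0.05, "europe", 0.20);

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // GET /tax?region=us&amount=100.00 -> {"tax": 7.00}
        server.createContext("/tax", exchange -> {
            Map<String, String> q = parseQuery(exchange.getRequestURI().getQuery());
            double amount = Double.parseDouble(q.getOrDefault("amount", "0"));
            Double rate = RATES.get(q.getOrDefault("region", ""));

            String body = (rate == null)
                    ? "{\"error\": \"unknown region\"}"
                    : String.format("{\"tax\": %.2f}", amount * rate);

            exchange.sendResponseHeaders(rate == null ? 404 : 200, body.length());
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body.getBytes());
            }
        });
        server.start();
    }

    // Minimal query-string parsing; a real service would use a framework for this.
    private static Map<String, String> parseQuery(String query) {
        Map<String, String> map = new HashMap<>();
        if (query != null) {
            for (String pair : query.split("&")) {
                String[] kv = pair.split("=", 2);
                map.put(kv[0], kv.length > 1 ? kv[1] : "");
            }
        }
        return map;
    }
}
```

A consumer would then call, for example, GET http://localhost:8080/tax?region=us&amount=100 and receive the calculated tax as JSON.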

This works great as long as the tax tables don’t change and we don’t want to add a new region. Otherwise, additional development would be required, and the container would need to be re-created, tested and re-deployed.

Alternatively, we could develop a reusable tax service and deploy this new μService on a Platform-as-a-Service (PaaS). Presumably, we could then extend the service with new regions and changes to tax tables without impacting any other region, without having to regression test the entire tax service, and without taking the μService out of service during redeployment. Moreover, a new region would become available simply by modifying the routing rules for the REST URL to accept the new region.
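Purely as an illustration of that routing idea (a real PaaS router or gateway would own this logic, and the backend hostnames here are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of region-based routing: each region's tax calculator is deployed
// independently, and the router maps the region to its deployment.
public class RegionRouter {
    private final Map<String, String> routes = new HashMap<>();

    public RegionRouter() {
        routes.put("us", "http://tax-us.internal:8080");
        routes.put("canada", "http://tax-canada.internal:8080");
        routes.put("europe", "http://tax-europe.internal:8080");
    }

    // Adding a region is a routing change, not a redeployment of existing regions.
    public void addRegion(String region, String backend) {
        routes.put(region, backend);
    }

    public String resolve(String region) {
        String backend = routes.get(region);
        if (backend == null) {
            throw new IllegalArgumentException("unknown region: " + region);
        }
        return backend;
    }

    public static void main(String[] args) {
        RegionRouter router = new RegionRouter();
        router.addRegion("japan", "http://tax-japan.internal:8080"); // new region, no redeploy
        System.out.println(router.resolve("japan"));
    }
}
```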

The diagram below illustrates these two options. The .WAR file represents the deployable tax calculator. As you can see, one or more containers would need to be either patched or re-created to deploy new functionality in the Deployment Architecture model, whereas in the PaaS Architecture we could continue to deploy multiple .WAR files, with the platform handling routing off the same URL-based interface and giving the appearance of a single application.

Thus, the two faces of μServices are those created through deployment and those created through design and development. As a lifelong software architect, I recognize the pragmatism of getting to market faster using the deployment architecture, but I highly recommend redesign and development for greater sustainability and longevity.

If you found this article useful, please leave a comment.

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP’s strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect and Founder/CEO. His areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is the author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. He holds both a Master’s and a Bachelor’s of Science in Computer Science from Hofstra University.
