The Power of Opacity in REST

There’s more to the opacity story than opaque URIs

Ever wonder how a sophisticated Web site works? Take Facebook, for example. You can view the source and you can hardly pick out any recognizable HTML, let alone divine how the wizards back at Facebook HQ get the site to work. Now, try viewing the source at a simpler Web site, like ZapThink’s. Sure enough, there’s HTML under the covers, but you still can’t tell from the file the Web server sends to your browser what’s going on behind the scenes (we use WordPress, in case you were wondering).

Put into RESTful terms, there is a separation between resource (e.g., the program running on the server) and the representation (e.g., the Web page it sends to your browser). In fact, this separation is a fundamental REST constraint which allows the resource to be opaque.

When people talk about opacity in the REST context, they are usually referring to Uniform Resource Identifiers (URIs). You should be able to construct URIs however you like, the theory goes, and it’s up to the resource to figure out how to respond appropriately. In other words, it’s not up to the client to know how to provide specific instructions to the server, other than by clicking the hyperlinks the resource has previously provided to the client.

But there’s more to the opacity story than opaque URIs. Fundamentally, the client has no way of knowing anything at all about what’s really going on behind the scenes. The resource might be a file, a script, a container, an object, or some complicated combination of these and other kinds of things. There are two important lessons for the techies behind the curtain: first, don’t assume resources come in one flavor, and second, it’s important to understand the full breadth of capabilities and patterns that you can leverage when architecting or building resources. After all, anything you can give a URI to can be a resource.

Exploring the Power of Opacity
Let’s begin our exploration of opacity with HTTP’s POST method. Of the four primary HTTP methods (GET, POST, PUT, and DELETE), POST is the only one that’s not idempotent: in other words, not only does it change the state of the resource, but it does so in a way that calling it twice has a different effect than calling it once. In the RESTful context, you should use POST to initialize a resource. According to the HTTP spec, POST creates a subordinate resource, as the figure below illustrates:

In the interaction above, the client POSTs to the cart resource, which initializes a cart instance, names it “abcde,” and returns a hyperlink to that new subordinate resource to the client. In this context, subordinate means that abcde comes after cart and a slash in the URI http://example.com/cart/abcde.

Here’s the essential question: just what do cart and abcde represent on the server? cart looks like a directory and abcde looks like a file, given the pathlike structure of the URI. But we know that guess probably isn’t right, because POSTing to the cart resource actually created the abcde resource, which represents the cart instance. So could abcde be an object instance? Perhaps. The bottom line is you can’t tell, because as far as the client is concerned, it doesn’t matter. What matters is that the client now has one (or more) hyperlinks to its own cart that it can interact with via a uniform interface.
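
To make the question concrete, here is a minimal sketch of that interaction from the server side, written in Python with Flask purely for illustration; the in-memory dictionary, the uuid-based naming, and the handler names are assumptions of this sketch, not anything the pattern prescribes. The client only ever sees the hyperlink that comes back; whether the resource behind it is a dictionary entry, an object instance, or a database row is invisible.

```python
# Minimal sketch of POST creating a subordinate resource, assuming a Flask-style
# server. The carts dict, the uuid-based id, and the URI layout are illustrative
# choices only; the client never learns how the resource is actually implemented.
import uuid
from flask import Flask, jsonify, url_for

app = Flask(__name__)
carts = {}  # behind the URI this could just as well be a file, a script, or an object

@app.route("/cart", methods=["POST"])
def create_cart():
    cart_id = uuid.uuid4().hex[:5]      # plays the role of "abcde"
    carts[cart_id] = {"items": []}      # initialize the subordinate cart resource
    cart_uri = url_for("get_cart", cart_id=cart_id, _external=True)
    response = jsonify({"cart": cart_uri})
    response.status_code = 201          # Created
    response.headers["Location"] = cart_uri
    return response

@app.route("/cart/<cart_id>", methods=["GET"])
def get_cart(cart_id):
    return jsonify(carts[cart_id])      # a representation of the cart, not the cart itself
```

Swap the dictionary for an object store or a database and nothing changes from the client’s point of view, which is precisely the opacity at work here.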

One way or the other, however, POST changes the state of the abcde cart instance, which requires a relatively onerous level of processing on the server. To lighten the future load on the server, thus improving its scalability, we may want to cache the representation the resource provides. Fortunately, REST explicitly supports cacheability, as the figure below illustrates:

In the pattern above, a gateway intermediary passes the POST along to the server and fetches a static representation that it puts in its cache. As long as clients make requests that aren’t intended to change the state of the resource (namely, GETs), serving up the cached copy is as good as passing the request along to the underlying resource, until the representation expires from the cache.

Opacity plays a critical role in this example as well: the fact that the cached copy is just as good as a response directly from the resource is itself an example of opacity. As a result, the gateway is entirely transparent to the client, playing the role of server in interactions with the client and the role of client in interactions with the underlying server.

The limitation of the example above, of course, is the static nature of the cache. If the client wants to change the state of the resource (via PUT or another POST), then such a request would necessarily expire the cache, requiring the intermediary to pass the request along to the underlying server. In situations where the resource state changes frequently, therefore, caching is of limited value.
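
To see how such a gateway might behave, here is a rough sketch of the caching logic, modeled in-process rather than as a real HTTP proxy; the origin_get and origin_send callables and the TTL value are hypothetical stand-ins for forwarding requests to the underlying server.

```python
# Sketch of a caching gateway intermediary: GETs are served from cache until the
# representation expires, while state-changing requests are passed to the origin
# and expire whatever was cached for that URI. All names here are illustrative.
import time

class CachingGateway:
    def __init__(self, origin_get, origin_send, ttl_seconds=60):
        self.origin_get = origin_get      # forwards GETs to the underlying resource
        self.origin_send = origin_send    # forwards POST/PUT/DELETE to the underlying resource
        self.ttl = ttl_seconds
        self.cache = {}                   # uri -> (representation, expires_at)

    def get(self, uri):
        entry = self.cache.get(uri)
        if entry and entry[1] > time.time():
            return entry[0]               # the cached copy is as good as asking the resource
        representation = self.origin_get(uri)
        self.cache[uri] = (representation, time.time() + self.ttl)
        return representation

    def send(self, method, uri, body=None):
        # A request meant to change resource state goes to the origin and
        # necessarily expires the cached representation for that URI.
        self.cache.pop(uri, None)
        return self.origin_send(method, uri, body)
```

The client never learns whether a given GET was answered from the cache or from the underlying resource, which is the same opacity argument again.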

Opacity and RESTful Clouds
We can extend the pattern above to provide greater capabilities on the intermediary. In the example below, the intermediary is a full-fledged server in its own right, and the underlying server returns executable server scripts for the intermediary to run on its behalf. In other words, the intermediary caches representations that are themselves server programs (e.g., PHP scripts). Furthermore, these server scripts are prepopulated with any initial state data in response to the original POST from the client.

Increasing the sophistication of our cache would provide little value, however, if we didn’t have a better way of dealing with state information. Fortunately, REST grants our wishes in this case as well, because it enables us to separate resource state (maintained on the underlying server) from application state, which we can transfer to the client.

In the figure above, after the client has initialized the resource, it may wish to, say, update its cart. So the user clicks a link that executes a PUT, sending the updated information, along with values from one or more hidden form fields, to the intermediary. However, the resource state is not updated; instead, the state information remains in the messages (both the requests from the client and the representations returned from the intermediary) as long as the client only executes idempotent requests. There is no need to update resource state in this situation, because the scripts on the intermediary know to pass state information along, for example in hidden form fields. When the cart process is complete and the user is ready to submit an order, only then does the client execute another POST, which the intermediary knows to pass along to the underlying server.

However, there’s no strict rule that says that the intermediary can only handle idempotent requests; you could easily put a script on it that would handle POSTs, and similarly, it might make sense to send an idempotent request like a DELETE along to the underlying server for execution. But on the other hand, the rule that the intermediary handles only the idempotent requests may be appropriate in your situation, because POST would then be the only method that could ever change state on the underlying server.
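
One way to sketch that routing rule on the intermediary, under the assumption that application state rides along in each request (for example, as hidden form fields) and that only the non-idempotent POST reaches the underlying server, is the following; the helper functions are illustrative stubs, not real APIs.

```python
# Sketch of the "smart intermediary" rule: idempotent requests are handled locally
# by cached server scripts working only from the state the client echoed back,
# while POST is forwarded to the underlying server. All names are hypothetical.

IDEMPOTENT = {"GET", "PUT", "DELETE"}

def run_cached_script(uri, items):
    # Stand-in for executing a cached server script (e.g., a PHP page) on the
    # intermediary; it rebuilds the representation purely from client-supplied state.
    hidden = "".join(f'<input type="hidden" name="item" value="{i}"/>' for i in items)
    return f"<form action='{uri}' method='post'>{hidden}<button>Place order</button></form>"

def forward_to_origin(method, uri, form):
    # Stand-in for passing the request through to the underlying server, the only
    # place where resource state actually changes.
    return f"{method} {uri} forwarded to origin with {form}"

def handle(method, uri, form):
    if method in IDEMPOTENT:
        return run_cached_script(uri, form.get("items", []))
    return forward_to_origin(method, uri, form)   # e.g., the final order-submitting POST
```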

As we explained in an earlier ZapFlash, one of the primary benefits to following the pattern in the figure above is to support elasticity when you put the intermediary server in the Cloud. Because it is stateless, it doesn’t matter which virtual machine (VM) instance replies to any client request, and if a VM instance crashes, we can bootstrap its replacement without losing any state information. In other words, opacity is essential to both the elasticity and fault tolerance of the Cloud, and furthermore, following a RESTful approach provides that opacity.

The ZapThink Take
There’s one more RESTful pattern that ZapThink is particularly interested in: RESTful SOA, naturally. For this pattern we need another kind of intermediary: a RESTful SOA intermediary in addition to the Cloud-based stateless server intermediary (or anything else we want to abstract, for that matter). The figure below illustrates the RESTful SOA pattern.

The role of the RESTful SOA intermediary is to provide abstracted (in other words, opaque) RESTful Service endpoints that follow strict URI formatting rules. Furthermore, this intermediary must handle state information appropriately, that is, following a RESTful approach that transfers state information in messages. As a result, the SOA intermediary can support stateless message protocols for interactions with Service consumers while remaining stateless itself. Most ESBs maintain state, and therefore a RESTful SOA intermediary wouldn’t be a typical ESB, although it could certainly route messages to one.
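
A very rough sketch of such an intermediary’s routing logic appears below, assuming an invented /services/<service>/<resource-id> URI convention and made-up backend addresses; the point is simply that every request carries whatever state it needs, so the intermediary can forward and forget.

```python
# Sketch of a stateless RESTful SOA intermediary: strict URI formatting rules map
# requests onto backend Services (perhaps behind an ESB), and no session state is
# kept on the intermediary. Routes and URLs are fabricated for illustration.
import re

ROUTES = {
    "orders":    "http://esb.internal/order-service",
    "customers": "http://esb.internal/customer-service",
}
URI_PATTERN = re.compile(r"^/services/(?P<service>[a-z]+)/(?P<resource_id>[A-Za-z0-9-]+)$")

def route(method, uri, message_body):
    match = URI_PATTERN.match(uri)
    if not match:
        return 404, "URI does not follow the Service endpoint convention"
    backend = ROUTES.get(match.group("service"))
    if backend is None:
        return 404, "unknown Service"
    # Everything the interaction needs travels in the message itself, so the
    # intermediary simply forwards and keeps no state of its own.
    return 200, f"{method} {backend}/{match.group('resource_id')} with {message_body}"
```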

So, which pattern is the best one? As we say in our Licensed ZapThink Architect (LZA) and Cloud Computing for Architects (CCA) courses, it depends. The architect is looking for the right tool for the job. You must understand the problem before recommending the appropriate solution. We cover REST-based SOA in our LZA course (coming to Johannesburg) and RESTful Clouds in the CCA course (coming to London, DC, and San Diego). See you there!

Image credit: Derek Keats

More Stories By Jason Bloomberg

Jason Bloomberg is the leading expert on architecting agility for the enterprise. As president of Intellyx, Mr. Bloomberg brings his years of thought leadership in the areas of Cloud Computing, Enterprise Architecture, and Service-Oriented Architecture to a global clientele of business executives, architects, software vendors, and Cloud service providers looking to achieve technology-enabled business agility across their organizations and for their customers. His latest book, The Agile Architecture Revolution (John Wiley & Sons, 2013), sets the stage for Mr. Bloomberg’s groundbreaking Agile Architecture vision.

Mr. Bloomberg is perhaps best known for his twelve years at ZapThink, where he created and delivered the Licensed ZapThink Architect (LZA) SOA course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, the leading SOA advisory and analysis firm, which was acquired by Dovel Technologies in 2011. He now runs the successor to the LZA program, the Bloomberg Agile Architecture Course, around the world.

Mr. Bloomberg is a frequent conference speaker and prolific writer. He has published over 500 articles, spoken at over 300 conferences, Webinars, and other events, and has been quoted in the press over 1,400 times as the leading expert on agile approaches to architecture in the enterprise.

Mr. Bloomberg’s previous book, Service Orient or Be Doomed! How Service Orientation Will Change Your Business (John Wiley & Sons, 2006, coauthored with Ron Schmelzer), is recognized as the leading business book on Service Orientation. He also co-authored the books XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996).

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting).
