By Jason Bloomberg
May 30, 2014 01:00 PM EDT
A question we commonly get at EnterpriseWeb is whether our platform follows REST or not. Representational State Transfer (REST) is an architectural style for distributed hypermedia systems such as the World Wide Web, and is perhaps best known for providing a lightweight, uniform Web-style application programming interface (API) to server-based resources. On the one hand, EnterpriseWeb can both consume and expose any type of interface, including tightly coupled APIs, Web Services, and RESTful APIs, and the platform has no requirement that customers build distributed hypermedia systems. It would be easy to conclude, therefore, that while EnterpriseWeb supports REST, it is not truly RESTful.
Such a conclusion, however, would neglect the broader architectural context for EnterpriseWeb. The platform builds on top of and extends REST as the foundation for the dynamic, enterprise-class architectural style we call Agent-Oriented Architecture (AOA). EnterpriseWeb's intelligent agent, SmartAlex, leverages RESTful constraints as part of the core functionality of the EnterpriseWeb platform. The resulting AOA pattern essentially reinvents application functionality and enterprise integration, heralding a new paradigm for distributed computing.
The Limitations of REST
One of the primary challenges to the successful application of REST is understanding how to extend REST to distributed hypermedia systems in general, beyond the straightforward interactions between browsers and Web servers. To help clarify this point, Figure 1 below illustrates a simple RESTful architecture. In this example, the client is a browser, and it sends GETs, PUTs, and other RESTful requests to URIs that resolve to resources on a server, which responds by sending the appropriate representation back to the client. In addition, REST allows for a cache intermediating between client and server that might resolve queries on behalf of the server for scalability purposes.
Figure 1: Simple RESTful Architecture
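To make the interaction in Figure 1 concrete, here is a minimal sketch of a RESTful exchange in Python using the widely used requests library. The server, URIs, and resource are hypothetical, chosen only to illustrate the uniform interface; nothing here is specific to any particular platform.

```python
import requests

# Hypothetical base URI; any server exposing a RESTful interface
# behaves the same way, thanks to the uniform interface.
BASE = "https://api.example.com"

# GET retrieves the current representation of a resource.
response = requests.get(f"{BASE}/orders/42")
order = response.json()

# PUT replaces the resource's state with a new representation.
order["status"] = "shipped"
requests.put(f"{BASE}/orders/42", json=order)

# A cache sitting between client and server (as in Figure 1) can
# answer the GET on the server's behalf, keyed by the URI and
# validated with standard HTTP headers such as ETag.
print(response.headers.get("ETag"))
```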
As an architectural style, however, the point of REST isn't the uniform interface that the HTTP verbs enable. REST is really about hypermedia as the engine of application state - the HATEOAS constraint essential to building hypermedia systems. In Figure 1, we represent HATEOAS by the interactions between human users and their browsers: as people click links on Web pages, they advance the application state. The RESTful client (in other words, the browser) maintains application state for each user by showing them the Web page (or other representation) they requested when they followed a given hyperlink.
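To see what HATEOAS looks like on the wire, consider this sketch of a hypermedia representation and a client that enumerates the state transitions it offers. The link structure shown is a hypothetical convention (REST itself doesn't mandate one); the point is that the available next steps live in the representation, not in the client.

```python
import requests

# The server embeds the available state transitions as links
# alongside the data, e.g. (hypothetical representation):
# {"status": "open",
#  "links": [{"rel": "cancel",  "href": "/orders/42/cancel"},
#            {"rel": "payment", "href": "/orders/42/payment"}]}
response = requests.get("https://api.example.com/orders/42")
order = response.json()

# A human-driven client "follows a link" when the user clicks one;
# application state advances to whatever representation it returns.
for link in order.get("links", []):
    print(link["rel"], "->", link["href"])
```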
Software clients that lack user interfaces, however, may be problematic for REST, but they are a familiar part of the Service-Oriented Architecture (SOA) architectural style, where we call such clients Service consumers. Combining REST and SOA into the architectural style we call REST-Based SOA introduces the notion of an intermediary that presents a Service endpoint and resolves interactions with that endpoint into underlying interactions with various legacy systems. The SOA intermediary in this case exposes RESTful endpoints as URIs that accept GETs, PUTs, etc. from Service consumers, which can be any software client. See Figure 2 below for an illustration of the REST-Based SOA pattern.
Figure 2: REST-Based SOA
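The intermediary's role is easy to sketch. Below is a minimal, hypothetical REST-Based SOA intermediary using only Python's standard library: it presents a Service endpoint as a URI and resolves GETs against it into calls to a legacy system. The endpoint path and the legacy function are illustrative assumptions, not any product's actual interface.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

def query_legacy_system(order_id):
    # Stand-in for the underlying legacy interaction (hypothetical):
    # a mainframe transaction, packaged app, database call, etc.
    return {"id": order_id, "status": "open"}

class SOAIntermediary(BaseHTTPRequestHandler):
    # Presents a RESTful Service endpoint and resolves requests
    # into legacy-system interactions, per Figure 2.
    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "orders":
            body = json.dumps(query_legacy_system(parts[1])).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Any software client -- a Service consumer -- can now GET
    # http://localhost:8080/orders/42 without knowing the legacy system.
    HTTPServer(("localhost", 8080), SOAIntermediary).serve_forever()
```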
Note that adding SOA to REST augments the role of the intermediary. Pure REST allows for simple caching and proxy behavior, while SOA calls for policy-based routing and transformation operations that provide the Service abstraction. SOA also reinforces the notion that the Service consumer can be any piece of software, regardless of whether it has a user interface.
Even with REST-Based SOA, however, we still have problems implementing HATEOAS: coding our clients so that they are able to gather the metadata they need by following hyperlinks. In other words, how do we apply REST to any hypermedia system, where instead of a browser we have any piece of software as a client? How do we code the software client to follow hyperlinks when it doesn't know ahead of time what the hyperlinks are or what representations they lead to? Humans simply click hyperlinks until they get the representation they want, even if they don't know beforehand how to find it. How do we teach software to automate this process and gather all the metadata it needs by following a sequence of hyperlinks?
Introducing Agent-Oriented Architecture
The answer to these questions is to cast an intelligent agent in the role of SOA intermediary in the REST-Based SOA pattern in Figure 2. Intelligent software agents (or simply intelligent agents when we know we're talking about software) are autonomous programs that have the authority to determine what action is appropriate based upon the requests made of them. In this new, Agent-Oriented architectural pattern, the agent interacts with any resource as a RESTful client, where the agent must be able to automatically follow hyperlinks to gather all the information it requires in order to respond appropriately to any request from the client.
In other words, when following this newly coined AOA architectural style, software clients do not have to comply with HATEOAS (they may, but such compliance is optional). Instead, the agent alone must follow the HATEOAS constraint as it interacts with resources. To achieve this behavior, we must underspecify the intelligent agent. In other words, the agent can't know ahead of time what it's supposed to do to respond to any particular request. Instead, it must be able to process any request on demand by fetching related resources that provide the appropriate metadata, data, or code it needs to respond properly to that request, constructing a custom response for each interaction in real time. Figure 3 below illustrates the basic AOA pattern.
Figure 3: Agent-Oriented Architecture
For each request from any client, regardless of whether it has a user interface, the agent constructs a custom response based on the latest and most relevant information available. In fact, requests to the agent can come from anywhere (i.e., they follow an event-driven pattern). The agent's underspecification means that it doesn't know ahead of time what behavior it must exhibit, but it does know how to find the information it needs in order to determine that behavior - and it does that by following hyperlinks, as per HATEOAS. In other words, the goal-oriented agent resolves URIs recursively in order to gather and execute the information it needs - a particularly concise example of fully automated HATEOAS in action.
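That recursive resolution can be sketched roughly as follows. This is my own illustration of the pattern, with a hypothetical link convention, not the actual SmartAlex implementation: the agent starts from the resource named in the request and keeps dereferencing hyperlinks until it has gathered the metadata, data, and code it needs.

```python
from urllib.parse import urljoin
import requests

def resolve(uri, gathered=None, seen=None):
    # Recursively dereference a resource and everything it links to,
    # per HATEOAS: the agent discovers what it needs at run time
    # rather than knowing it ahead of time.
    gathered = {} if gathered is None else gathered
    seen = set() if seen is None else seen
    if uri in seen:                 # avoid cycles in the link graph
        return gathered
    seen.add(uri)
    representation = requests.get(uri).json()
    gathered[uri] = representation
    # Hypothetical convention: related metadata, data, and code are
    # advertised as links within the representation itself.
    for link in representation.get("links", []):
        resolve(urljoin(uri, link["href"]), gathered, seen)
    return gathered

def handle(request_uri):
    # Underspecified agent: construct a custom response for this
    # interaction from whatever the resolved resources say.
    resources = resolve(request_uri)
    return {"resolved": sorted(resources)}
```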
The Benefits of AOA
An earlier Loosely-Coupled newsletter explained that if you follow REST, you're unable to accept out-of-band metadata or business context outside of the hypermedia. Agent-Oriented Architecture, however, solves these problems, because the agent is free to fetch whatever it needs to complete the request, since it treats all entities - metadata, data, code, etc. - as resources. In other words, the agent serves as a RESTful client, even when the software client does not. What was out-of-band for REST isn't out-of-band for AOA. Everything is on the table.
The true power of AOA, though, lies in how it resolves the fundamental challenge of static APIs. Whether they be Web Services, RESTful APIs, or some other type of loosely-coupled interface, every approach to software integration today suffers from the fact that interactions tend to break when API contract metadata change.
By adding an intelligent agent to the mix, we're able to resolve differences in interaction context between disparate software endpoints dynamically and in real time. Far more than a traditional broker, which must rely on static transformation logic to resolve endpoint differences, the agent must be able to interpret metadata, as well as policies, rules, and the underlying data themselves, to create real-time interactions that maintain the business context - an example of dynamic coupling, a central principle of AOA.
Dynamic coupling, therefore, represents a paradigm shift in how to build and utilize APIs. Up to this point in time, the focus of both SOA and REST has been on building loosely-coupled interfaces: static, contracted interfaces specified by WSDL and various policy metadata when those interfaces are Web Services, or Internet Media Types and related metadata for RESTful interactions. Neither approach deals well with change. AOA, in contrast, relies upon dynamic coupling that responds automatically to change, since the agent interprets current metadata for every interaction in real time.
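As a rough sketch of the difference (the metadata URI and mapping format below are assumptions for illustration), contrast a broker's transformation, frozen at design time, with an agent that fetches the endpoint's current contract metadata on every interaction and derives the transformation from it:

```python
import requests

# Static coupling: the mapping is hard-coded against a contract
# frozen at design time, and breaks when that contract changes.
def static_transform(message):
    return {"orderId": message["id"], "qty": message["quantity"]}

# Dynamic coupling: fetch the endpoint's *current* contract metadata
# for each interaction and derive the mapping from it, so a changed
# contract changes the behavior automatically. The metadata URI and
# its {"mappings": {target: source}} format are hypothetical.
def dynamic_transform(message, metadata_uri):
    contract = requests.get(metadata_uri).json()
    return {target: message[source]
            for target, source in contract["mappings"].items()}
```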
Icons by http://dryicons.com