Mission Critical Web Services

Web services are a great vision to talk about, as evidenced by the increasing number of companies declaring themselves the leader in the Web services market. Hype aside, just as with XML, sooner or later we'll all realize there's no Web services market per se but only ways to apply Web services as part of B2B machine-to-machine integration, enterprise portals, knowledge management, marketplaces, self-service forms, and so on. In other words, what we need to focus on is how to use Web services to solve specific technical problems rather than getting excited about new and more dynamic "plumbing."

Returning to the vision part, it's easy to be almost overwhelmed by the many technologies that need to be in place to make the full vision of Web services a reality. The good news is that not all of these specifications and standards need to be in place to get started. But first, let's look at the technology landscape today and how it affects the delivery of the vision of Web services.

Technology Landscape
Interface Standards/Vertical
Business Objects

As we know, SOAP can be deployed following a more messaging-oriented paradigm or a more remote procedure call (RPC) paradigm. There are others also, but let's focus on these two.

If we use SOAP for RPC, we don't need vertical standards per se as we're just calling methods on remote objects. What worries me a bit is that we see many wizards and tutorials that can easily transform Java objects or COM objects to a Web service, which in itself leads to RPC-type Web services to execute a particular transaction. Because we're calling a method and its associated parameters, I still consider RPC-type calls as tighter coupling.
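The tight coupling of the RPC style is visible in the message itself: the remote method name and its parameter names are baked into the XML the caller sends. Here's a minimal sketch of building such an RPC-style SOAP 1.1 envelope in Python; the `getQuote` method and `urn:example:quotes` namespace are hypothetical, chosen only for illustration.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_rpc_envelope(method, namespace, **params):
    """Build an RPC-style SOAP 1.1 envelope: the body element is named
    after the remote method, and its children after the parameters."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{namespace}}}{method}")
    for name, value in params.items():
        arg = ET.SubElement(call, name)
        arg.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical remote call: any change to the method signature breaks
# every caller, which is exactly the coupling discussed above.
msg = build_rpc_envelope("getQuote", "urn:example:quotes", symbol="IBM")
```

A messaging-oriented service would instead agree on a document schema for the body, leaving the two sides free to evolve their internal method signatures independently.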

This is great news if I just want to integrate the Web services of one or two business partners. It becomes more problematic if what we're interested in is transparent information exchange across an entire value chain, where we need to have agreements on messaging structures rather than just APIs.

On the side of vertical business objects, thanks to the work of the numerous industry consortia, we already have a pretty good set of business objects available and we're starting to see the adoption of some of these.

Web Services Security and SSO
What's the most annoying thing in dealing with Internet e-business services? Did you think "having to log into each system (over and over again)"? Right!

The magic word is single sign-on. It's a core part of the Web services vision to be able to integrate and aggregate services provided by numerous geographically and logically dispersed providers.

Without standards to provide means for services to exchange users' authentication information, large-scale deployment of dynamically aggregated services won't happen.

The good news is that we have a few very promising initiatives going on. One is the OASIS Security Assertion Markup Language (SAML); the other is Passport, driven by Microsoft.
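The core idea behind these initiatives is that one party asserts "this user has authenticated" and signs that assertion so other services can trust it without a second login. The following is a deliberately simplified sketch of that idea, not actual SAML or Passport: it uses a hypothetical shared HMAC key between an identity provider and a relying service.

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-shared-secret"  # hypothetical key both providers hold

def issue_assertion(user, key=SHARED_KEY):
    """Identity provider: sign a statement that `user` has authenticated."""
    claims = json.dumps({"subject": user, "issued": int(time.time())}).encode()
    sig = hmac.new(key, claims, hashlib.sha256).hexdigest()
    return base64.b64encode(claims).decode() + "." + sig

def verify_assertion(token, key=SHARED_KEY):
    """Relying service: accept the user without another login prompt,
    but only if the signature checks out."""
    payload, sig = token.rsplit(".", 1)
    claims = base64.b64decode(payload)
    expected = hmac.new(key, claims, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("assertion not trusted")
    return json.loads(claims)
```

Real SAML assertions carry considerably more (issuer, audience, validity window, public-key signatures), but the exchange pattern is the same.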

Distributed Transactions
Gone are the days when ACID was everything we had to worry about (and I don't mean kids doing drugs in discos). Atomicity, Consistency, Isolation, and Durability were all we needed to ensure well-behaved transactions in our systems.

It became more complex as we started to distribute our systems. Distributed databases and other such systems today support "two-phase commits" to deal with the increased complexity of transactions in distributed environments. In this scenario, all participating processing nodes have to either commit or roll back a particular transaction.
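The coordinator logic behind a two-phase commit can be sketched in a few lines; the `Node` class here is a stand-in for a participating resource manager, invented for illustration.

```python
def two_phase_commit(nodes):
    """Coordinator: phase 1 asks every node to prepare (vote); only if
    all vote yes does phase 2 commit, otherwise every node rolls back."""
    votes = [node.prepare() for node in nodes]   # phase 1: prepare
    if all(votes):
        for node in nodes:
            node.commit()                        # phase 2: commit everywhere
        return "committed"
    for node in nodes:
        node.rollback()                          # a single "no" aborts all
    return "rolled back"

class Node:
    """Hypothetical participant that can vote yes or no at prepare time."""
    def __init__(self, can_commit=True):
        self.can_commit, self.state = can_commit, "pending"
    def prepare(self):
        return self.can_commit
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "rolled back"
```

The key property is all-or-nothing: no node commits until every node has promised it can.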

The Internet, however, is the most distributed and complex system we can imagine. Two-phase commits alone are not suitable to address the complexity of Internet-scale transactions. In the Internet realm we typically deal with multiple entities (customers, suppliers, and business partners) that use loosely coupled Web services to execute a particular transaction. This overall transaction in turn consists of smaller transactions that are wired by some implicitly or explicitly defined workflow. Should an individual transaction fail, we may not be able to roll it back so we have to introduce a compensating transaction to regain system integrity.
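Compensation works by pairing each step with an undo action: if a later step fails, we cannot roll back work already visible to other parties, so we run the compensations for the completed steps in reverse order. A minimal sketch, with hypothetical order-processing steps:

```python
def run_with_compensation(steps):
    """Run (action, compensation) pairs in order; on failure, run the
    compensations for the steps that completed, newest first."""
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):
            compensate()
        return "compensated"
    return "completed"

log = []

def reserve_goods():
    log.append("goods reserved")

def cancel_reservation():
    log.append("reservation cancelled")     # compensating transaction

def charge_card():
    raise RuntimeError("payment declined")  # this step fails

def refund_card():
    log.append("card refunded")

outcome = run_with_compensation([
    (reserve_goods, cancel_reservation),
    (charge_card, refund_card),
])
```

Unlike a rollback, the compensation leaves a trace: both the reservation and its cancellation happened, which is exactly what business partners in a loosely coupled workflow will observe.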

While the standards and vendor community is working hard to address this problem, no broadly accepted standards exist. Examples of interesting efforts in this arena include the OASIS Business Transaction Protocol.

Business Processes
"Workflow goes Internet." While in the past we could be content with describing the workflow within the confines of our corporate boundaries, today we have to deal with complex interactions between a number of parties involved in our extended enterprise (including customers, suppliers, and business partners).

Emerging specifications will help us find a common, reusable, and hopefully interchangeable format for describing these processes. The most promising initiatives include the OASIS/UN-CEFACT ebXML suite, RosettaNet, BizTalk, BPMI, and WSFL, just to name a few.

In this category of emerging technologies, the difficulty isn't finding novel ways of addressing the problem, but rather coming up with a broadly agreed upon way of doing it.

On a smaller scale, we'll need business process management as well. Web services help us to break up applications into discrete components. Once that has happened, something has to integrate them back together. This will be either some glue-code or - even better - some externalized business rules driven by the business process engines.

Reliable Protocols
We want our systems and our Web services to be reliable. The problem is, while we know that SOAP doesn't care about the transport protocol used to get a SOAP message from A to B, we also know that we'll use HTTP, especially between companies.

Unfortunately, while HTTP is known for being ubiquitous, it isn't known for being reliable. Two approaches can be used to fix this. You can build code around these protocols - which is what most designers have to do today - or you can make the protocol more reliable, which is what IBM is proposing with its Reliable HTTP (HTTPR).
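The "build code around it" approach usually means at-least-once delivery: tag each message with a unique id, resend until acknowledged, and have the receiver deduplicate by id so the retries are safe. A sketch of that pattern, with invented `Flaky` and `DedupReceiver` classes standing in for a lossy transport and a receiving service:

```python
import uuid

def reliable_send(transport, payload, retries=3):
    """Resend the same message id until a send succeeds; duplicates are
    harmless because the receiver deduplicates by id."""
    msg_id = str(uuid.uuid4())
    for _ in range(retries):
        try:
            transport.send(msg_id, payload)
            return msg_id
        except ConnectionError:
            continue  # transient failure: resend with the SAME id
    raise RuntimeError("gave up after %d attempts" % retries)

class DedupReceiver:
    """Receiver side: process each message id at most once."""
    def __init__(self):
        self.seen, self.processed = set(), []
    def send(self, msg_id, payload):
        if msg_id not in self.seen:
            self.seen.add(msg_id)
            self.processed.append(payload)

class Flaky:
    """Hypothetical lossy link: fails the first attempt, then delivers."""
    def __init__(self, target):
        self.target, self.calls = target, 0
    def send(self, msg_id, payload):
        self.calls += 1
        if self.calls == 1:
            raise ConnectionError
        self.target.send(msg_id, payload)
```

HTTPR's goal is to push exactly this bookkeeping down into the protocol layer so every designer doesn't have to reinvent it.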

Payment Services
Software as a service? This is unlikely to work without an associated plan for charging for it. (Larger software companies seem to prefer this model to the old-fashioned buy-and-own.) Certainly a recurring revenue-stream seems to be valuable, and it will be much harder to do a "backup" copy for a service-based software package.

As we talk about payment services, however, we need to distinguish carefully between paying for software and the use of it (as described above), or whether we're talking about a business service (like a credit card check) that we pay for (whereas the Web service is more the means than anything else). I'm not yet convinced we can treat them in the same way (but I'm always willing to listen).

Web Services Adoption
So are we doomed? Will we have to wait for all these things to be defined and standardized? Not really!

Informational Web Services
These kinds of services are the best examples of using Web services without having anything else in place. Not mission critical in nature, these services offer stock quotes, weather reports, traffic reports, and so on. In this case I am not really concerned with anything else but getting to information (and showing off "the power of dynamic Web services").

These services lend themselves nicely to RPC-type SOAP calls where we want instant gratification. I believe all the services on the well-known XMethods site follow this paradigm without exception (which isn't surprising).

Web Services as Next Generation APIs
This should be the most straightforward thing to do (especially for applications that aren't tied to a heavy user interface component). Is there an existing SDK for your product? If yes, then all you need to do is wrap the EJB, JSP, COM, or whatever components you use into a collection of Web services and suddenly your product is Web services-enabled and Internet-ready.
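The wrapping itself is little more than a facade that maps an operation name and its parameters onto calls into the existing component, with no change to the business logic. A minimal sketch, where `InventoryAPI` is a hypothetical stand-in for your existing SDK component:

```python
class InventoryAPI:
    """Stands in for an existing SDK component (EJB, COM, ...)."""
    def check_stock(self, sku):
        return {"sku": sku, "in_stock": True}

def make_service(component):
    """Expose the public methods of an existing component as Web service
    operations: the facade dispatches by operation name, so the
    underlying component stays untouched."""
    def handle(operation, **params):
        if operation.startswith("_") or not hasattr(component, operation):
            raise ValueError("unknown operation: " + operation)
        return getattr(component, operation)(**params)
    return handle

service = make_service(InventoryAPI())
```

In practice the SOAP toolkit generates this dispatch layer (and the WSDL describing it) from the component's interface, which is exactly what those one-click wizards do.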

Extending the Reach
This is where it gets really exciting with respect to Web services in the short term. In the past I could have a few online forms as part of my extranet and people would use them to order goods, check on availability or delivery, and so forth.

What limited the reach of these applications was that, for one, all of the user interface was hardwired. Many intranets have set guidelines for colors, fonts, and other style-related attributes. With Web services you can integrate a service and have complete control over the visual appearance, providing seamless integration.

Secondly, interfaces weren't explicitly described. This made it hard to programmatically call an old-style Web service without having to dig into the HTML code to figure out what the parameters were (not to mention valid types, error handling, etc.).

With such Web services, it's also easier to extend the reach beyond the browser to cell phones, PDAs, and all the other great devices that are still looking for the "killer application."

Small to Medium Enterprises
With Web services, a smaller enterprise can now do what previously required costly EDI infrastructure. Now it can finally afford electronic data interchange and participation in larger markets (we just need to wait for the right tools to hit the market). The OASIS/UN-CEFACT ebXML suite was designed with this category of users in mind, and it's also architected to use Web services from a transport perspective.

Dynamic Trading Networks
In the short term we'll see some vendors providing platforms to develop, publish, locate, integrate, and charge for Web services. Since none of the standards I described are in place, these vendors will have to make up for some shortcomings by "proprietary" approaches.

This is the part of the vision of dynamic Web services where we see the most disconnect between current practice and future promise. The problem with this vision is that, while exciting and promising, it has to overcome many existing hurdles of today's everyday business and technical practices.

A company's data structures (read schemas) contain a lot of information about how they do business (often, information they want to share with just a handful of business partners). The same holds true for business processes or other aspects of a company.

Maybe that's why the adoption rate of public service and schema registries is so slow (and I am not talking about quantities of objects stored, but rather the way people use them and for what purpose). Unless you're really interested in the vision of dynamic discovery of business services, you're more likely to just publish your Web service descriptions, schemas, and other interchange-relevant components to your known business partners, and that's all you need to do.

Trust has been at the heart of business since the dawn of time. The "new economy" requires a company to trust an unknown, almost anonymous entity to deliver goods or (Web) services that are critical for its business. That's a lot to ask for, which is probably one of the reasons why, according to the various assessments, private and more controlled e-marketplaces are more pervasive than open and "public" ones.

We can see Web services beginning to emerge as a means to solve business problems. Once we've covered distributed transactions, single sign-on, business process management, and so forth, we can talk about the new world order. Until then, and even then, Web services - just like any other technology - will change the way we do business, but often not as fast as we i-technology visionaries would like.

More Stories By Norbert Mikula

Norbert Mikula has more than 10 years of experience in building and delivering Internet and e-business technologies. He serves as vice-chairman of the board of directors of OASIS and is industry editor of Web Services Journal. Norbert is recognized internationally as an expert in Internet and e-business technologies and speaks regularly at industry events.

