The Real Niche for Web Services: Part 2

Last month, in Part 1 of this article, I cautioned about the potential invasiveness of Web services. It's a scary thought that companies could have that much personal information about their customers, but I added then that there are some advantages to Web services, especially in the area of business-to-business. This month I focus on these advantages.

The Last Gold Rush
Business-to-business, or B2B, may have been the last gold rush of the 20th century. And like the gold rushes of yore, usually the guy selling the shovels is the only one making the real profit.

Web services would be perfect in the B2B arena, right? At least, that's what all the e-commerce companies are saying.

The idea behind B2B is simple. A significant number of the transactions that occur in business take place across company boundaries. A purchase order is sent in, a shipping transaction is made, and an invoice is sent out.

Each of these transactions, crossing enterprise boundaries, involves the passing of a piece of paper and the time and work to process that paper. By eliminating the paper (and incidentally many of the people originally involved in handling the paper), you significantly reduce both the time and cost of each transaction. Of course, that was obvious back in the 1970s.

After a lengthy and complex process, the UN came up with the Electronic Data Interchange (EDI) standard, a highly efficient system for describing most of the standard documents used in commercial ventures. Of course, the same issue of efficiency versus flexibility reared its ugly head here - and many companies discovered that there were always specific EDI fields that were either missing or not appropriate to their particular circumstances.

Semantic incompatibility rapidly emerged between the standards, and soon a cottage industry of converting from one EDI format to another became the foundation for many of the largest software companies in the world.

These Value Added Networks (VANs) performed a critical service - and charged accordingly. In all fairness, the VANs often did provide ancillary data services as well, but there is no question that the core business of rectifying incompatibilities between different supposedly standard documents remained their primary bread and butter until the late 1990s.

Hello XML
When XML first emerged, it was meant to solve a different problem: providing a framework for describing different forms of documentation. It was not, initially, meant to be either an e-commerce or data solution. But XML's ability to model a wide number of data objects raised the possibility that it could in fact serve as the right wedge for a number of upstarts to get into the same business as the old guard VANs. To a certain extent it was also seen by some leaders in other industries as a way to eliminate the costs of the VANs from their transactions.

I remember watching with some amusement in the late 1990s as these companies raced to either declare standards for their industry as a whole or raced to become the repository of all of the schemas that everyone was declaring. What happened instead was the realization that (even with languages such as XSLT to handle the sometimes thorny problem of translating syntax) most commercial documents are semantically different from one another.

In some arenas, where one or two large companies effectively dominated the landscape, e-commerce systems emerged that were dedicated to that particular industry vertical. The automobile and aerospace industries made significant gains in setting uniform standards, and in both cases you were looking at one company that dominated enough of the field that they could push standards down the pipe to their suppliers and up the pipe to their distributors. These companies, however, already had fairly robust EDI solutions in place, and they had the deep pockets necessary to test a number of different solutions and eventually choose the best ones.

The incarnation of Web services in this arena basically works along these lines: a company writes an application that submits purchase orders to its supplier through a Web service. You could then send in one or more purchase orders to the company in question (or potentially to a company that the other company has outsourced to handle its Web services interactions - the new VANs) and get your order fulfilled in the space of seconds, not days.

Of course, if that wasn't enough, the next logical stage was also offered by companies such as Microsoft, IBM, and Ariba - Universal Description, Discovery, and Integration (UDDI). This particular XML-based standard proposal works on the premise that there are one or more UDDI databases that provide information about companies, organized into White Pages, Yellow Pages, and Green Pages.

  • The White Pages standard provides critical contact information for people who work at a given company (think LDAP directory data here) as well as a directory of public servers.
  • The Yellow Pages contain a listing of the company in various taxonomies - what field it's in, what its primary services are.
  • The Green Pages provide direct access to the Web services that the company supports.
UDDI is still in its infancy, but it's interesting to note that while it has garnered a great deal of interest, most of that interest comes from companies in the computer field that are interested in building services around such a directory. The response from most other industries has been deafening silence.
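
To make the mechanics a little more concrete, here is a minimal sketch of what a UDDI-style inquiry might look like from code. The registry URL is purely hypothetical, and while the find_business message follows the general shape of the UDDI v2 inquiry API, the details shown here are illustrative rather than a verified wire format.

```python
# A minimal sketch of a UDDI-style "find_business" inquiry over SOAP.
# The registry URL is hypothetical, and the namespaces and message layout
# are approximations of the UDDI inquiry API, not a verified wire format.
import urllib.request

INQUIRY_URL = "http://uddi.example.com/inquire"  # hypothetical registry endpoint

def find_business(name: str) -> str:
    """Ask the registry for businesses whose name matches `name`."""
    envelope = f"""<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <find_business xmlns="urn:uddi-org:api_v2" generic="2.0">
      <name>{name}</name>
    </find_business>
  </soap:Body>
</soap:Envelope>"""
    request = urllib.request.Request(
        INQUIRY_URL,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": '""'},
    )
    with urllib.request.urlopen(request) as response:
        return response.read().decode("utf-8")  # raw businessList XML

if __name__ == "__main__":
    print(find_business("Acme Widgets"))
```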

A Superfluous Service
UDDI to me represents a significant part of the reason why Web services will not be adopted for several years (and maybe a few decades). It's a solution to the software provider's problem of providing services in this so-called services economy - become a phone book for someone who culls information from that phone book. Of course, if companies already have a perfectly good phone book (and most do), then what's being offered is largely superfluous.

Consider this: a phone book is an advertising vehicle. If you're looking for a certain type of vendor, you open up the yellow pages and see the listings of all of the providers of a given type, listed in alphabetical (or perhaps first-come-first-served) order. You would also see that some ads are bigger than others and some have two- or three- or four-color pictures and large type while others are no more than a name and a number. Being human, you would most likely choose the one with the biggest ad.

UDDI, on the other hand, gives you all of the companies that have signed up in that taxonomy. You might use other UDDI fields to narrow the choice a bit, but that also may exclude companies that provide similar services or products or that didn't end up in the right taxonomy.

But what is perhaps most important to a company is that you don't simply become one of all the companies that provide the service in Seattle, Washington; you become one of all the companies that provide the service in the world. It's a highly efficient solution from a programming standpoint, but from the standpoint of a company's marketing manager, it's absolute suicide - no advertising, no real way of differentiating yourself, no game plan. In fact, the chances of being selected by someone wanting your wares are largely dependent upon your place in the hierarchy.

Some retrenchment has occurred in this position. Now UDDI is being touted as a way (assuming you've already selected a company to work with) to know which Web services they have as well as to provide connections to key people in the organization. This last point of course is a bonanza for headhunters, corporate snoops, and anyone who may be disaffected with the company - not to mention breaching the social firewall that a well-trained receptionist usually provides for a company.

This gets back to the Green Pages, which in many respects are the only pieces of UDDI worth considering, from a corporate standpoint, though again not necessarily for the reasons that are promoted.

One useful way of thinking about a Web service is that it's an API function for a specific server. The aggregate of all the Web services on all of the host servers in a company essentially makes the company into a document object model. Thus, you could effectively create myCompany.financial.purchaseOrder with the specific service myCompany.financial.purchaseOrder.send(). The Green Pages provide descriptions of the companies' Web services, including the expected parameters and result sets.
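
To see how that dotted-path view of a company might translate into code, here is a small, hypothetical sketch: attribute access builds the Web address, and calling the leaf posts a SOAP-style payload to it. The host name and the purchaseOrder service are invented for illustration.

```python
# A sketch of the "company as document object model" idea: dotted attribute
# access is translated into a Web address, and calling the leaf posts the
# payload. The host name, paths, and the purchaseOrder service are hypothetical.
import urllib.request

class ServiceNode:
    def __init__(self, base_url: str, path: str = ""):
        self._base_url = base_url
        self._path = path

    def __getattr__(self, name: str) -> "ServiceNode":
        # myCompany.financial.purchaseOrder -> financial/purchaseOrder
        new_path = f"{self._path}/{name}" if self._path else name
        return ServiceNode(self._base_url, new_path)

    def __call__(self, payload_xml: str) -> str:
        url = f"{self._base_url}/{self._path}"
        request = urllib.request.Request(
            url,
            data=payload_xml.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8"},
        )
        with urllib.request.urlopen(request) as response:
            return response.read().decode("utf-8")

myCompany = ServiceNode("http://www.mycompany.example")
# myCompany.financial.purchaseOrder.send("<purchaseOrder>...</purchaseOrder>")
# would become a POST to http://www.mycompany.example/financial/purchaseOrder/send
```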

This information can be quite useful, but building a UDDI infrastructure to deliver it adds a layer that could be replaced just as easily by an e-mail or phone call ("Hey Joe, send me the link to your WSDL - Fred"). In many respects that informal approach is far more secure than posting the information in open repositories, and it makes it easier to keep such a system up-to-date.

The Intrabusiness Solution
I think that business-to-business Web services solutions will eventually come, but not for a number of years. The principal reason for this is that currently the B2B approach to Web services seems arbitrarily tacked onto the existing business infrastructure. It requires a great deal of cooperation between people who ordinarily don't even talk to one another, pre-empts many major roles in a company (marketing, sales, even management solutions), and exposes companies to the dual possibility of buggy software and deliberate hacking. Finally, in this day and age of layoffs and companies paring to the bone, the adoption, integration, and management of such external points of contact are hard to justify when existing systems work adequately and the cost of the new systems (in terms of developer time and potential disruption) is simply too great.

Yet I would argue that there are three areas where Web services make perfect sense: intrabusiness APIs, communication systems, and device-to-device telematics. None of these are being pushed as sexy solutions, in great part because the boundaries they cross don't offer the potential to make a profit - though they can significantly reduce costs.

Intrabusiness APIs First
How many computer applications does a typical company have? Even a small company probably has dozens - between commercial applications like word processing programs and internal applications developed to fulfill specific needs within the company. Some of these applications are finding their way to portals, single points of contact within a company, but in many cases people need access to the data directly rather than to someone else's view of that data.

Suppose for a moment that you have several databases with different types of data that can be extracted for use. A typical developer needs to determine where that data is, move it into a convenient form for processing, then write the requisite code for manipulating that data.

In a small shop, it's likely that one person has written most of the applications and can generally tell you when duplications occur or where some piece of software mirrors something that already exists. On the other hand, if you have a company with distinct facilities and distributed IT departments, then in all likelihood you'll run into situations where the same or similar tools are developed (sometimes repeatedly); where multiple versions of tools are in circulation at any given time; and where incompatibilities and gridlock soon develop. This is especially true in places that develop class libraries. Such incompatibilities can be both costly and frustrating to resolve.

Web services actually work ideally here. Consider the case of Acme Widgets. They have run into the problems described earlier, and in the wake of layoffs are now struggling with too many applications that are either unsupported or suffer from incompatibilities between versions. The IT manager, Sheila Jenkins, looks at the company's needs and sets up a set of Web services APIs that developers can start referencing in their own work. The services include version information, and multiple versions of each API are maintained as they are developed. This means that a programmer writing against the API needs only to specify the version in a Web services call to ensure that the code continues working even as the system evolves.
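
A minimal sketch of what that version pinning might look like follows, with hypothetical service names and version numbers. The point is simply that every published revision of a service stays callable, so the callers who wrote against an older revision keep working.

```python
# A sketch of version-pinned service dispatch: each published revision of a
# service stays registered, and the caller names the version it was written
# against. Service names and versions here are hypothetical.
from typing import Callable, Dict, Tuple

_registry: Dict[Tuple[str, str], Callable[[dict], dict]] = {}

def publish(service: str, version: str):
    """Register one immutable version of a service implementation."""
    def decorator(handler: Callable[[dict], dict]):
        _registry[(service, version)] = handler
        return handler
    return decorator

def call(service: str, version: str, payload: dict) -> dict:
    """Clients pin a version so later revisions never break them."""
    return _registry[(service, version)](payload)

@publish("research.progress", "1.0")
def progress_v1(payload: dict) -> dict:
    return {"status": "ok", "fields": ["project", "milestone"]}

@publish("research.progress", "1.1")
def progress_v1_1(payload: dict) -> dict:
    # Adds a field; callers pinned to 1.0 are untouched.
    return {"status": "ok", "fields": ["project", "milestone", "owner"]}

print(call("research.progress", "1.0", {}))   # old callers keep working
print(call("research.progress", "1.1", {}))   # new callers opt in
```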

In addition, applications written against these services are themselves designed as Web services that run on local servers. One day, Jennie Martin, a programmer in R&D, writes an application for tracking the progress of research efforts and making information available in a wide variety of forms. The program works well and becomes popular. In a review of local services, Sheila decides to promote Jennie's application to the level of a company-wide API. A new version of the program is set up, and older versions that ping against the local servers basically do nothing but link to the more recent versions of the API on the company servers.

An Organic API
Over time, then, an organic API develops for the company. The limitations of versioning actually become an advantage, and increasingly the company's code becomes an asset in its own right, one that could in turn be sold to other companies as a product. Documentation becomes simpler as well, as any API version would be designed to include its own documentation.

Finally, because Web services are effectively language-agnostic, they can be developed in any language - C++, VB, Java, JavaScript, XSLT, Perl. It doesn't really matter. This is not due to any deep IDL or other "magic" technology. Rather, a Web service is simply an abstraction, an interface, into the actual API set.

The fact that such Web-service implementations are now appearing for most languages (including several that are nonproprietary or Open Source) effectively makes it possible to achieve that holy grail of programming: language-independent code. It does this pretty much by default; not by enforcing a single code base as Java does but by rendering the interfaces in XML and the implementation in the language of choice. This isn't all that different from the way that .NET operates, save that Microsoft approaches the problem by compiling each language down to a common intermediate language, then compiling that.

The idea of Web services as the basis of organic, internal APIs fits well into the XML paradigm as well. XML has this curious characteristic anyway: once it finds its way into a system, it tends over time to become a pervasive part of that system. I think that's because XML is an abstraction layer that makes the interconnections between data, code, and documentation become far more obvious than they tend to be in a procedural, object-oriented world. Web services similarly abstract not just the implementation but also the language and connection points of the interface, turning API calls into Web addresses.

The Irony of Web Services
The irony of an organic intracompany Web services package is that it will, in the end, provide the basis for a true business-to-business Web services environment.

One of the principal problems with Web services is security, which is generally less of an issue on an internal network. Once the intra-company Web services network is created, there's no reason that a separate set of APIs couldn't be written that simply wrap the existing APIs in a stronger security layer. Thus, rather than spending huge amounts of money and resources now trying to tack on yet another set of commercial interfaces into an already stressed system, organic intranets let outside users leverage the unique expertise of the company's tools and resources at a much lower cost in an incremental fashion.
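
As a rough illustration of that wrap-don't-rewrite approach, here is a hypothetical sketch: the internal service is left untouched, and an outward-facing facade performs an authentication check before delegating to it. The token scheme shown is deliberately simple and stands in for whatever credential system a company actually adopts.

```python
# A sketch of wrapping an existing internal Web service in a security layer
# for outside callers. The internal service and the HMAC token scheme are
# hypothetical placeholders for a real credential system.
import hmac

_INTERNAL_ONLY_SECRET = b"rotate-me"          # placeholder shared secret

def internal_get_inventory(part_number: str) -> dict:
    # The pre-existing intranet Web service, left unchanged.
    return {"part": part_number, "on_hand": 42}

def external_get_inventory(part_number: str, token: bytes) -> dict:
    """Externally exposed wrapper: authenticate, then call the internal API."""
    expected = hmac.new(_INTERNAL_ONLY_SECRET, part_number.encode(), "sha256").digest()
    if not hmac.compare_digest(token, expected):
        raise PermissionError("caller not authorized for this service")
    return internal_get_inventory(part_number)
```

The key is that external access arrives incrementally, layered on top of a system that already works.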

This last point can't be stressed enough. There are some deep systemic problems with the way software is created now because it is still seen largely as a product that must be presented all at once. In my experience, however, software grows within a company: new programs are added to solve problems, older programs get archived to handle legacy systems, periodically a program is killed, and a system is upgraded when the cost of maintenance becomes too high to justify continued use of the project. Occasionally you get systems management consultants who come in and attempt to impose top-down re-architecture, but not surprisingly most of these initiatives end up failing because this approach neglects the fact that software evolves in response to need.

Web services in that regard represent a profound shift because a Web services system can change incrementally. For example, consider a document-management system based on Web services. The system is Web-based, with a browser for a front end that configures its menus and other options based upon a call to a Web service. One night, a Web services administrator adds support for a new document type to the system, creates an incremental version change to the Web services that support the file chooser, and notifies the master versioning system that a new version has been added. When an editor opens up the application in her browser the next morning, a listing of all changes that took place throughout the night appear in the initial splash screen - and when she goes to open files she discovers that she can now read, edit, and write the new document format.
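
A small sketch of that pattern follows, with the configuration service simulated in-process and its response shape invented for illustration. The front end asks a Web service for its configuration at startup, so the overnight change simply shows up the next morning with no client redeployment.

```python
# A sketch of the overnight-upgrade scenario: the front end asks a Web
# service for its configuration at startup, so a document type added on the
# server appears in the client without a redeploy. The configuration service
# is simulated in-process here; its response shape is hypothetical.
def fetch_menu_config(version: str = "latest") -> dict:
    # In the real system this would be a call to the document-management
    # Web service; here we just return what the server might send back.
    return {
        "version": "2.4",
        "changes": ["Added support for the .odt document type"],
        "file_types": [".txt", ".rtf", ".odt"],
    }

def build_file_chooser() -> None:
    config = fetch_menu_config()
    print("What's new:", "; ".join(config["changes"]))
    print("Openable types:", ", ".join(config["file_types"]))

build_file_chooser()
```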

This transparency means that a basic system can be created and then, as people respond to the application, it can be shaped to more accurately represent the requirements for the app. Is a particular feature especially disliked? The feature can be made to go away, but, because of versioning, people who found they preferred the older system can choose to work on earlier versions. Note that this is obviously not perfect - you will undoubtedly run into situations where a user has to choose between a feature they liked and new functionality that a later version added. But compared to the current deployment hell that most systems administrators run into every day, a problem like this would be fairly minor.

The one area this approach stresses is user education: new features take time to learn. If, however, one of the requirements for promoting a feature to a production-level server is the creation of adequate documentation for that feature, then you can simultaneously solve the problem of documenting the application as a whole (by distributing the writing of such documentation) and of ensuring that features are sufficiently well constructed that documentation can be written for them.

Moreover, currently upgrading to new software packages often involves learning significantly different ways of doing things, not to mention all of the bells and whistles that were added for marketing purposes but that users need to evaluate for their own needs. Incremental development and deployment minimize this.

I think this particular use of Web services will ultimately become one of two dominant ones. It's instructive to note that for all the hype that many of the larger software vendors make about Web services being destined for B2B systems and interchanges, developers who are creating pilot and test projects using Web services are finding they are increasingly their own best customers of these services. Organic development is already under way in a lot of companies, even if few of them would admit that this is precisely what's happening. Time will tell.

Peering into Peer-To-Peer
There is another aspect to Web services that will likely emerge in the same organic, subtle way, though it will take longer for it to happen.

The Internet has two operating models at the moment. The first was a major reason the Web exploded in the first place: when Tim Berners-Lee wrote the first Web browser, he also, simultaneously, wrote the first Web server. At the time, these two pieces of software were seen as analogous to a short-wave radio - the server sent messages out while the browser received them. So long as they were in the same box, it was easy to see the nascent Web as a collection of peers.

In the early 1990s, however, as the Web exploded, the demand to access the Web exceeded the computational and network capabilities that existed. More and more people accessed the Web through dial-up accounts because the cost of owning a T-1 (or even a fractional T-1) was so high.

This discrepancy turned the Web into a hub-and-spoke system where large ISPs invested heavily in infrastructure to support the biggest bandwidth connections. They then leased these to intermediate providers who made the services available to people with slow dial-up connections. The Web became strongly hierarchical, which meant that someone wanting to set up a Web site on their own machine was serving it at a thousandth of the speed that the most powerful servers were sending information.

The Second Phenomenon
A second, related phenomenon has been the effort on the part of service providers to restrict upload access to the Web. In some places this involved a cap on uplink speeds; in others it meant using DHCP to parcel out increasingly scarce IP addresses dynamically. IPv6 is intended to solve this latter problem, but for at least some server vendors, a move to IPv6 will likely take some time.

In other words, an increasing asymmetry has been developing between those who provide content and those who consume it. The current vision of Web services tends to strengthen this asymmetry, as it places a significant premium on the server to provide the applications involved. Microsoft's Hailstorm is in fact a strong example of this mindset: while touted as a distributed service, it will place a significant share of personal Web services usage inside one of the largest client/server systems the world has ever seen.

Yet, much like the organic development of Web services as intrabusiness applications, there are some intriguing hints that Web services may ultimately lead to the systematic decentralization of the Web, though it will be a process that may take years to play out. To understand why, consider again what a Web services system is. Ultimately, it's a set of APIs that abstract the server as a semantic provider of information. Thus, you can talk about a "server" labeled as www.mycompany.com/finance that provides a set of finance-oriented methods. This is the big-box view of servers.

But more and more you're seeing cell phones, PDAs, and personal laptops with wireless connections become the dominant way of connecting to the Web, and these devices pose two problems to those who prefer to see centralization of services. The first is that they're increasingly connected to the Web via always-on connections, which in turn means that dynamic IP allocation can't be used - you need a fixed, absolute address.

The second is that a Web services model actually works better with these devices than the traditional high-bandwidth HTML that has been the staple of Web pages. Most handhelds (as well as toys, a subject I'll bring up again) present, from a programming standpoint, simple and well-defined APIs - far simpler in fact than the rich API set of even the most basic general-purpose server.

Helping Bedeviled Programmers
Encoding a Web services stack into these devices could bypass many of the incompatibility and versioning issues that currently bedevil programmers working within that sphere. Such servers would be simple and relatively static, but communicating with any mobile device often requires only a few methods anyway. Moreover, it may be possible to put such service interfaces into software that could be updated through a Web service itself, so that such devices are always using the latest API. Thus, each of these devices becomes www.myPDA.phone/personal or something similar. (Actually, most such devices would just be referred to by their IPv6 address, but there's no reason why a named address couldn't work here.)

This in turn works its way up the chain. I have a laptop with a wireless Ethernet connection that turns it into a roving server. I have a second dedicated desktop server with a services connection set up that periodically queries the laptop - if it's live (and I give my permission), then any future queries get forwarded to my laptop until I disconnect.

"Client-side" Web services are much more likely to be interface-oriented, by the way, while "server-side" Web services are more typically going to be data-centric, but beyond these distinctions the real boundaries between client and server pretty much disappear.

This distinction is also disappearing in dedicated chat systems. If your system hosts a set of Web service APIs that are dedicated to a chat application, then subscribers to the same set of protocols who had (or could discover) your IP address could communicate directly with you, without connecting into a central server.
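
As a rough sketch of that idea, here is a hypothetical peer endpoint in Python: each participant runs this tiny server and posts messages straight to the other peer's address once it has been discovered, with no central relay involved. The /receiveMessage path and the port number are invented for illustration.

```python
# A sketch of direct peer-to-peer chat over a Web-service-style endpoint:
# each peer runs this tiny server and posts messages straight to the other
# peer's address, with no central server brokering the exchange.
# The /receiveMessage path and port are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

PORT = 8099

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/receiveMessage":
            length = int(self.headers.get("Content-Length", 0))
            message = self.rfile.read(length).decode("utf-8")
            print(f"[{self.client_address[0]}] {message}")
            self.send_response(200)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

def send_message(peer_address: str, text: str) -> None:
    """Post a message directly to another peer's service."""
    urllib.request.urlopen(
        urllib.request.Request(
            f"http://{peer_address}:{PORT}/receiveMessage",
            data=text.encode("utf-8"),
            method="POST",
        )
    )

if __name__ == "__main__":
    HTTPServer(("", PORT), ChatHandler).serve_forever()
```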

Underground Systems
These systems would start out as largely "underground" because for those who currently are trying to profit from the Web, the only real way to do it is to become the intermediary who brokers the interactions between people (by charging a fee either for access to the network or for each transaction). Yet, in time, perhaps three to five years, this form of specialized chat (or file sharing, which SOAP would be ideal for) will probably render most instant-messaging and similar services in existence today obsolete.

I think in the long term these Web services will also reduce our current browsers to the status of quaint curiosities. A browser is in effect an early Web service client: it makes a request to a server for a set of data that happens to be a document of some sort (or a media object), and then has the intelligence to convert the response into a workable interface called a Web page.

As XML and XSLT become more prevalent, the role of the browser becomes subsumed by the operating system; if you can define an interface via XML (it doesn't have to be XHTML, by the way - just the instructions that tell where specific widgets are placed, what hooks they have within them to data, and how these widgets fit with other services), then the whole notion of interface programming over time becomes a function of XML-based services. Already, you see this with XUL, the language that Mozilla uses to describe its own interfaces.
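
To illustrate the general idea (with a made-up dialect, not XUL itself), here is a toy sketch that parses a small XML widget description and simply prints the resulting layout. The point is only that the widget arrangement and its data hooks live in the XML rather than in compiled interface code; the element and attribute names are invented.

```python
# A toy illustration of interface-as-XML: a small, made-up widget dialect
# (not XUL itself) is parsed and "rendered" as indented text. The layout and
# the service hooks live entirely in the XML description.
import xml.etree.ElementTree as ET

UI_XML = """
<window title="Purchase Orders">
  <button label="Refresh" service="myCompany.financial.purchaseOrder.list"/>
  <list source="purchaseOrders"/>
</window>
"""

def render(element: ET.Element, depth: int = 0) -> None:
    pad = "  " * depth
    attrs = " ".join(f'{k}="{v}"' for k, v in element.attrib.items())
    print(f"{pad}{element.tag}: {attrs}")
    for child in element:
        render(child, depth + 1)

render(ET.fromstring(UI_XML))
```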

In many ways, this is what I see as the true personal Web services - the abstraction of interfaces, the decentralization of applications, transient networks built upon protocols that themselves can shift and change in response to requirements, the movement away from large, concentrated systems and toward systems that encourage what the Internet was originally designed for: communication between people.

More Stories By Kurt Cagle

Kurt Cagle is a developer and author, with nearly 20 books to his name and several dozen articles. He writes about Web technologies, open source, Java, and .NET programming issues. He has also worked with Microsoft and others to develop white papers on these technologies. He is the owner of Cagle Communications and a co-author of Real-World AJAX: Secrets of the Masters (SYS-CON books, 2006).
