Identifying and Brokering Mathematical Web Services


An important part of the Web service vision being promoted by the World Wide Web Consortium (W3C) and others is that of automated service discovery: when we need a particular kind of service, we will no longer have to go out and search for it manually; our computer will do it for us.

Nowhere is this more necessary than in scientific computation. International collaboration here is already the norm and is increasingly being supported by the Grid, where specialized resources are connected by high-speed, high-bandwidth networks. To realize this vision requires mechanisms for describing what services actually do, and for reasoning about those descriptions in the presence of a user's problem.

MONET: Mathematics on the NET

The MONET project is a two-year investigation into how service discovery can be performed for mathematical Web services, funded by the European Union under its Fifth Framework Programme. The project focuses on mathematical Web services for two reasons. First, mathematics underpins almost all areas of science, engineering, and, increasingly, commerce, so a suite of sophisticated mathematical Web services will be useful across a broad range of fields and activities. Second, the language of mathematics is already fairly well formalized, so in principle it ought to be easier to work in this field than in some other, less well-specified areas.

According to the W3C, the key to service discovery will be provided by the Semantic Web, where information is encoded using formal languages or ontologies whose meaning is well defined and unambiguous. Given a set of service descriptions and a job specification written using a suitable collection of ontologies, a software agent ought to be able to select the appropriate service for the job and generate any proxies needed to enable the interaction between the client and the service.

This is fine in theory, but making it work in practice is rather difficult. It is actually quite hard to describe what a service does, particularly when it is based on an existing piece of mature software whose semantics are not fully documented. Moreover, the software agent needs to understand the relationships between the high-level specifications described by the ontologies and the low-level service interfaces, which at present are likely to be written in WSDL.

The MONET consortium includes commercial companies, national research laboratories, and universities, and aims to develop mechanisms that can be freely used by scientists worldwide. Details of the technology it is developing can be downloaded from the project Web site at http://monet.nag.co.uk.

The Problem
While for many areas of mathematics there are generic algorithms that can, in theory, solve many problems in that class, in practice most of the algorithms scientists use are designed to address a relatively narrow range of problems. Specialized algorithms are generally much more efficient and predictable in terms of resource usage and can yield more accurate results. This leads to an interesting question about mathematical Web services: Will they be fairly monolithic and address a whole class of problem, or will each algorithm be a separate service?

The advantage of deploying Web services that address whole classes of problems is that it makes the process of matching problem to service much easier. The software doing the matching does not need to understand the more subtle features that are used to distinguish the subclasses; this kind of specialized knowledge is part and parcel of the service. On the other hand, specialized services are more lightweight from a developer's point of view, and reflect the reality of how mathematical software is developed and used in practice. The MONET project takes the view that you need to be able to capture arbitrarily detailed information about the applicability of a service.

The MONET Architecture
A simple example of the kind of interaction MONET aims to facilitate is shown in Figure 1.

[Figure 1: a client discovering and invoking a mathematical Web service through the broker]

There are three "actors" in this process. The first is a group of services, the second is a client, and the third is a piece of software called the broker; for convenience we distinguish between several of its components, which undertake distinct, well-defined tasks.

The process of the client discovering an appropriate service and then invoking it can be broken down into five steps (sketched in code after the list):

1.   Registration: The services register their capabilities, access policies, etc., with the broker's service manager.
2.   Inquiry: The client sends the broker a description of the kind of service it is looking for. This description may be generic (e.g., "find me a service that performs definite integration") or specific (e.g., "find me a service to solve this particular problem").
3.   Analysis: The planning manager inside the broker analyzes the problem and extracts the criteria on which to select a service, which it then matches against the registry maintained by the service manager. If it finds one or more possible matches, their details are returned to the client along with an indication of how closely they fit its requirements.
4.   Selection: The client selects a suitable service and requests access to it via the broker's service manager. What this entails depends on the access policies of the service and the particular service infrastructure being used; in the case of grid services, for example, where every abstract service is actually a factory, a new service instance would be generated and a handle returned to the client.
5.   Connection: If access is granted, then the client initiates a connection to the service.
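
The sketch below mocks up these five steps in a few lines of Python. It is purely illustrative: the class and method names are invented for this article, and a real MONET broker exchanges MSDL documents over Web service protocols rather than Python objects.

```python
# Toy sketch of the five-step interaction described above.  All names here
# (Broker, ServiceEntry, ...) are invented; a real MONET broker exchanges
# MSDL documents over SOAP/WSDL rather than passing Python objects around.
from dataclasses import dataclass, field


@dataclass
class ServiceEntry:
    name: str
    endpoint: str
    problem: str                 # reference into a problem library, e.g. "definite_integration"
    access_policy: str = "open"


@dataclass
class Broker:
    registry: list[ServiceEntry] = field(default_factory=list)

    # 1. Registration: services describe their capabilities to the service manager.
    def register(self, entry: ServiceEntry) -> None:
        self.registry.append(entry)

    # 2-3. Inquiry and analysis: the planning manager extracts matching criteria
    # from the client's request and consults the registry.
    def inquire(self, problem: str) -> list[ServiceEntry]:
        return [s for s in self.registry if s.problem == problem]

    # 4. Selection: the client asks for access; the service manager applies the
    # service's access policy and returns an endpoint (or, in the grid-services
    # case, a handle to a freshly created service instance).
    def request_access(self, entry: ServiceEntry) -> str | None:
        return entry.endpoint if entry.access_policy == "open" else None


broker = Broker()
broker.register(ServiceEntry("quadrature", "http://example.org/defint", "definite_integration"))

candidates = broker.inquire("definite_integration")
endpoint = broker.request_access(candidates[0])
# 5. Connection: the client now invokes the service at `endpoint` directly.
```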

This is a simple scenario and MONET has been designed to work with more complex and less easily defined problems. In practice, the broker might employ other, more sophisticated planning services to help it with the matching process, and the result might not be a list of candidate services but a list of execution plans based on the composition of multiple services using a suitable choreography language such as BPEL4WS. However, in all cases the key to making the process work is a sophisticated mechanism for describing problems and services that allows for the effective matching of one to the other.

Mathematical Service Description Language (MSDL)
MSDL is the collective name for the language we use to describe problems and services. Strictly speaking, it is not itself an ontology but it is a framework in which information described using suitable ontologies can be embedded. One of the main languages we use is OpenMath, which is an XML format for describing the semantics of mathematical objects. Another is the Resource Description Framework Schema (RDFS), which is a well-known mechanism for describing the relationship between objects. The idea is to allow a certain amount of flexibility and redundancy so that somebody deploying a service will not need to do too much work to describe it.
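
As a concrete illustration of the OpenMath part, the snippet below builds a small OpenMath object for the expression sin(x) + 1 using only Python's standard library. The element names (OMOBJ, OMA, OMS, OMV, OMI) come from the OpenMath XML encoding; the choice of content dictionaries (arith1, transc1) is ours and is meant only as an example.

```python
# Illustrative construction of an OpenMath object for sin(x) + 1.
# OMOBJ/OMA/OMS/OMV/OMI are element names from the OpenMath XML encoding;
# the content dictionaries used (arith1, transc1) are our own choice and
# should be checked against whatever CDs a given service advertises.
import xml.etree.ElementTree as ET

OM_NS = "http://www.openmath.org/OpenMath"
ET.register_namespace("", OM_NS)

def om(tag, **attrs):
    """Create an element in the OpenMath namespace."""
    return ET.Element(f"{{{OM_NS}}}{tag}", attrs)

obj = om("OMOBJ")
plus = om("OMA")                                   # application node: plus(sin(x), 1)
plus.append(om("OMS", cd="arith1", name="plus"))   # head symbol: arith1.plus
sin = om("OMA")
sin.append(om("OMS", cd="transc1", name="sin"))    # transc1.sin applied to...
sin.append(om("OMV", name="x"))                    # ...the variable x
plus.append(sin)
one = om("OMI")
one.text = "1"                                     # integer literal 1
plus.append(one)
obj.append(plus)

print(ET.tostring(obj, encoding="unicode"))
```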

An MSDL description comes in four parts:

  1. A functional description of what the service does
  2. An implementation description of how it does it
  3. An annotated description of the interface it exposes
  4. A collection of metadata describing the author, access policies, etc.

MSDL Functional Description
There are two main ways in which it is possible to describe the functionality exposed by a service. The first is by reference to a suitable taxonomy such as the "Guide to Available Mathematical Software (GAMS)" produced by NIST, a tree-based system where each child in the tree is a more specialized instance of its parent. This has the twin advantages of being easy to do and of providing a hook into other taxonomy-based classification systems such as UDDI. The disadvantages are that fixed taxonomies fail to capture the evolving nature of mathematical algorithms, and a particular taxonomy may not be rich enough in certain areas (for example, GAMS makes detailed distinctions between software for numerical analysis while lumping all software for symbolic computation into one category). Moreover, while it is easy enough to grow the taxonomy from the leaves, adding internal nodes disrupts the inheritance structure.
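
Because a GAMS code names a node in a tree, with longer codes denoting more specialized children, a crude form of taxonomy matching is simply a prefix test, as in the sketch below (the class codes and service URLs are placeholders, not taken from the real GAMS index).

```python
# Illustrative matching against a GAMS-style taxonomy, where a code such as
# "H2a1" names a child (more specialized problem class) of "H2a".  The codes
# and URLs below are placeholders used only to show the prefix test.
def is_subclass(service_class: str, query_class: str) -> bool:
    """True if the service's class is the query class or a descendant of it."""
    return service_class.startswith(query_class)

registry = {
    "http://example.org/quad":   "H2a1",   # hypothetical: one-dimensional quadrature service
    "http://example.org/odeivp": "I1a",    # hypothetical: ODE initial-value service
}

query = "H2"   # "anything that performs numerical integration"
matches = [url for url, cls in registry.items() if is_subclass(cls, query)]
print(matches)   # -> ['http://example.org/quad']
```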

The second way to describe the functionality exposed by a service is by reference to a Mathematical Problem Library, which describes problems in terms of their inputs, outputs, preconditions (relationships between the inputs), and post-conditions (relationships between the inputs and outputs).

For example, the problem of finding the minimum value of an expression subject to simple bounds on its parameters, where both the expression and its partial derivatives are provided, might be expressed as follows:

Inputs
1. $F(v_1, \dots, v_n): \mathbb{R}^n \rightarrow \mathbb{R}$
2. $A \subset \mathbb{R}^n$
3. $\{D_i(v_1, \dots, v_n): \mathbb{R}^n \rightarrow \mathbb{R},\ i = 1, \dots, n\}$

Outputs
1. $x \in A$
2. $f \in \mathbb{R}$

Pre-conditions
1. $D_i = \partial F / \partial v_i,\ i = 1, \dots, n$

Post-conditions
1. $F(x) = f$
2. $\forall y \in A: F(y) \geq f$

One important use of the Mathematical Problem Library is to provide names for the various objects that form parts of the problem (in this case F, A, D_i, x, and f). These names are used both in formulating a concrete problem instance and also, as we shall see later, in understanding how to construct the messages defined in the WSDL file.
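
To make the structure concrete, the sketch below renders the minimization entry above as plain data. The field names are ours, not MSDL's (which encodes all of this in XML using OpenMath), but they show how the named objects tie the pieces together.

```python
# Invented, simplified rendering of the bound-constrained minimization entry
# above; MSDL itself encodes this in XML with OpenMath, so every field name
# here is purely illustrative.
problem_entry = {
    "name": "constrained_minimization",
    "inputs": {
        "F": "objective function F: R^n -> R",
        "A": "feasible region, a subset of R^n defined by simple bounds",
        "D": "partial derivatives D_i: R^n -> R for i = 1..n",
    },
    "outputs": {
        "x": "a point of A at which the minimum is attained",
        "f": "the minimum value, a real number",
    },
    "pre_conditions": ["D_i = dF/dv_i for i = 1..n"],
    "post_conditions": ["F(x) = f", "F(y) >= f for every y in A"],
}

# The names F, A, D, x and f give client and broker a common vocabulary:
# a request can bind F to a concrete OpenMath expression, and the interface
# description can map the same names onto parts of the service's WSDL messages.
```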

There are a number of other pieces of information that can be included in the functional description. A directive is used to indicate something about the approach the service takes to a user's problem. In the above case the directive would usually be solve, but an alternative might be prove: given particular inputs and outputs, plus the preconditions, prove that the post-conditions hold. It is also possible to include statements in other formalisms such as RDFS or the emerging Web Ontology Language (OWL) being developed by the W3C.

While this may seem to involve a certain amount of redundancy, the various parts of the functional description can, in practice, be highly complementary. For example, the entry in the problem description library might indicate that the problem involves the solution of a particular kind of differential equation while the taxonomy reference would add the fact that the equation is stiff.

MSDL Implementation Description
The MSDL Implementation Description provides information about the specific implementation that is independent of the particular task the service performs. This can include the specific algorithm used (by reference to an Algorithm Library), the type of hardware the service runs on, and so on.

In addition, it provides details of how the service is used. This includes the ability to control the way the algorithm works (for example, by limiting the number of iterations it can perform or by requesting a specific accuracy for the solution), and also the abstract actions that the service supports. While in the MONET model a service described in MSDL solves only one problem, it may do so in several steps. For example, there may be an initialization phase, then an execution phase that can be repeated several times, and finally a termination phase. Each phase is regarded as a separate action supported by the service.
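
The sketch below illustrates the idea of one problem solved through several abstract actions; the class, its phases, and its controls are invented for illustration, since MSDL records the actions declaratively rather than as code.

```python
# Invented sketch of a service that solves a single problem in several phases,
# each of which MSDL would list as a distinct abstract action.  The algorithm
# and the controls shown (iteration limit, accuracy) are placeholders.
class IterativeSolverService:
    def initialise(self, problem, max_iterations=100, accuracy=1e-8):
        """Initialization phase: store the problem and the controls that
        limit the algorithm (iteration count, requested accuracy)."""
        self.problem = problem
        self.max_iterations = max_iterations
        self.accuracy = accuracy
        self.iterations_done = 0

    def step(self):
        """Execution phase: may be invoked repeatedly until converged."""
        self.iterations_done += 1
        converged = self.iterations_done >= self.max_iterations  # placeholder test
        return converged

    def terminate(self):
        """Termination phase: release resources and return the result."""
        return {"iterations": self.iterations_done}
```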

MSDL Interface Description
While WSDL does a good job in describing the syntactic interface exposed by a service, it does nothing to explain the semantics of ports, operations, and messages. MSDL has facilities that relate WSDL operations to actions in the implementation description, and message parts to the components of the problem description. In fact the mechanism is not WSDL-specific and could be used with other interface description schemes such as IDL.
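
Conceptually, then, the interface description is a set of correspondences between concrete interface elements and abstract descriptions. A hypothetical rendering as plain data might look like the following, where every WSDL operation and message-part name is invented.

```python
# Hypothetical mapping between a service's WSDL interface and its MSDL
# descriptions; every identifier here is invented for illustration.
interface_binding = {
    "operations": {
        # WSDL operation  -> abstract action in the implementation description
        "initialiseSolver": "initialise",
        "runIteration":     "step",
        "retrieveSolution": "terminate",
    },
    "message_parts": {
        # WSDL message part -> named object in the problem description
        "objectiveFunction": "F",
        "feasibleRegion":    "A",
        "derivatives":       "D",
        "minimiser":         "x",
        "minimumValue":      "f",
    },
}
```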

Conclusions
There are many other aspects of Web services that are still being worked out, not least the ability to negotiate terms and conditions of access, to measure the quality of the actual service provided, and to choreograph multiple services to solve a single problem. The project partners' ultimate goal is to develop products and services based on the MONET architecture, but the viability of this depends to a large extent on solutions to these other emerging issues. While we are confident that this will happen, it is not yet clear what the timescale will be.

The MONET project is currently building prototype brokers that can reason about available services using MSDL descriptions encoded in the W3C's OWL. We are also investigating the applicability of this technology to describing services deployed in the Open Grid Service Architecture (OGSA).

Useful Resources

  • MONET: http://monet.nag.co.uk
  • OpenMath: www.openmath.org
  • OGSA: www.ggf.org/ogsa-wg/
  • BPEL4WS: http://www-106.ibm.com/developerworks/library/ws-bpel/
  • World Wide Web Consortium: www.w3c.org
  • OWL: www.w3.org/2001/sw/WebOnt/
  • GAMS: http://gams.nist.gov
  • UDDI: www.uddi.org
About the Author

Mike Dewar is a senior technical consultant with the Numerical Algorithms Group (NAG, www.nag.com), a worldwide organization dedicated to developing quality mathematical, statistical, and 3D visualization software components.
