Distributed Parallel Computing with Web Services

A pivotal role on the back end

Web services technology has become the ubiquitous connectivity fabric amongst diverse business domains and technical camps. At the same time, distributed parallel computing is becoming the de facto architecture for managing the performance of computationally intensive, long-running programs.

So, is it counterintuitive to consider Web services when pursuing performance improvement of compute-intensive, long-running applications? It may seem that way, but Web services in fact play a critical role in not one but two areas of High Performance Computing (HPC) and distributed parallel computing:

  • Communications/deployment
  • Classification/discovery of resources
In other words, Web services play a role in the application adaptation layer and the infrastructure layer, respectively. Sensibly enough, Web services once again deliver on their promise of universal semantic and syntactic collaboration and wide acceptance in yet another coming-of-age technology.

This article looks at how the Web services scenario is unfolding in the distributed parallel computing space. First it discusses the developing infrastructure standards, followed by some definitions, and then delves into the real grid opportunity: improving application response time while balancing workload. I'll conclude with a recent project implementation.

Grid Provides the Infrastructure for Parallel Distributed Computing
The Globus project and its proposed Open Grid Services Architecture (OGSA) specification describe how Web services can facilitate the creation, life-cycle management, and security requirements of a reliable, industrial-strength Grid Services Architecture (GSA). However, it seems to address only the infrastructure and resources associated with the grid. So what about the application, you might ask? After all, the grid is only as valuable as the applications you run on it.

The most common approach to enabling an application for a grid environment is to decompose the application into smaller, independent subtasks or job "chunks," submit the jobs to the grid (schedule and deploy), and hope for the best. This paradigm provides a good solution for batch cycle applications and is well suited for coarse-grain parallelizable applications.
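To make this paradigm concrete, here is a minimal Java sketch of the chunk-and-submit idea, assuming nothing about any particular grid product: a local thread pool stands in for the scheduler, and the chunk count and the square-root "kernel" are placeholders for the real subtasks a grid would deploy to remote nodes.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of the chunk-and-submit paradigm described above. A local
// thread pool stands in for a grid scheduler; on a real grid each chunk would
// be packaged as a job and deployed to a remote node.
public class ChunkAndSubmit {
    public static void main(String[] args) throws Exception {
        double[] data = new double[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i * 0.5;

        int chunks = 8;                                  // number of independent job "chunks"
        int chunkSize = data.length / chunks;
        ExecutorService scheduler = Executors.newFixedThreadPool(chunks);

        List<Future<Double>> partials = new ArrayList<>();
        for (int c = 0; c < chunks; c++) {
            final int from = c * chunkSize;
            final int to = (c == chunks - 1) ? data.length : from + chunkSize;
            // Each chunk runs the same instructions on a different slice of the data.
            partials.add(scheduler.submit(() -> {
                double sum = 0.0;
                for (int i = from; i < to; i++) sum += Math.sqrt(data[i]);
                return sum;
            }));
        }

        double total = 0.0;
        for (Future<Double> f : partials) total += f.get();  // gather the partial results
        scheduler.shutdown();
        System.out.println("Aggregated result: " + total);
    }
}

The point of the sketch is the shape of the work, not the arithmetic: independent chunks go out, partial results come back, and a single aggregation step closes the job.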

But what about standards that provide best practices, support, and tools for distributing more complex applications, sometimes referred to as "fine-grain parallelizable" applications?

Furthermore, many have argued that such services are orthogonal to what OGSA addresses. Yes, I agree that the specification is about the infrastructure, but only for now. Remember the debate about middleware, EAI, application servers, and the J2EE stack? I foresee the same line of argument regarding distributed parallel computing coming soon to a grid near you. In fact the two technologies - J2EE and grid - have very similar characteristics: services, specifications, best-practice patterns, supporting technology (middleware, runtime containers), and desirable qualities of service. And in both cases Web services play a key role. Perhaps the overriding similarity is n-tiered distributed computing.

Parallel Distributed Computing and the Grid
First, let's talk about grid and distributed parallel computing - or are they the same thing? Some say it's a matter of definition again. This is especially true for uncharted technology territories: there seem to be as many definitions as opinions. However, some commonly used terms and concepts have been emerging. This is not surprising, as parallel computing has been around for quite a while - for example, parallel execution on symmetric multiprocessing (SMP) machines or massively parallel computers. Programming to leverage these configurations required executing multiple parallel threads and sharing common memory - fast memory, that is. Silicon Graphics machines and a number of mid-range to high-end Sun and AIX multiprocessing machines are fine examples of parallel computing machines - the next best thing to Cray supercomputers!

These SMP configurations scale up by adding CPUs (typically to a maximum of 64). You add more processors until you can't add any more. But what happens when your application isn't completing fast enough and you need a 65th CPU? The only choices that don't require reprogramming are to buy a system with more powerful CPUs or one that supports more CPUs. If that doesn't help, the outlook is either chunking and distributing the work - or bleak.

So the real distinction between distributed parallel processing and parallel processing is access to the data that would have been in shared memory in the parallel configuration. This could require a minor change, with some data selection and movement logic added, or it could be a major consideration, because moving the data among the processors executing the instances of the application could take more time than the additional CPUs save. Typically, distributed parallel computing involves dozens or hundreds of computers - the grid or compute farm - concurrently running components of an application.

I saved for last the notion of running an application in parallel. In a nutshell, parallel computing exploits concurrency of execution, so no arguments here: it is parallel, concurrent computing as opposed to serial computing. There is only one thing missing from the alphabet soup, the main character: the application.

Applications: Making Fast Faster
High Performance Computing is about doing more with less - or making programs that already run fast run faster. Consider an HPC application currently running as fast as possible using parallel computing (multiple system-resident CPUs). Moving this application to a multiple-system configuration typically requires reengineering the program. That diverts developer focus away from new projects, and may require recertifying the application and hiring engineers conversant in distributed computing and data considerations. Moreover, most programs were designed with sequential flows in mind, and programmers (the mere mortals among them, at least) typically think in terms of sequential flows; they are most effective at designing, writing, and debugging sequential programs, even if some are fluent in multitasking techniques. So where is the magic?

So the options to move programs from sequential to SMP machines are:

  • Use special parallelizing compilers that leverage the multiple CPUs; at best these offer less-than-optimal improvements without major tuning.
  • Use a language that supports threads and write multithreaded programs (a minimal hand-rolled sketch follows this list). Multithreaded code is certainly no piece of cake to develop or maintain - not to mention what happens to the application when "the thread guru" moves on to bigger and better things.
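For illustration, here is a minimal sketch of that second option, assuming a plain SMP box and hand-rolled Java threads (no grid, no middleware); the array-summing workload is a placeholder for the real compute kernel.

// Sketch of the "write multithreaded programs" option on an SMP box:
// hand-rolled threads, manual partitioning, and an explicit join barrier.
public class HandRolledThreads {
    public static void main(String[] args) throws InterruptedException {
        double[] data = new double[4_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int nThreads = Runtime.getRuntime().availableProcessors();
        double[] partial = new double[nThreads];
        Thread[] workers = new Thread[nThreads];
        int slice = data.length / nThreads;

        for (int t = 0; t < nThreads; t++) {
            final int id = t;
            final int from = t * slice;
            final int to = (t == nThreads - 1) ? data.length : from + slice;
            workers[t] = new Thread(() -> {
                double s = 0.0;
                for (int i = from; i < to; i++) s += data[i];
                partial[id] = s;              // each thread writes its own slot: no locking needed
            });
            workers[t].start();
        }

        double total = 0.0;
        for (int t = 0; t < nThreads; t++) {
            workers[t].join();                // wait for every worker before reading its result
            total += partial[t];
        }
        System.out.println("Sum = " + total);
    }
}

Even this toy shows the chores - partitioning, worker bookkeeping, and the join barrier - that land on the developer, and it only scales as far as the CPUs in the one box.
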
Developers and organizations recognize that ever-increasing business volumes will drive their already loaded SMP-based applications beyond the affordable expansion point. They are looking for ways to avoid this problem by distributing the individual program components across less expensive systems as part of a cluster and grid strategy.

So the question remains: how do you improve the response time of an application by distributing it to a compute farm? If the application has a high-volume, sub-second-response, transactional profile, we are in luck: WebLogic, BEA's robust clustering technology, can effectively distribute a highly transactional application across a cluster. But there is a large class of applications - often legacy of some sort, such as finance quant apps, engineering numerical-analysis solvers, life-science genome applications, or computational statistics - that must process large amounts of data and are compute-intensive. These still need a solution.

The Compute-Intensive Application
To discuss the distributed parallelization options, it is helpful to classify applications into stages based on the best-practice parallel distributed computing design patterns they fit. I've called them the Stage I and Stage II variants of distributed applications.

Stage I distributed application candidates can be described as "same instructions, different data" design patterns. Consider the search for extraterrestrial intelligence (the SETI@home project) or the search for large Mersenne prime numbers - both arguably on the far right of the compute-intensive spectrum. In both cases the same application needs to be executed again and again using small amounts of different data; hence the name "same instructions, different data." The only requirement for distributing the calculation is a job scheduler that spreads the calculation segments across computers. It sounds like a classic mainframe batch scheduler could do the job. No wonder IBM calls grid computing the next big thing - they have been providing the underlying services for years. From an application programming point of view, the grid is a vast number of transactions and jobs distributed by a scheduler.

These applications have one entry point and one exit point, and their work can be computed in parallel without sequential or data dependencies among the chunks. An application with these characteristics is referred to as "embarrassingly parallel."

A variant of the embarrassingly parallel application is a job or application that for historical reasons is being run as a sequential string of steps. These can easily be decomposed into smaller jobs and fall under the Stage I distribution umbrella. So all you need to distribute Stage I applications is to identify them, chunk them into parallel executable steps, and get a scheduler that deploys the "chunks" or steps. Several vendors, including Platform and Sun, provide such scheduling services.

While many applications fit the Stage I model, many others have input data and, usually, intermediate-result interdependencies. These are classified as Stage II distribution applications. You will recognize them by the fact that the sources of their resource consumption are embedded in loops with complex data-dependency requirements. In other words, you can't just chunk the application and expect it to run faster with the same results.

A few different algorithmic patterns immediately come to mind as Stage II distribution application candidates: serial nondependent, master-slave, binary non-recombining tree, simplex optimization algorithms, and so on.

Consider a simple binary-tree algorithm for finding the maximum of a sequence of numbers, as used in a sorting problem. Although it's a simple, commonly used algorithm, it exhibits the structural characteristics of Stage II distribution applications. You can devise a simple Web service that receives two numbers and returns the larger one. A simple master control program could spawn slave/worker services that execute the comparisons on a compute farm and return the final result to the parent/root node of the tree. Commercial tools for building such solutions include low-level parallel middleware (e.g., MPI and PVM) and a few higher-level paradigms that provide virtual shared memory or shared-process abstractions, such as JavaSpaces (a technology transfer of Linda's tuple spaces), GemFire from GemStone, and GigaSpaces. These offer generic supporting middleware services with which the programmer must be conversant in order to enable the distribution, communications, and data transfer.
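Leaving out the Web services plumbing, the shape of that master/slave tree can be sketched in a few lines of Java. The Callable below stands in for the hypothetical two-number Web service, and the thread pool stands in for the compute farm; none of this reflects any specific vendor API.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the master/slave binary-tree reduction described above. Each
// Callable stands in for the hypothetical two-number "maximum" service;
// the master reduces the list level by level, one slave call per pair.
public class TournamentMax {
    public static void main(String[] args) throws Exception {
        ExecutorService farm = Executors.newFixedThreadPool(8);  // stand-in for the compute farm
        List<Double> values = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) values.add(Math.random());

        while (values.size() > 1) {
            List<Future<Double>> round = new ArrayList<>();
            for (int i = 0; i + 1 < values.size(); i += 2) {
                final double a = values.get(i), b = values.get(i + 1);
                Callable<Double> slave = () -> Math.max(a, b);   // the two-number "service"
                round.add(farm.submit(slave));
            }
            List<Double> next = new ArrayList<>();
            for (Future<Double> f : round) next.add(f.get());
            if (values.size() % 2 == 1) next.add(values.get(values.size() - 1)); // odd one carries over
            values = next;
        }
        farm.shutdown();
        System.out.println("Maximum = " + values.get(0));
    }
}

In a real deployment each round's comparisons would be remote service calls, which is exactly where the data-movement cost discussed earlier starts to matter.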

A recent entry that approaches these challenges differently - focusing on the application rather than on the infrastructure - is ACCELLERANT from ASPEED Software. It offers the developer a high-level, algorithmic and computational pattern-aware interface that can be inserted into existing or new applications. This approach shields the application developer from having to deal with middleware and distribution expertise, while giving the resulting application the runtime services required to optimally manage execution across all instantiations.

An even more challenging variant of Stage II applications is one characterized as "impossible" to parallelize. Examples are step-wise iterative algorithms, where each step requires the result of the previous one. A simple example is the common summation technique for adding a sequence of numbers: you iterate through the sequence and at each step add the next number to a running tally. It turns out that even these Stage II variants can be recast to be handled like the easier Stage II algorithms; for example, a binary-tree master-slave algorithm can be applied to the summation problem above. This greatly simplifies the programming effort but obviously requires reverification, since the algorithm has been altered. Other advanced techniques such as genetic algorithms are also available, but their discussion does not belong in Web Services Journal - not until there is a Web services solution for them!
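As a sketch of that recasting, the divide-and-conquer summation below uses Java's fork/join framework as a local stand-in for the master/slave tree; the threshold and the data are illustrative only.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sketch of recasting the sequential running-tally summation as a
// binary-tree (divide-and-conquer) reduction, as discussed above.
public class TreeSum extends RecursiveTask<Double> {
    private static final int THRESHOLD = 10_000;   // below this, just loop sequentially
    private final double[] data;
    private final int from, to;

    TreeSum(double[] data, int from, int to) {
        this.data = data; this.from = from; this.to = to;
    }

    @Override
    protected Double compute() {
        if (to - from <= THRESHOLD) {
            double sum = 0.0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) >>> 1;
        TreeSum left = new TreeSum(data, from, mid);
        TreeSum right = new TreeSum(data, mid, to);
        left.fork();                                   // left subtree runs in parallel
        return right.compute() + left.join();          // combine the two subtree results
    }

    public static void main(String[] args) {
        double[] data = new double[5_000_000];
        for (int i = 0; i < data.length; i++) data[i] = 1.0;
        double sum = ForkJoinPool.commonPool().invoke(new TreeSum(data, 0, data.length));
        System.out.println("Sum = " + sum);
    }
}

Note that the result is the same but the order of additions is not, which is precisely why such a recast algorithm needs reverification (floating-point rounding, for instance, can differ slightly).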

Now let's move on to how Web services facilitate grid-enabling a complex, compute-intensive application and harnessing the computing power of an HPC center for fast pricing of a portfolio of callable bonds. Callable bond portfolio pricing was selected because it is representative of a class of particularly complex Monte Carlo simulations that yield greater accuracy.

Now is as good a time as any to shed some light on the celebrated Monte Carlo techniques. First, they are not just jargon to confuse the unwary; Monte Carlo is a real scientific tool. Monte Carlo techniques are counterintuitive in the sense that they use probabilities and random numbers to solve some very concrete real-life problems. Buffon's Needle is one of the oldest problems in geometrical probability tackled with Monte Carlo: a needle is thrown onto a lined sheet where the distance between the lines equals the length of the needle. Repeating the experiment many times yields an estimate of the number π (pi), with great accuracy I must say. You can design a Web service that executes ranges of millions of throws on a compute farm. The master control program (a server-side Web service) aggregates the experiments and serves you back the value of pi!
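The chunk-and-aggregate shape of that service can be sketched as follows. For brevity the sketch uses the simpler "dartboard" estimator of pi rather than Buffon's needle itself, and local worker tasks stand in for the Web service calls a real deployment would make.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadLocalRandom;

// Sketch of the chunk-and-aggregate Monte Carlo pattern described above.
// Each worker stands in for a service call that runs a range of random
// trials; the master aggregates the counts and computes the estimate.
public class MonteCarloPi {
    public static void main(String[] args) throws Exception {
        long throwsPerWorker = 5_000_000L;
        int workers = 8;
        ExecutorService farm = Executors.newFixedThreadPool(workers);

        List<Future<Long>> hits = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            hits.add(farm.submit(() -> {
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                long inside = 0;
                for (long i = 0; i < throwsPerWorker; i++) {
                    double x = rnd.nextDouble(), y = rnd.nextDouble();
                    if (x * x + y * y <= 1.0) inside++;   // point falls inside the quarter circle
                }
                return inside;
            }));
        }

        long inside = 0;
        for (Future<Long> f : hits) inside += f.get();    // the master aggregates the experiments
        farm.shutdown();
        double pi = 4.0 * inside / (throwsPerWorker * workers);
        System.out.println("Estimated pi = " + pi);
    }
}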

The Use Case Requirements
So let's talk about the use case designed and implemented at a buy-side financial services boutique. The business problem was to price a portfolio of callable bonds using Monte Carlo (MC) techniques. Think of a bond as a series of cash flows: you pay a price to buy it on its issue day; every so often, say six months, it pays a coupon back; and at maturity the bond repays its principal. Pricing a bond means assessing its value at any given time. One popular way of pricing is to run complex computational statistical techniques called Monte Carlo simulations. Callable bonds have the added complexity that they can be, well, called on any day before maturity. In order to anticipate the value of a hypothetical call, you must run additional "what if" scenarios. This happens to be one of the easier examples from a huge set of analysis, modeling, pricing, and risk-assessment applications that need to be distributed in order to run within very stringent time constraints at an affordable price.
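To give a feel for the per-path work each grid slave would perform, here is a deliberately simplified, single-threaded toy - emphatically not the firm's model: it ignores the call feature entirely, uses a crude random-walk short rate, and simply averages the discounted cash flows over many simulated paths.

import java.util.concurrent.ThreadLocalRandom;

// Toy sketch (not the firm's model) of Monte Carlo bond pricing: simulate
// many short-rate paths, discount the coupon and principal cash flows along
// each path, and average the discounted values across paths.
public class ToyMonteCarloBond {
    public static void main(String[] args) {
        double principal = 100.0;
        double coupon = 2.5;          // paid every period (e.g., semi-annually)
        int periods = 20;             // 10 years of semi-annual coupons
        double r0 = 0.04, vol = 0.01; // starting short rate and per-period volatility
        long paths = 200_000;

        double sum = 0.0;
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        for (long p = 0; p < paths; p++) {
            double r = r0, discount = 1.0, value = 0.0;
            for (int t = 1; t <= periods; t++) {
                r = Math.max(0.0, r + vol * rnd.nextGaussian());  // crude random-walk rate path
                discount /= (1.0 + r / 2.0);                      // semi-annual discounting
                value += coupon * discount;
                if (t == periods) value += principal * discount;  // principal repaid at maturity
            }
            sum += value;
        }
        System.out.printf("Estimated price: %.4f%n", sum / paths);
    }
}

In the distributed version, each slave runs a block of such paths and returns its partial sum; the callable feature simply multiplies the number of "what if" path blocks that have to be farmed out.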

The company had existing C++ legacy code implementing the pricing model. Some of the bond portfolio calculations could take over an hour on the existing hardware. The new business requirement was to make the response time less than 30 seconds. Given those needs and the current implementation, something had to give, and adding expensive cycles while reengineering the application was very risky - pardon the pun. The front-end GUI presented yet another challenge: the trading desk uses a new Java-based front-end trading system, but the sales desk primarily uses Excel spreadsheets for pricing, for simplicity and ease of use.

Web Services for Robust Shared Business Services
After assessing the business objectives and the technical constraints, the desired architecture was put in place (see Figure 1). A strategic decision was made to outsource the IT infrastructure to a commercial-strength High Performance Computing center. By lowering the cost of ownership and increasing availability, the client was able to harness on-demand computing using state-of-the-art equipment. With the appropriate SLA in place, incremental scalability turned out to be predictable and affordable.

A Web service interface provided the single API to the pricing engine and facilitated the shared business service for the two LOBs. Furthermore, it provided an elegant solution to the technology interoperability gap. An unforeseen benefit was the ability for a salesperson to invoke the Web service remotely from a laptop at a third-party office or the coffee shop nearby (see Figure 2).

The server side of the Web service is the master/control program, which starts the computational/slave units on the grid configuration using ASPEED's ACCELLERANT on-demand application servers and the HPC center's middleware fabric. Each slave component encapsulates the computational aspect of the portfolio pricing - the C++ legacy code (see Figure 3).

Every time the Web service is called, a number of slave calculations are fired on the grid. ACCELLERANT's on-demand server provides qualities of service such as dynamic load balancing, failover, and managed optimal response time.

Building the Client Web Service
BEA's 8.1 Platform technology provided the ideal environment for building the client Web service call.

1.  The first step was to create a Web service control. This was achieved simply by pointing at the published Web service URL followed by ?WSDL:

http://someWebServicesURL/Pricing.asmx?wsdl

2.  The WSDL file received was saved at a local project directory. Figure 4 shows the input definition, a segment of the WSDL file.

3.  Browse to the project and directory where the saved WSDL is located.

4.  Right-click the WSDL file and select Generate JCX from WSDL. The resulting JCX file is a Web service control, which can be used from the Java client trading system.

Figure 4 shows a simple XML input file that is sent to the Web service.
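For readers outside the BEA Workshop toolchain, a generated JCX control is not the only way to exercise the service. The framework-free Java sketch below posts a hand-built SOAP envelope to the pricing endpoint over plain HTTP; the operation name, namespace, and parameters are placeholders, since only a segment of the actual WSDL appears in Figure 4.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Framework-free sketch of invoking the pricing Web service over plain HTTP.
// This is NOT the generated JCX control; the SOAPAction, operation name, and
// envelope body are placeholders for whatever the real WSDL defines.
public class PricingClient {
    public static void main(String[] args) throws IOException {
        String endpoint = "http://someWebServicesURL/Pricing.asmx";
        String soapEnvelope =
            "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body>" +
            "<PricePortfolio xmlns=\"http://example.com/pricing\">" +   // placeholder operation
            "<portfolioId>DEMO-1</portfolioId>" +                       // placeholder parameter
            "</PricePortfolio>" +
            "</soap:Body>" +
            "</soap:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "\"http://example.com/pricing/PricePortfolio\"");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(soapEnvelope.getBytes(StandardCharsets.UTF_8));   // send the request envelope
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);     // raw SOAP response from the pricing engine
            }
        }
    }
}

A generated control or a WSDL-to-Java tool hides all of this, of course, but the sketch shows what actually travels over the wire between the client and the grid-backed pricing engine.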

Conclusion
In this article, I demonstrated how Web services play a critical role in two fundamental areas of distributed parallel computing: infrastructure middleware and application parallelization. I then defined the stages of application distribution: Stage I, embarrassingly parallel, and two variants of Stage II, complex interdependent and "impossible" to parallelize. I concluded with a case study of parallelizing a Stage II application using BEA's 8.1 platform and ACCELLERANT from ASPEED.

The computational grid and distributed parallel computing deliver substantial performance improvements today. While the standards are still evolving, practitioners are designing and implementing mission-critical applications, doing more at a faster rate in diverse commercial areas and enjoying a great competitive advantage. Financial services professionals can execute complex financial models and provide exotic products to their clients for higher profits, while meeting more stringent regulatory risk requirements and improving the bottom line through more efficient capital allocation. Pharmaceutical companies speed up preclinical and early clinical trials by a factor of five or more and gain FDA approvals faster. Manufacturers use fluid dynamics, executing on powerful compute farms and connecting designers via Web services, to deliver faster simulations and shorten new product life cycles while delivering better, cheaper, stronger products.

Web services play a pivotal role not only in the infrastructure back-end space, but also closer to the "final mile." I predict that in the next 18 to 24 months, as the product stack matures and bandwidth increases, Web services, dynamic business process choreography, and informal on-demand networks will be able to tap the idle power of massive compute farms - or even commuters' sleepy laptops - and deliver content on pervasive devices like never before. But remember, the grid is only as profitable as the applications you run on it.

Until then, get the grids crunching.

More Stories By Labro Dimitriou

Labro Dimitriou is a BPMS subject matter expert and grid computing advisor. He has been in the field of distributed computing, applied mathematics, and operations research for over 20 years, and has developed commercial software for trading, engineering, and geoscience. Labro has spent the last five years designing BPM-based business solutions.

Most Recent Comments
Joe Smith 02/23/05 01:52:31 AM EST

Agreed this article is a total waste. Wonder if the product is a waste as well.

Labro Dimitriou 02/16/05 09:34:33 AM EST

The intent of the article was to present how a client/practitioner used Web services to implement an HPC solution for improving the end-to-end performance of a critical application. Along the way I presented a high-level classification of the emerging parallel design patterns, with a few simple examples.

Strangler, I am sorry the article did not meet your expectations. In your mind, what is the important issue in grid computing? I will be more than happy to present my POV in a future article or on my blog.

Strangler 02/12/05 02:21:30 PM EST

This article tries to address an important issue in Grid Computing. But the article simply ends up throwing around ya-ya words. What a waste of time - for the author and myself.

DevOps is speeding towards the IT world like a freight train and the hype around it is deafening. There is no reason to be afraid of this change as it is the natural reaction to the agile movement that revolutionized development just a few years ago. By definition, DevOps is the natural alignment of IT performance to business profitability. The relevance of this has yet to be quantified but it has been suggested that the route to the CEO’s chair will come from the IT leaders that successfully ma...