
Why Workflow and BPM Suck

The verities and the balderdash: the impact of the cloud

I originally wrote this paper back in 2005 as a bit of a rant against the positioning of Workflow and BPM. I was reminded of it the other day and took another look, only to discover that things still haven’t changed that much. So I’ve decided to revamp it a bit to encompass cloudy type things and the impact that social media and the like have had in the ensuing years. So, for your amusement or edification, here’s a revised version.

Many of us who were involved in the field of Workflow Automation and Business Process Management (BPM) a few years ago (and some still are, I’m sure) argued long and hard about where the two technologies overlapped, where they differed, which mathematical models should be used, which standards were applicable to which part of the technology stack, and all that associated puff.

Well, those arguments and discussions are well and truly over and more or less forgotten; the demarcation lines were well defined and drawn; the road ahead became clear.

The fact that Business Process Management has its roots in Workflow technology is well known – many of today’s leading products are, in fact, evolutions of the original forms-processing packages. So there is no longer any need to debate what is now a moot point.

But what has happened is that BPM changed too. Rather than being an extension of workflow concepts, BPM came to be seen as a systems-to-systems technology, used almost exclusively in the deployment of concepts such as SOA solutions. I’m over-simplifying things, I know, but it seemed as though BPM was destined to become an IT technology solution as opposed to the business process solution it was meant to be. Somewhere along the way, one of the key elements in a business process – a person – dropped off the agenda. The fact that the majority of business processes (some 85% according to some very old Forrester research) involve carbon-based resources was overlooked – think of BPEL for a moment – doesn’t the attempt to develop that particular standard tell you something about the general direction of BPM? But be warned: even today, many vendors will tell you that their BPM products support human interaction, but what they are talking about is simple work-item handling and form filling – a long way from the collaboration and interaction management we will talk about below.

The problem stems from the fact that most Workflow products were flawed, and as a result the flaw in the gene pool rippled through to the evolved BPM species. So what was wrong with workflow? It’s quite simple when you think about it: most workflow products assumed that work moved in a fixed sequence from one resource to another. One user entered the loan details, another approved it. But business doesn’t work like that.
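To make that flawed assumption concrete, here is a minimal sketch (in Python; the class, function and step names are hypothetical, invented for illustration rather than taken from any real workflow product) of the rigid “one resource to the next” model those products baked in:

```python
# A hypothetical sketch of the classic workflow assumption:
# work moves in a fixed sequence from one resource to the next.

from dataclasses import dataclass, field

@dataclass
class WorkItem:
    data: dict
    history: list = field(default_factory=list)

def enter_loan_details(item: WorkItem) -> WorkItem:
    item.history.append("clerk: entered loan details")
    return item

def approve_loan(item: WorkItem) -> WorkItem:
    item.history.append("manager: approved loan")
    return item

# The process *is* the pipeline: a to b to c, and nothing else.
RIGID_PROCESS = [enter_loan_details, approve_loan]

def run(item: WorkItem) -> WorkItem:
    for step in RIGID_PROCESS:
        item = step(item)  # no provision for detours, consultation or rework
    return item
```

Anything outside the fixed pipeline – a query back to the customer, a second opinion, a detour – simply has nowhere to go.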

This flawed thinking is probably the main reason why workflow was never quite the success most pundits thought it would be; the solutions were just not flexible enough, since the majority of processes are unsuited to this way of working. Paradoxically, it is exactly why BPM is so suited to the world of SOA and systems-to-systems processes: a rigid approach is essential for systems processes, but where people are concerned, the name of the game is flexibility.

Why do we need the flexibility?

Let’s take a simple analogy so that the concept is more easily understood.

Suppose you were playing golf: using the BPM approach would be like hitting a hole in one every time you teed off. Impressive – 18 shots, and a round finished in 25 minutes.

But as we all know, the reality is somewhat different (well, my golf is different) – a lot happens between teeing off and finishing a hole. Ideally about four shots (think nodes in a process) – but you have to deal with the unexpected, even though you know the unexpected is very likely: sand traps, water hazards, lost balls, free drops, collaboration with fellow players, unexpected consultation with the referee – and so it goes on. Then there are 17 more holes to play – the result is an intricate and complex process with 18 targets but about 72 operations.

As mentioned earlier, we have to deal with the unexpected. This is not just about using a set of tools to deal with every anticipated business outcome or rule; we are talking about the management of true interaction between individuals and groups, which cannot be predicted or encapsulated beforehand. This is because business processes exist at two levels – the predictable (the systems) and the unpredictable (the people).
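As a rough illustration of those two levels (again in Python, with purely hypothetical names – a sketch of the idea, not any vendor’s API), contrast the rigid pipeline above with a process that can absorb unplanned, carbon-level interactions while it is in flight:

```python
# Hypothetical sketch: a process that tolerates unplanned human interaction.
# The planned steps are known up front; interactions arrive unpredictably.

from collections import deque

class FlexibleProcess:
    def __init__(self, planned_steps):
        self.queue = deque(planned_steps)  # the predictable, silicon level
        self.log = []

    def interject(self, interaction):
        # An unstructured, carbon-level event – a consultation, a ruling,
        # a negotiation – not modelled in advance, yet absorbed mid-process.
        self.queue.appendleft(interaction)

    def run_next(self):
        if self.queue:
            self.log.append(self.queue.popleft()())

# Planned: tee off, approach, putt. Unplanned: a ruling from the referee.
p = FlexibleProcess([lambda: "tee off", lambda: "approach shot", lambda: "putt"])
p.run_next()
p.interject(lambda: "consult the referee about a free drop")
p.run_next()  # the unplanned interaction runs before the next planned shot
print(p.log)  # ['tee off', 'consult the referee about a free drop']
```

The planned steps stand for the predictable level; interject() stands for the unpredictable one – and it is the second level that workflow and BPM products never properly catered for.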

The predictable aspects of the process are easily and well catered for by BPMS solutions – which is why the term Business Process Management is a misnomer, since the perceived technology only addresses the integration aspects. Given the close coupling with SOA (SOA needs BPM; the converse is not true), there is still an argument for renaming BPM to Services Process Management (SPM).

Proposals such as BPEL4People didn’t fix the problem either; all they managed to achieve was to replicate the shortcomings of Workflow. Anyone who has tried to put together a business case for buying SOA/BPM will know that the entire proposition is a non-starter.

Understanding that business processes exist at two levels (the silicon and the carbon) takes us a long way towards understanding how we solve this problem. The key point is to recognize that the unpredictable actions of the carbon components are not ad-hoc processes, nor are they exception handling (ask anyone with a Six Sigma background about exceptions and you’ll understand very quickly what I mean). This is all about the unstructured interactions between people – in particular, knowledge workers. These unstructured and unpredictable interactions can, and do, take place all the time – and it’s only going to get worse! The advent of social networking, SaaS and the like is already having, and will continue to have, a profound effect on the way we manage and do business.

Process-based technology that understands the needs of people and supports the inherent “spontaneity” of the human mind is the next logical step, and we might be tempted to name this potential paradigm shift a “Business Operations Platform”. [1]

But what makes a BOP different from what’s gone before?

One of the key innovations (and there are many) is the collaborative nature of the platform. At last there is an environment that allows, encourages even, the business world and the technology world to align. Given that the business process is where these two worlds collide, the BOP becomes the place where they can achieve the most in terms of collaborative development and common understanding, eliminating decades of misunderstanding. The Business Operations Platform does six main jobs:


  1. Puts existing and new application software under the direct control of business managers.
  2. Facilitates communication between business and IT.
  3. Makes it easier for the business to improve existing processes and create new ones.
  4. Enables the automation of processes across the entire organization, and beyond it.
  5. Gives managers real-time information on the performance of processes.
  6. Allows organizations to take full advantage of new computing services.

Unlike early BPM offerings that were stitched together from fragments of technologies past, a BOP must be built on a standards-based, modern architecture. With a service-oriented architecture (SOA) and full BPM capabilities, companies can create a complete business operations environment that drives innovation, efficiency and agility for their enterprise. It must be cloud-enabled and capable of being deployed as BPMaaS. It is the BOP that sets “enterprise cloud computing” apart from “consumer cloud computing”.

So why does workflow suck? It sucks because it made the fatal assumption that a business process could be modelled simply as “a to b to c” – but business, as we all know, doesn’t quite work like that. BPM succeeds because the heritage of these products is in the workflow world – but BPM sucks as well, because it ignores the requirement to include people.

Jon Pyke

[1] Since I wrote this paper, Gartner has coined the term “Intelligent BPM” – but that begs the question of what went before: “Stupid BPM”? So I’ll use BOP, if that is OK with you, the reader.

