Why Workflow and BPM Suck

The verities and the balderdash: the impact of the cloud

I originally wrote this paper back in 2005 as a bit of a rant against the positioning of Workflow and BPM. I was reminded of it the other day and took another look, only to discover that things still haven't changed that much. So I've decided to revamp it a bit to encompass cloudy type things and the impact that social media and the like have had in the ensuing years. So, for your amusement or edification, here's a revised version.

Many of us who were involved in the field of Workflow Automation and Business Process Management (BPM) a few years ago (and some still are, I'm sure) argued long and hard about where the two technologies overlapped, where they were different, which mathematical models should be used, which standards were applicable to which part of the technology stack, and all that associated puff.

Well, these arguments and discussions are well and truly over and more or less forgotten; the demarcation lines were well defined and drawn; the road ahead became clear.

The fact that Business Process Management has its roots in Workflow technology is well known; many of today's leading products are, in fact, evolutions of the original forms-processing packages. So there is no longer a need to debate what is now a moot point.

But what has happened is that BPM also changed. Rather than being an extension of workflow concepts, BPM came to be seen as a systems-to-systems technology used exclusively in the deployment of concepts such as SOA solutions. I'm over-simplifying things, I know, but it seemed as though BPM was destined to become an IT technology solution as opposed to the business process solution it was meant to be. Somewhere along the way, one of the key elements in a business process, a person, dropped off the agenda. The fact that the majority of business processes (some 85% according to some very old Forrester research) involve carbon-based resources was overlooked. Think BPEL for a moment: doesn't the attempt to develop that particular standard tell you something about the general direction of BPM? But be warned: even today, many vendors will tell you that their BPM products support human interaction, but what they are talking about is simple work-item handling and form filling, which is a long way from the collaboration and interaction management we will talk about below.

The problem stems from the fact that most Workflow products were flawed, and as a result the problem in the gene pool rippled through to the evolved BPM species. So what was wrong with workflow? It's quite simple when you think about it: most workflow products assumed that work moved from one resource to another. One user entered the loan details, another approved it. But business doesn't work like that.
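To make that assumption concrete, here is a deliberately simple, hypothetical sketch (written in Python purely for illustration, not modelled on any vendor's engine) of the rigid "a to b to c" routing that most workflow products baked in; the step and role names are invented:

```python
# Hypothetical sketch of the rigid routing model most workflow products
# assumed: work moves strictly from one named resource to the next, with
# no room for unplanned collaboration along the way.

LOAN_APPROVAL_STEPS = [
    ("enter_loan_details", "loan_officer"),
    ("approve_loan", "credit_manager"),
    ("notify_customer", "back_office"),
]

def run_loan_process(case_id: str) -> None:
    """Push the case through each step in a fixed order."""
    for task, role in LOAN_APPROVAL_STEPS:
        print(f"case {case_id}: routing task '{task}' to role '{role}'")
        # The engine waits for the step to complete, then moves on.
        # There is no way to insert an unplanned conversation, a second
        # opinion, or a negotiation between the two roles.

if __name__ == "__main__":
    run_loan_process("LN-0001")
```

The point is not the code itself but its shape: the route is fixed at design time, so any deviation has to be handled outside the process.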

This flawed thinking is probably the main reason why workflow was never quite the success most pundits thought it would be; the solutions were just not flexible enough, since the majority of processes are unsuited to this way of working. Paradoxically, it is the exact reason why BPM is so suited to the world of SOA and systems-to-systems processes. A rigid approach is essential for systems processes; where people are concerned, the name of the game is flexibility.

Why do we need the flexibility?

Let’s take a simple analogy so that the concept is more easily understood.

Supposing you were playing golf; using the BPM approach would be like hitting a hole in one every time you tee off. Impressive – 18 shots, and a round finished in 25 minutes.

But as we all know, the reality is somewhat different (well, my golf is different): there's a lot that happens between teeing off and finishing a hole. Ideally about four shots (think nodes in a process), but you have to deal with the unexpected even though you know the unexpected is very likely: sand traps, water hazards, lost balls, free drops, collaboration with fellow players, unexpected consultation with the referee, and so it goes on. Then there are 17 more holes to do; the result is an intricate and complex process with 18 targets but about 72 operations.

As mentioned earlier, we have to deal with the unexpected. This is not just about using a set of tools to deal with every anticipated business outcome or rule; we are talking about the management of true interaction that takes place between individuals and groups, which cannot be predicted or encapsulated beforehand. This is because business processes exist at two levels: the predictable (the systems) and the unpredictable (the people).

The predictable aspects of the process are easily and well catered for by BPMS solutions, which is why the term Business Process Management is a misnomer: the technology as perceived only addresses the integration aspects. Given the close coupling with SOA (SOA needs BPM; the converse is not true), there is still an argument for renaming BPM to Services Process Management (SPM).

Proposals such as BPEL4People didn't fix the problem either; all they managed to achieve was to replicate the shortcomings of Workflow. Anyone who has tried to put together a business case for buying SOA/BPM will know that the entire proposition is a non-starter.

Understanding that business processes exist at two levels (the Silicon and the Carbon) takes us a long way towards understanding how we solve this problem. The key point is to recognize that the unpredictable actions of the carbon components are not ad-hoc processes, nor are they exception handling (ask anyone with a Six Sigma background about exceptions and you'll understand very quickly what I mean). This is all about the unstructured interactions between people, in particular knowledge workers. These unstructured and unpredictable interactions can, and do, take place all the time, and it's only going to get worse! The advent of social networking, SaaS and the like is already having, and will continue to have, a profound effect on the way we manage and do business.
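To make the Silicon/Carbon distinction concrete, here is a minimal, purely illustrative sketch (the class and method names are invented for this example, not taken from any product) of a case that carries both a structured, predictable step list and a record of the unstructured interactions that attach themselves to it along the way:

```python
# Illustrative sketch only: the structured ("silicon") steps are modelled
# up front, while unstructured ("carbon") interactions are attached to the
# live case as they happen, rather than being forced into predefined
# exception paths.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Interaction:
    participants: List[str]
    note: str                      # e.g. a chat thread, a shared document, a call

@dataclass
class Case:
    case_id: str
    structured_steps: List[str]                      # the predictable part
    interactions: List[Interaction] = field(default_factory=list)

    def add_interaction(self, participants: List[str], note: str) -> None:
        """Record a spontaneous collaboration without redesigning the process."""
        self.interactions.append(Interaction(participants, note))

case = Case("LN-0001", ["enter_loan_details", "approve_loan", "notify_customer"])
case.add_interaction(["loan_officer", "credit_manager"],
                     "Discussed a non-standard income source before approval")
```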

Process-based technology that understands the needs of people and supports the inherent “spontaneity” of the human mind is the next logical step, and we might be tempted to name this potential paradigm shift “A Business Operations Platform”. [1]

But what makes a BOP different from what’s gone before?

One of the key innovations (and there are many) is the collaborative nature of the platform. At last there is an environment that allows, encourages even, the business world and the technology world to align. Given that the business process is where these two worlds collide, the BOP becomes the place where they can achieve the most in terms of collaborative development and common understanding, eliminating decades of misunderstanding. The Business Operations Platform does six main jobs.

It:

  1. Puts existing and new application software under the direct control of business managers.
  2. Facilitates communication between business and IT.
  3. Makes it easier for the business to improve existing processes and create new ones.
  4. Enables the automation of processes across the entire organization, and beyond it.
  5. Gives managers real-time information on the performance of processes.
  6. Allows organizations to take full advantage of new computing services.

Unlike early BPM offerings that were stitched together from fragments of technologies past, a BOP must be built on a modern, standards-based architecture. With a service-oriented architecture (SOA) and full BPM capabilities, companies can create a complete business operations environment that can drive innovation, efficiency and agility for their enterprise. It must be cloud enabled and capable of being deployed as BPM as a Service (BPMaaS). It is the BOP that sets “enterprise cloud computing” apart from “consumer cloud computing”.
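As a rough illustration of the BPMaaS idea, here is a small, hypothetical sketch that puts a process behind a plain HTTP endpoint so that other applications or cloud services can start cases; it uses only the Python standard library, and the endpoint path and payload shape are invented for the example:

```python
# Hypothetical sketch of "BPM as a Service": the process engine sits behind
# an ordinary HTTP endpoint, so callers start cases without knowing anything
# about the engine itself.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProcessAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path == "/processes/loan-approval/start":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            # In a real platform this would hand the case to the process
            # engine; here we simply acknowledge the request.
            body = json.dumps({"case_id": "LN-0001", "input": payload}).encode()
            self.send_response(201)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), ProcessAPI).serve_forever()
```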

So why does workflow suck? It sucks because it made the fatal assumption that a business process could simply be modelled as “a to b to c”, but business, as we all know, doesn't quite work like that. BPM succeeds because the heritage of these products is in the workflow world, but BPM sucks as well because it ignores the requirement to include people.

Jon Pyke


[1] Since I wrote this paper, Gartner coined the term “Intelligent BPM”, but that begs the question as to what went before: “Stupid BPM”? So I'll use BOP, if that is OK with you, the reader.

