Cases Managed The World Over

June 21, 2009

A recent spate of interest in Case Management is good to see (I always called it Case Handling but the concepts are the same). The OMG is about to vote on an RFP (Request for Proposal for a new standard) on Case Management. As some of my regular readers will realize, I have a special interest in the subject of Case Management.

Some of us have been talking about the problems of dealing with Cases for a number of years (e.g. this post in 2007). My own experience began with the development of what would today be called Case Management systems in the mid-80s, culminating in an Oracle-based object-oriented repository in the early 90s – too clever by half for that era, so I canned it in 1992 and became an Analyst looking at other people's products, writing white papers, etc. Since then I have put together a number of white papers specifically addressing the issues of Case Management:

  • In 1996, I wrote a paper called “The Business Case for Case Handling”; although the vendors I referenced at the time have since disappeared (bought out), the issues are just as relevant today.
  • Over the last few years, I published a couple of papers that start to explore some of the related issues (these papers are available on BPM Focus with registration). In particular, these two papers address many of the core concepts of Case Management.
    • “Process Innovation and Corporate Agility – Balancing Efficiency and Adaptability in a Knowledge-Centric World”
    • “Business Processes and Customers – Difficult Domains to Integrate”

So with this post, I am having my own stab at defining the issue. I have been invited several times by those in the OMG to take part in this particular enquiry, but hesitate to get involved as these things can act as an enormous time sink. So first let me point you at some other perspectives. In recent months, we have seen several bloggers discussing some of these ideas (touching on the need for adaptability and agility):

  • Jim Sinur of Gartner talks of “… Agile processes that are tapped into emerging events and contexts driven by organizational and community goals … the need for creating and managing unstructured processes. This kind of environment requires organizations and vendors to master goal driven processes.” In another post he said “Today most processes are Flow directed, but the future will likely require goal direction for at least a portion of the process. This is what we call unstructured processes that are composed of process snippets that are flow directed and portions that are completely dynamic. A combo looks to be the way forward.” See here, here and here.
  • Neeli Basanth Kumar (of Cordys) talks of Process Patterns in Adopting Case Based Solutions (he even uses one of my diagrams from a ’97 paper – The Workware Evaluation Framework … where I tried to highlight the role of Case Management).
  • A discussion paper put out by Dennis Byron at ebizQ provides a sort of summary of some of the thinking of the vendors that replied to his request for information (originally it contained references to my thoughts and some of my graphics but that content was pulled after I pointed out the provenance). 
  • Bruce Silver commented on the RFP discussion going on at the OMG and postulated what he sees as the difference between “traditional BPM” and “Case Management”.

For me, it all comes back to the continuum of Process. On the one hand, we have the image of the organization as machine, with mechanistic “Procedures” used to control the work of the resources available. Most BPM initiatives are still stuck at this level, seeking to automate things and remove (human) resources from the equation. If Productivity = Value / Resources, this reductionist approach is all about reducing the resources involved in the delivery of a given value.

At the other end of the spectrum, we could consider processes as more like evolving “Practices.” Think of what you do personally and see if this concept makes sense – some parts of what you do are defined in Standard Operating Procedures, while other parts you interpret and apply your judgment to. The more leeway you have to make decisions in your job, the more knowledge you exercise in carrying it out. Most knowledge workers are goal oriented, regarding procedures as a means to an end rather than an end in themselves – and managers even more so.

We could think of high level processes as being about a “Purpose” – and how that Purpose is interpreted will inevitably fall somewhere on that spectrum between Procedure and Practice. Indeed, one finds that most business problems need a combination of both – hence the approach that has become known as Case Management. Now that we’ve got this concept established, from a process perspective you could think of Case Management as applying to the Practices end of the spectrum. Workers here are goal oriented, and typically apply processes to achieve those goals.

Case Management is a very important “Design Pattern” for supporting flexible work Practices instead of more rigorous Procedures (where no adaptability or run time flexibility is needed). Different products take vastly different approaches to Case Management, all with the aim of providing flexibility and adaptability to the user, yet still providing support for the organizational objectives (processing more work, more efficiently and providing traceability).

Case Management offers a way of mixing overarching support for that Purpose – normally through a high-level outline procedure – with a library of process fragments that can be bound into the parent as determined by the user. In some cases, the user has complete control over what should happen next; in others, the ability to progress from one phase of that high-level outline to the next is constrained in some way. Some products leave it very loosey goosey, others are all about constraining the user. Depending on how strong the need for adaptability is (in the target domain), the user may even have the ability to develop new process fragments to support a given need (imagine a Software Engineering project … it’s not always possible to predict every possible permutation). In other domains (a Bank, say), it might be good enough to have the user select from a library of available sub-processes to bind to the parent. Indeed, in a Bank or Insurance company, it is unlikely that run time adaptability would be allowed (the last thing you need is a clerk getting creative with a bank draft).
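The pattern above can be sketched in a few lines of Python. This is purely illustrative – the class names, the `allow_ad_hoc` flag and the fragment library are my own assumptions, not any particular product’s API: a case carries a high-level outline, and fragments are bound to it at run time, either from a pre-approved library or (where the domain permits) ad hoc.

```python
# Illustrative sketch only: a Case carries a high-level outline procedure,
# and workers bind process fragments to it at run time -- either from a
# pre-approved library, or (where the domain allows it) ones they define.
from dataclasses import dataclass, field

@dataclass
class Fragment:
    name: str
    steps: list

@dataclass
class Case:
    case_id: str
    phases: list                     # the high-level outline procedure
    allow_ad_hoc: bool = False       # e.g. False for a bank, True for a software project
    library: dict = field(default_factory=dict)
    bound: list = field(default_factory=list)

    def bind_from_library(self, name: str) -> Fragment:
        frag = self.library[name]    # only pre-approved fragments
        self.bound.append(frag)
        return frag

    def bind_ad_hoc(self, frag: Fragment) -> Fragment:
        if not self.allow_ad_hoc:
            raise PermissionError("ad hoc fragments not permitted in this domain")
        self.bound.append(frag)
        return frag

# A constrained, bank-style case: library selection only, no run time invention.
library = {"kyc-check": Fragment("kyc-check", ["request docs", "verify identity"])}
case = Case("C-1001", ["Open", "Assess", "Close"], allow_ad_hoc=False, library=library)
case.bind_from_library("kyc-check")
print([f.name for f in case.bound])  # expected: ['kyc-check']
```

The single `allow_ad_hoc` switch is, of course, a caricature of the spectrum from loosey goosey to locked down, but it captures the design decision each product has to make.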

With careful architectural design, it is possible to create a Case Management environment out of many different BPMS products. But that implies that the end-user organization already has a clear idea of how such environments are constructed. In a sense, they create an application layer above the BPM Suite.

My concern with the OMG RFP is that it is trying to standardize something that is poorly understood (as evidenced by the varying perspectives given in the OMG BMI mailing list). 

While the rest of this document goes on to outline my own views on Case Management, I believe that developing a standard in this area at this point would only result in hampering innovation. Having said that – there is a definite need for much more discussion and exploration of the domain of Case Management.

I believe that the approach proffered by Cordys represents just one way of approaching Case Management. There are others, and I do not believe that tying everyone down to one interpretation of Case Management at this point will be a good thing in the industry. Other vendors with Case Management approaches include:

  • Singularity
  • Cordys
  • Global 360
  • Sword (was Graham Technology)
  • Itensil
  • TIBCO
  • EMC Documentum
  • IBM (FileNet)
  • BizAgi
  • Pallas Athena
  • Pega
  • Polymita
  • HandySoft

All these vendors have some sort of capability that could be described as Case Management (and I am sure there are plenty of others that would put themselves on the list).

Finally, those in the Process technology world are starting to see that a pure “one size fits all” approach to the standardization of process definitions is entirely inappropriate when it comes to the needs of humans and knowledge workers. Moreover, Case Management approaches provide all sorts of benefits to companies in that they enable a far more flexible response to the needs of customers.

As different vendors struggle to work out the best approach, the last thing they need is to be tied down to a “standard” approach. In the end, it will be the Darwinian process of selection that will see the best products win out; not some imagined need for standardization and interoperation between wholly different approaches to the problem. 

Notes

Most BPM efforts could be characterized by their incessant focus on process standardization. They are predicated on the assumption that overall business effectiveness improves through better control. And while this is true for procedural, back-office problems, the reality is that customer facing and knowledge work processes are extremely difficult to standardize (if not impossible).

This is a real problem for long term BPM adoption. Ask yourself how many organizations you know in the BPM arena that have more than 5 or 10 processes “under management” (i.e. using a BPM Suite to ensure that things don’t fall through the cracks). And then think about how many spreadsheets are used in those same organizations to coordinate work.

In the paper “Customers and Business Processes – Difficult Domains to Integrate” I suggested that there were several different types of Case Management (Case Handling). These range from the traditional BPM Suite (which struggles to support the necessary adaptability), through what I called “Design Time Case Handling” on to “Run Time Case Handling”.

Case Management and BPM

The vast majority of BPM Suites and Workflow tools assume that all activities/tasks/steps (and the potential paths through them), are modeled a priori (beforehand). Putting that another way, they focus on driving work between employees based on a model that maintains the status of a work item. The process model must exist up front, which presents the first hurdle of process discovery—i.e. ensuring those models are “correct.”

Further, in most products, all work of a given type shares a common process description (rather than copying the model to support each work item). In such situations, the engine will normally allow the end-user little or no choice but to follow the pre-defined procedure.

Of course, the challenge is then for process modelers to predict all possible permutations in advance—something that is virtually impossible to achieve in customer facing situations. To get around show-stopping scenarios, a few products incorporate features that provide the ability to bypass, redo, and rollback steps, while most rely on re-assignment of work to the supervisor (who must then step outside the system to resolve the problem). It does not take long before the supervisor becomes the bottleneck as cases mount up (those that do not follow the “happy path”).

Change is only possible through re-development of the common process model. New items of work then follow the modified process description (most products incorporate some form of version control). Changing an individual work item normally requires the deletion of all threads of work; the work item is then recreated under the new model (compromising any future audit). Alternatively, mechanisms are needed to move an existing case to the new model.

These adaptability issues are not confined to customer facing scenarios. For example, as government regulations change, the firm needs to revamp its process models to handle that change. There might be thousands of cases in the system, the vast majority of which will complete before the new regulations come into force. But imagine that there are still 100 cases outstanding at the point the new regulation comes into effect. For most products, it would simply be impossible to handle this problem in any sort of constructive fashion. Each of those work items would have to be manually stopped, and then restarted (somehow) under the new process definition that met the new government regulations. The only viable way of approaching the problem is to incorporate mechanisms to migrate individual instances to the new model.
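As a hedged sketch of what such a migration mechanism might look like (the names and structure here are my own invention, not any vendor’s API), each in-flight instance carries its version and current step, and migration maps that step onto the new model while preserving the audit history rather than deleting and recreating the work item:

```python
# Illustrative sketch: migrate an in-flight instance to a new process
# version instead of stopping and recreating it (which would compromise
# the audit trail). Names and structure are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Instance:
    case_id: str
    version: int
    current_step: str
    history: list = field(default_factory=list)

def migrate(instance, new_version, new_steps, step_map=None):
    """Move an in-flight instance onto a new process version.

    step_map handles renamed steps; history records the migration so the
    audit trail survives the version change.
    """
    step_map = step_map or {}
    target = step_map.get(instance.current_step, instance.current_step)
    if target not in new_steps:
        raise ValueError(f"no migration path for step {instance.current_step!r}")
    instance.history.append(("migrated", instance.version, new_version))
    instance.version = new_version
    instance.current_step = target
    return instance

# One of the 100 outstanding cases moves to v2; a renamed step is mapped.
inst = Instance("C-42", version=1, current_step="credit-check")
migrate(inst, 2,
        new_steps={"intake", "enhanced-credit-check", "approve"},
        step_map={"credit-check": "enhanced-credit-check"})
print(inst.version, inst.current_step)  # expected: 2 enhanced-credit-check
```

Real products that support this have to cope with far more (parallel threads, in-flight data schema changes), but the essential idea is this per-instance mapping onto the new definition.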

For Case Handling support, the key differentiating factor (of the BPM Suite) is the ability to link multiple processes to a given case of work—the primacy is with the case of work, rather than the processes that are used to support it. Each case is usually “managed” by a relatively loose (high-level) parent procedure, but the worker can then add new procedural fragments to handle each different requirement of the work in hand. Effectively, the user is binding new procedural fragments to the case at run time; either by selecting them from a library, or by developing new ones.

Of course, this sort of approach is reliant on a BPMS that can facilitate such modifications to work in flight. For most products, it will also require great care in the design of the process architecture itself, and may involve the development of an external application.

(Some of these thoughts have been culled from my past White Papers on the subject of Case Handling)


Appian Anywhere SaaS Case Study

April 19, 2009

As many of you know, I have been pressing the pedal to the metal on BPM in the cloud. So, I was interested to see the Appian Webinar last week (available on demand here). Some of you will remember that I developed a good chunk of process training for them to support users on the Appian Anywhere platform.

The session opened with the usual intro (not from me for once), pointing to all the usual reasons for doing BPM – efficiency, retain customers, compliance, etc. However, I was impatient to get to the SaaS opportunity, as I believe BPM delivered in the cloud enables a whole range of new possibilities.

Following the vendor neutral intro from Information Week, Samir Gulati, Appian’s VP of Marketing, talked about their offering. Appian Anywhere has the same code base as the Appian Enterprise product, but deployed in the cloud and available on demand. AA has been out there for nearly 18 months or so (although I think it was only officially released earlier this year). It’s available in the Premium Edition, where you get your own dedicated server, SAS-70 Type II compliance, etc., and the Standard Edition, where you get shared (multi-tenant) access to an Amazon EC2-based service. From a pricing point of view, the Standard Edition is $35 per user per month. And while you are in evaluation mode, Appian provide a “Process Coach” to help get you up and running.

Samir talked about a few example customers of Appian Anywhere: ManuLife, who are using it to support their marketing function, driving better resource usage, managing interfaces to their third party suppliers, etc. The second example was Starbucks who are using AA to manage and track localised promotions, enabling visibility into what is going on.

The main part of the webinar was a case study delivered by John Cowles, Director of Operational Efficiency at Clayton Holdings. They are primarily involved in Credit Risk and Due Diligence work for the Mortgage industry (they don’t own assets, just provide services to others). Their customers are the Banks and Mortgage issuers. They could see the financial industry going in a downward spiral, and felt that things would get tighter. They realized that they needed to get better control of their own processes, and start monitoring/managing process performance. They also felt it was taking too long to get people up and running effectively.

Being a smaller company, they were a little nervous about getting into the BPM space, so thought they would try out the SaaS option. For them, it was a low-risk, low-cost way to get started (John’s estimate was that it was only about 10-20% of the cost associated with buying a BPMS and installing in-house). And with limited IT availability, they felt the SaaS option represented the best way forward. They started zeroing in on Appian because they had the SaaS capability (at the time they were the only one out there with an On Demand offering). He liked their tenacity and the flexibility of the tool.

Once their instance was set up, it took just 6 weeks to get their first process up and running. He started by developing a baseline of their current operation (always a sensible move). In his words, “you can’t improve anything unless you know where you started. You could be wasting your time focusing on the wrong things.” Overall, he was looking for a 30% improvement in efficiency, while also seeking to reduce the variability in the way work was carried out. They followed a DMAIC approach, but it was a BPM project (not 6Σ). They were lucky in that they had the active involvement of a lot of senior business people (Executive Steering Group). Yet, their development and implementation team was small (2 people).

They have been doing a new release every month since they got their initial processes up and running. And ever since that first release, the business people themselves have developed more and more ideas on what they want to do. Initially, they actively avoided doing any integration at all, so it has only been very recently that they needed to involve IT.

I am not sure, but I think they are now addressing two areas of the business. John talked of having over 30 processes, with a lot of interdependencies between those processes; so I am guessing he was referring to the various sub-processes and chained processes that support the domain. 

From a results point of view – they are now doing more with less. He cited a new operation in a remote city where they had thought they would need 14 people to do a particular role, now they get by with just three.

For a case study, I thought it was a good one. It was good to hear someone really getting into the lessons learnt.

  • John quite rightly pointed to the need to “Focus on Change Management and Process Management early on … We had to prioritize, needed to step back and look at the bigger picture.” I found myself thinking that we could have had some interesting discussion over the Process Portfolio Management techniques that we have been working on with a Center for BPM Partner.
  • His second point was, I think, a good one: “Limited or no system integration in the first release” … indeed, they left integration alone for nearly 6 months before they got into it.
  • “Prototype everything” … sit down, work with them in design mode, and see what that looks like, prototyping all the time.

There were others, but those were the things that stood out for me. John also talked to the need for Process Visibility … “Need to step back and look at the metrics at a high level and then focus down on the critical areas … treat it like a compass.” I liked that last phrase as it gives a sense of what Process Visibility should really mean to managers.

As a BPM Case Study, I thought the session was a good one. However, I think it would have been even better if he had covered the game-changing capabilities of a SaaS delivered BPM solution in support of the process across the wider value chain. I think many managers are still stuck in the mode of optimizing their own processes rather than looking for the opportunities to support the wider problem. It’s a bit like stove pipes inside an organisation … but here, I am getting at the opportunities to radically improve the value chain, through the comprehensive integration of all the actors involved. Having said that, I am sure John is already thinking about the opportunities to deliver this sort of innovation.


The Elephant In The Room

April 11, 2009

On Thursday, I was chatting with Jim Sinur about the big issues affecting our industry. I suggested that what we need at the next Summit was to get people to start seeing these big issues … the ones that BPM is only tangentially starting to address. So the idea was to have a combative session where we bounce around the big things that seem to go largely unmentioned – the major trends and big problems associated with the long-term success of BPM. So here are a few that I proffered on the call (you’ll notice these ideas are sort of related).

  1. Organizations are struggling to drive wider adoption of process management. The average number of Processes under Management in a typical BPM site is probably 5 or less; I think we would all agree that more than 10 is unusual. While there may be some cases of organizations with over 20 or even 50, the reality is that there is little widespread adoption inside your average large organization (although some are starting to grapple with that problem). Why is that? Because the effort required to standardize the processes and deal with all the data and artifacts is just too great. Yet at the same time, the number of spreadsheets used to coordinate work in those same organizations is numbered in the thousands (or at least hundreds). Every one of those spreadsheets represents an opportunity to manage and improve organizational performance. There are lots of issues associated with the broad-based adoption I am suggesting here … education and training are key.
  2. The emergence of Case Handling as a dominant design approach in BPM. Today, too much emphasis is placed on the purely transactional, procedural end of the process spectrum, and not enough on the needs of knowledge workers. If you like, we need the same rigor that was applied to ERP implementation now applied to all the other stuff in the firm.
  3. Value chain optimization trumps behind-the-firewall efficiency! Some organizations are experimenting with processes that span organizational boundaries (i.e. outside the corporate firewall). This is driving innovation as firms come to grips with the fact that they are part of a wider value chain. Suppliers and partners need to collaborate, and as they coordinate their efforts they are using processes to work with each other. In turn, I believe that this trend is going to drive adoption of SaaS-oriented BPM solutions.
  4. Innovation is occurring in both thinking and technology to support the needs of knowledge workers (like you and me). Not only do we need more accessible products (who wants to rely on the IT department to hold your hand?), but we need new ways of thinking about process and collaboration.

And then there are a few Problems we all need to deal with if we are to get success.

  1. Deal with the Politics First – it’s an absolute waste of time trying to move forward with a BPM initiative if you haven’t got your ducks in a row. I would posit that one of the reasons why a CoE approach tends to be more successful than one-off BPM projects is precisely because of the organizational buy-in required to create that sort of entity. However, the fact remains that a CoE is a major overhead on your first BPM project; establishing a CoE is an evolutionary step on the path toward BPM Nirvana. Managers should address this issue early on to avoid disappointment. This is as much about ensuring that the process is appropriately rooted in the organization as anything else.
  2. It is people, not computers that drive innovation (contentious issue I know). BPM Vendors need to get off their soap-box; they need to avoid the “add water and stir” flavor in their marketing.
  3. Resource management – I mentioned it earlier, but there are just not enough good people with knowledge and expertise available. Where are the skills going to come from? I would suggest it is much better to train your own (growing your own capabilities) rather than assuming that some outsource provider will give you the resources at the right time. I bet you were thinking about your project team resources … I am talking about within the business itself. As an industry, we have spent so long conspiring to keep the business professional in the dark, yet now they need to grapple with how things get done around here (one of the best definitions of a Business Process I have heard was from Howard Rheingold … “The Way Things Get Done Around Here”). Point is, you cannot outsource change.

Do some of these themes resonate? I am sure there are other “Elephants In The Room” that the industry needs to grapple with. What do you think?


An Introduction to BPMN – Chapter 5 from the BPMN Modeling & Reference Guide

April 9, 2009

Recently, there have been a couple of blog postings on use of Signal events in BPMN. First was Rick Geneva and then a response from Dave French (waving hi to a fellow Kiwi).

Anyway, it prompted me to look into my book (BPMN Modeling and Reference Guide) and polish up a chapter for free distribution. So I put it on the BPM Focus web site (you’ll find it on Page 4 of the White Papers section). You’ll need to register to get access to the document, but don’t worry – that’s free. If you are already registered there, then you will just need to log in.

The reason for mentioning it within this context is the extensive use of Signals that I make within that Chapter.  I meant to post this some time ago, and of course it slipped. The discussion on Signals just reminded me to get it out there.


Holy Crap – Is It That Long

April 7, 2009

I have posted just one blog post in the best part of 6 months, I am wondering whether I even qualify as a blogger. I suppose not … problem is that when I want to say something, I find I want to say it well and in considered form. Then I get distracted with some current deliverable (or crisis, or proposal, or call, or …) and the feeling passes.

Of course, I still have the current book hanging around my ears to get out too, excerpts of which might make good postings. The problem is that this book is turning into a bit of an opus … my perspectives on life the universe and everything to do with BPM and Transformation. So having virtually completely re-written the whole thing, I am about to take the knife to it again, perhaps splitting it into two books to make them both more accessible.

I am sure that the end result will be worth it … I mean, apart from Academics looking for a single reference guide on BPM for their students, who wants (or needs) a complete analysis of all things BPM’ish. So I am working with a colleague to cut it down to a business oriented discussion about the power of managing processes as a way of, in the end, driving organizational transformation.


Process Portfolio Analysis Webinar – in 2 hours

February 3, 2009

I have been ignoring the blog while I have been trying to get the book finished. What a struggle – somewhat akin to passing yourself and all your works through the eye of a needle. But it is getting there.

Anyway, I should have posted here the details of the upcoming webinar we are staging with the Value Chain Group.

Managing The Roadmap – Process Portfolio Analysis

Register here


Modeling & The Current State (Modeling the “As Is” process is mostly a waste of time)

December 8, 2008

It seems a contentious point of view in Business Process Management – but when we come up to the “Understand Phase” (“As Is” or “Current State” model ), we recommend “time boxing” the work to ensure that the activity is kept at a suitably high level. The intention of this activity is really to create a baseline; a reference point for the BPM project.

Now those who continue with their “legacy thinking” perspective usually decide that it is important to create a detailed description of how work happens. They model everything in sight, trying to create an accurate representation of the work as it happens today. While this is good for the “billable hours” of consulting firms, it does little for the business managers engaged on a journey of change and discovery.

The point is that the amount of work expended here is usually wholly inappropriate to the benefit derived. If your intention is to change the way things happen, gathering a great deal of detail around current work practices is a waste of time. If you are going to improve things (with or without the use of automation), then you will be changing how the process is carried out … i.e. how things happen today will soon become a thing of the past.

Don’t get me wrong, it is absolutely essential to develop a baseline understanding of the way things are done. It’s just a question of emphasis. The issue for those involved in the exercise is just what degree of detail is required. They should be asking “can we stop now?”

The real purpose of current state modeling is to establish a baseline – so that the team can establish a realistic business case (allowing them to track benefits and improvements during and after implementation), and to identify the areas that require attention.

This is more about a pragmatic assessment of reality and clarification of current performance metrics than it is about process modeling. The metrics in question are those that the customer of the process really cares about (not the detailed cycle times of some low-level sub-process). From a modeling point of view, the need is for enough structure to hang the metrics upon (and perhaps one level of detail below). Anything more than that is a waste of time and resources.

So how much detail do you really need? Well, I normally start with a high-level outline of the process – the major chunks – and then draw a simple high-level process model. I recommend a high-level BPMN diagram, but I usually seek to contrast that model with a Role Activity Diagram (not the same as a flow diagram with swim lanes – RADs model how the Roles involved change state and synchronize their actions), and perhaps a simple Object State Transition Network (showing how the things moving through the model change state).

With a high-level flow diagram or outline of the process, it is really very straightforward to develop these alternative views, and they really do help people see things differently. I often say that the problem with Flow Diagrams is that “the more you look at them, the less they mean.” Flow diagrams always look correct – for example, in my recent book “BPMN Modeling and Reference Guide” (authored with Stephen White, the main author of the BPMN specification itself), I have yet to receive a note from anyone telling us that we have a major flaw in one of the models (yes, there is at least one). It just looks correct (and this is a book where we tried our very hardest to make sure every model was “right”).

Incidentally, the best reference on RADs is Martyn Ould’s “Business Process Management – A Rigorous Approach.” And for OSTN, I prefer the IDEF3 perspective as it is relatively simple and easy to understand (UML also has similar modeling capabilities).

Coming back to the Understand Phase, in the workshops with the Subject Matter Experts (SMEs), I also seek to understand the volumes of work, any major exceptions and the percentages of items that follow the major paths from decision points. The other thing to understand is the numbers of employees involved in the work (FTEs and the amount of time spent on each area of the process).  From that information you can calculate the costs of undertaking the work and where the money goes. You also need to understand the roles involved, and the capabilities of the staff members who fulfill those roles. All of these become essential ingredients for the business case. Without it you are whistling in the wind (when it comes to asking for funding). Even if you already have the funding, you should do this anyway (it will certainly be needed later).
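As a purely illustrative example of the baseline arithmetic (all figures here are made up), the FTE counts and time fractions gathered in those workshops translate directly into a view of where the money goes:

```python
# Made-up illustration of a baseline cost breakdown from workshop data:
# for each process area, FTEs involved x fraction of their time x loaded
# annual cost per FTE gives the annual cost of that area.
areas = {
    # area: (FTEs involved, fraction of their time, loaded annual cost per FTE)
    "intake":     (4, 0.50, 80_000),
    "assessment": (6, 0.75, 95_000),
    "approval":   (2, 0.25, 110_000),
}

costs = {a: ftes * frac * rate for a, (ftes, frac, rate) in areas.items()}
total = sum(costs.values())

# Largest cost areas first -- these are the candidates for attention.
for area, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{area:12s} ${cost:>10,.0f}  ({cost / total:.0%} of baseline)")
```

Crude as it is, this is often enough structure to hang the business case on: it shows where the money goes today, and gives the reference point against which post-implementation improvements can be tracked.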

I could go on here at length, but the point I am trying to make here with this blog post is this … if your consulting provider is asking you to fund a detailed “As Is” phase of work, then you are throwing money away. They are more interested in lining their pockets than assisting the client. The only exception that I can think of is where the process is itself highly regulated (and a rigorous work definition is mandated by law). In such cases, I think you have to draw your own conclusions on how to avoid “analysis paralysis.”


BPM – Is it a Software Engineering Tool? A Technology? or a Management Discipline?

November 30, 2008

In his excellent posting, Keith Swenson makes many good points. He points to the range of interpretations of BPM, and particularly highlights the issues associated with its interpretation by software engineers as just another piece of hype on the road to good programs. But I think there is another, perhaps more important strand buried in there. As Keith points out, BPM is about the Management of Business Processes.

As we all know, everyone’s interpretation of the term Business Process is different. In my training (whether that be BPMN, or higher level training on BPM Methods), it is one of the first things I get people to write down (inside the first 5 mins), and no surprise, every definition is entirely different. And when those people are senior managers in a company, their interpretation of the term is invariably what I call “Process as Purpose“. The point is that they see Processes as being more about the purpose than the constraint implied by sequencing of steps. They are there (at the training) because they see the importance of “Managing” their processes. Indeed, they see that concept (Managing Business Processes) as central to the success of their companies.

[[ I am still down in Brazil, and I am really struck by the process sophistication of the people I am meeting. They all get it. I was more than surprised to find two “Business Owners” (people who own significant businesses), giving up 3 days of their time to come on a course around how to structure and run BPM programs. They were cherry picking from the broad range of techniques we covered, but ask yourself whether you could imagine the CEO or COO deciding that they should attend a public training course. That’s what I am getting at about the sophistication of the Brazilian business climate. ]]

Coming back to Keith’s post, he describes a spectrum of BPM interpretation – from pure Software Engineering (where the SW Eng tries to reflect the needs of the business person’s Business Process); through Business Processes being modeled by a business person, then tossed over the fence to a Software Engineer to finish them off; to the Business Process as modeled by the business person, then being directly executed (what he called “Pure BPM”). I am not quite sure I agree with the Pure BPM bit, but I do know what he is pointing to … where the processes of the firm are driven by models, without translation to some intermediate executable format (like, say, BPEL).

One of the comments on Keith’s post points to the challenges of getting business people to model their own processes and make the resulting collection of stuff useful. The commenter described the usual resulting mess as an “expensive disaster”. And the reason for this is that business people don’t usually have the sophistication to understand their business problem as a set of inter-related processes that between them deliver on the “Process as Purpose” concept I referred to earlier.

Invariably, process modelers (whether IT or business) tend to see a process problem as a single process. They interpret the high level Purpose as a single implementation process (which invariably it is not). They make all sorts of mistakes such as mixing up “Handle an Instance” with “Manage the Flow of Instances”; they switch from batch mode to handling a single instance; they don’t think about the interfaces between processes (handing information from one to another), etc. What they do is try to connect up everything that sounds like an Activity into one convoluted process.

Now software engineers are usually more adept at the necessary abstract thinking, but that doesn’t mean to say that business people cannot wrap their pretty heads around the notions. It is merely a reflection of the fact that they have not had adequate training. What is missing (across this entire industry) is better learning around “Process Architecture” – what “chunks” do you need and why. Poor chunking leads to unnecessary complexity (and even “expensive disasters”).

We are still stuck with decomposition as the prevailing mind set – where sub-processes are always contained within the parent. SOA concepts seek to get around this, but there is also a higher level “Business Services Oriented Architecture”. Processes should not be regarded as some sort of static hierarchy, they are more accurately regarded as a network of interacting instances. Think more jigsaw puzzle than org chart.

When I gave a “Power Breakfast” on BPM and Process Architecture at the last Gartner BPM Summit, I had a packed room (it started at 7:30 in the morning, so these people were keen). I described a set of methods that you could use to go from “What Business Are You In” to “What Processes Do You Need” right down to the SOA components, if that is what you wanted to do (I would recommend looking at a BPM Suite first rather than going straight to the SOA software engineer’s paradise). I only saw one person out of the 90 or so get up and leave, and nearly everyone else gave me their card at the end of it. The room was composed mostly of Enterprise Architecture folks from the IT community, all of whom struggle with this transition.

Switching tacks – the vendors’ BPM Suites are unconsciously making this architecture problem worse. With only a few exceptions (Pega, Itensil, BizAgi … I am sure there are others in this category too; these are the ones that spring to mind), vendors interpret the business process problem as being entirely separate from the data and artifacts associated with the process (business people see them as intertwined). They regard the process-relevant data as a set of named value pairs … the information required by Process A is declared on Process A, and must be recreated on Process B and then mapped from one to the other (if the processes need to communicate with each other). That means there is an extra (unnecessary) layer of complexity for business people trying to reflect their business problem. Moreover, if you change one process, then you need to refactor all the interfaces. This is “software engineering” oriented thinking.

The other approach is to define your data structures (perhaps as an “Entity” defined as an extensible set of XML artifacts) and then describe the views on those artifacts at the level of the data structure. Then it is merely a matter of associating your processes with the Data Entity, and all the different views become available. Process interfaces become an order of magnitude more accessible (to the business user), you can use any number of processes to support a single case of work, and again it helps move away from the software engineering mindset we find in so many BPM tools (which were often created to solve the problem of Enterprise Application Integration … hence their association with Software Engineering).
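The contrast between the two approaches can be sketched in a few lines. This is an illustrative toy, not any vendor’s actual API; the field names, the `CaseEntity` class, and its `view` method are all hypothetical:

```python
# "Named value pair" style: each process declares its own fields, so
# handing a case from Process A to Process B needs an explicit field
# mapping that must be refactored whenever either process changes.
process_a_data = {"cust_name": "Acme", "claim_amt": 1200}
field_map = {"cust_name": "customer", "claim_amt": "amount"}  # A -> B
process_b_data = {field_map[k]: v for k, v in process_a_data.items()}

# Entity style: define the data once on a shared case entity; the views
# are described at the level of the data structure, and any number of
# processes can attach to the same entity.
class CaseEntity:
    def __init__(self, **fields):
        self.fields = dict(fields)  # an extensible set of artifacts

    def view(self, *names):
        """A view on the entity, defined once and reused by any process."""
        return {n: self.fields[n] for n in names if n in self.fields}

case = CaseEntity(customer="Acme", amount=1200, region="BR")
# Two processes supporting the same case of work - no mapping layer.
intake_view = case.view("customer", "region")
payment_view = case.view("customer", "amount")
```

The point of the second style is that adding a field to the entity, or attaching a third process to the case, requires no interface refactoring between processes.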


Brazil seems full of “People Who Get It”

November 26, 2008

Last night I was “privileged” to go along to the Brazilian Quality Awards (Fundação Nacional da Qualidade) here in Sao Paulo. I was invited along to their national awards ceremony by FNQ. I am here running some training – BPM Process Modeling Fundamentals (BPMN) and Developing A Structured Approach To BPM – all hosted at the FNQ offices.

Now I don’t speak Portuguese, so the speeches went over my head (pity, as most of the audience sat in rapt attention). But what struck me was the seriousness and understanding of the business people I met – serious in the sense of processes are really important to them; and understanding in the sense of the journey they are on.

And some are well advanced on that journey. I mean, how many Executives (business owners) do you meet who really do understand the notion of truly managing their business through processes … let alone one who, it turns out, has no functional departments at all? His company has no Functional Managers, just Process Owners and the associates working with them. Not many that I can recall.

Coming back to the Quality Awards, I can see it is really a serious business here. The speeches went before the dinner … and with no jokes (that made the audience laugh) until 50 minutes in, it was tough going (so that’s one SLA not well met).

But here in Brazil, they really seem to get the Connect, Communicate and Collaborate ideal, which was in sharp contrast with some of the power games played by the traditional (American/British) way of doing business. With the people I have met here in Brazil, I have been struck by the contrast with some of those I met on my way here.

I met a number of Senior Executives from brand name companies. Whether their parent companies get it or not (at the C level exec level), one thing is for sure, the local management talent know what it is all about to compete through process. A perspective that I felt was backed up by the quality of the Business Analysts I met on the courses so far. Make no mistake – Brazil is a rapidly growing market for BPM.


BPMN 2.0 – Marriage Made In Heaven or Trough of Disillusionment

October 31, 2008

Inside the OMG there has been a heated debate about whether BPMN 2.0 should become linked more explicitly to UML … so many heated exchanges to chew through. This blog posting was put together in that context.

It was George Box who said: “All Models Are Wrong, Some Are Useful.” We should learn to live with that reality.

By modeling something, we are removing some aspect of the real world in order to represent it. And yet, the IT-oriented folks continue to flail about looking for one true modeling notation and set of semantics to rule them all (like string theory). As though somehow everything must be translatable and interconnected. I think most business folks don’t really care. They use models to communicate with each other … and yes, they use circles and arrows, and boxes and clouds, and … only a very few have any interest in making them all relate to each other.

It is only when we get down into the IT organization that all of this stuff has to be translatable and traceable … that all the classes and elements have to get along (be placed in some interconnected network of stuff).

We currently have a Business Process Modeling Notation (sans rigorous meta-model), we also have a Unified Modeling Language (avec rigorous meta-model)… both can be used to model processes (even businesses). But they are different and some folks feel the need to move stuff between these two approaches. We invented BPDM (another rigorous meta-model) as a mechanism for doing that sort of thing along with providing a competing BPMN serialization (to XPDL). But BPDM was deemed too hard by many (or too expensive to implement support for when you already have UML) … at least we have seen little appetite in the market by vendors for supporting it. Most of the BPM Suite/Workflow vendors out there are on XPDL.

The idea with BPDM was to create a semantic layer that would allow translation between these modeling notations (and others). Or more precisely, that which can be translated should be able to be translated with “semantic integrity”. It would also allow for extension of the semantics for different needs. But for UML to work alongside this would have required a UML Profile (or some other detailed integration at the semantic level) – and the folks with the skills and expertise for this sort of thing chose not to invest their time and energy in developing such an interchange format (between UML and BPMN via BPDM).

But that’s all history now. What these well resourced players could sign up to was a future version of BPMN. So now we have BPMN 2.0 – with all the hope and promise of an effective marriage between orchestration (BPMN) and choreography (something that is needed for effective interchange of models but very few people understand fully).

The BPMN 2.0 RFP calls for: “A single specification, entitled Business Process Model and Notation (BPMN 2.0), that defines the notation, meta-model and interchange format … Extension of  BPMN notation to address BPDM concepts … [will need] changes that reconcile BPMN and BPDM to a single, consistent language. The ability to exchange business process models and their diagram layouts among process modeling tools preserving semantic integrity. Enhancements in BPMN’s ability to:

Model orchestrations and choreographies as stand-alone or integrated models. Support the display and interchange of different perspectives on a model that allow a user to focus on specific concerns.” Further … “Proposals shall specify conformance criteria that clearly state what features all implementations must support and which features (if any) may optionally be supported.”

At the same time, it now seems that BPMN 2.0 has to provide a high level modeling approach and traceability down through the stack (which means UML, right?). There are various other camps – all attempting to twist the specification in their own particular direction. I hear one group saying “let’s make BPMN reflect the needs of BPEL”; others saying we should now make BPMN part of UML (I must get asked whether that is going to happen at least once at every conference … always with a look of dread on the part of the person asking); others wanting stronger choreography support (personally, I would like to see something emerge that could support a translation to Role Activity Diagrams, which is a much more powerful approach to modeling how roles collaborate and inter-operate than what I have seen so far in BPMN 2.0).

So now we are in the trough of disillusionment (the marriage vows have yet to be cemented; the RFP is but a hazy memory of a drunken engagement party). We have two different groups (power bases) lobbying for ascendancy – well, not really lobbying; let’s just say they are struggling to work out which bits of each other’s proposals they like, what they can live with, and what they don’t like. And there is a lot more soul searching (work) to go on there.

Let’s recap on where we seem to be now:

  • There is a Notation Specification with some (IMNSHO) half baked choreography support (along with an abstract syntax). It fixes some things and has missed the point on others.
  • There is another Specification (that derives from the BPDM) which describes a more robust set of process semantics … let’s call that the Process Modeling Framework for the moment. This is still perceived as too difficult for some to wrap their heads around … but in the end it is where the UML piece will have to tie in (if someone is going to invest the effort).
  • Then there is a specification that is supposed to outline the mapping from one to the other.

As far as I can tell, all three of these documents require significant further work to marry and align – personally, I can’t see this being finished in the near future. It won’t be just a one-cycle delay. And that’s before we take on the UML interface challenge (although I am sure someone stepping up to the plate on that one would be welcome, they would be doing it against a moving target). We also need to think about how we will embrace the current XPDL community (an upgrade path). And now it looks like we are about to invent a couple of new modeling approaches, which of course some are already saying should somehow be like BPMN (or UML).

In the end, this stuff only makes sense in context of enabling businesses to work more effectively. BPMN 2.0 needs to give true model portability (with semantic interoperability). We need conformance levels (but first we need to decide on where the lines in the sand are for those different levels). We need to … stop broadening the effort and focus more on getting to the result.

It can’t be that hard – especially when we have all the “Wise Wizards at OMGee” working feverishly on the problem.