Wednesday, December 17, 2008

Manufacturing Equipment System and CEP

I just finished a proof of concept for a manufacturing customer on how to use an inference engine to process events coming from a Manufacturing Equipment System. Without breaching any confidentiality, I just want to highlight the use cases, the proposed architecture, and give some examples of rules.
At a high level, the business use cases are real-time fault detection and equipment monitoring. Equipment here means the tools running on the manufacturing floor; these tools process parts. After a certain amount of work, a tool needs some maintenance, and any parts assigned to it need to be inhibited so the MES can route these Work In Progress parts to equivalent tools.

A tool can generate alarms, and the system can take preventive action and/or alert people to avoid bigger problems. So for fault detection a rule looks like:

if Alarm_id == 36 then inhibit the parts running on the tool initiating the alarm, and send email and SMS messages to the floor manager

I have to complete this rule set with correlation rules that filter out and remove consequential alarms, to avoid generating too many actions. For example:

When {
    alarm1: Alarm(category == TrackBlocked);
    alarm2: Alarm(category == ScannerEmpty; equipmentId == alarm1.equipmentId; this != alarm1)
} then {
    retract alarm2;
}

For equipment monitoring, the manager wants to monitor product-to-tool limits. Once thresholds are reached we can trigger a maintenance request on the tool, and reroute the parts scheduled for it. Events are sent at each job completion (a job is the processing of a manufactured part on a tool). At each event the rule engine updates counters and verifies thresholds. If thresholds are met, the engine triggers the maintenance request and reroutes the parts, as sketched below.
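
To make this concrete, here is a minimal Java sketch of the counter update and threshold check; the class names and the MES calls are illustrative, not the customer's model, and in the real system this logic is expressed as rules rather than hand-written Java:

import java.util.HashMap;
import java.util.Map;

// Illustrative only: per-tool job counters with a maintenance threshold.
public class ToolMonitor {

    // Hypothetical event emitted by the MES at each job completion.
    public static class JobCompletionEvent {
        final String toolId;
        JobCompletionEvent(String toolId) { this.toolId = toolId; }
    }

    private final Map<String, Integer> jobCounters = new HashMap<String, Integer>();
    private final int maintenanceThreshold;

    public ToolMonitor(int maintenanceThreshold) {
        this.maintenanceThreshold = maintenanceThreshold;
    }

    // Called for each event; shows the intent of the threshold rule.
    public void onJobCompletion(JobCompletionEvent event) {
        Integer current = jobCounters.get(event.toolId);
        int count = (current == null) ? 1 : current + 1;
        jobCounters.put(event.toolId, count);
        if (count >= maintenanceThreshold) {
            triggerMaintenanceRequest(event.toolId); // ask the MES for maintenance
            inhibitAndRerouteParts(event.toolId);    // reroute WIP to equivalent tools
            jobCounters.put(event.toolId, 0);        // restart counting after the request
        }
    }

    private void triggerMaintenanceRequest(String toolId) { /* MES call */ }
    private void inhibitAndRerouteParts(String toolId) { /* MES call */ }
}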

The architecture has to integrate CORBA and Java RMI.

The CEP server code is simple and uses a JRules POJO rule session to insert events in working memory. The performance requirement is to process one event per second and to manage a sliding window of one hour (configurable). So the code manages a sorted list of events and removes older events that are no longer part of the time window. This is very simple and good enough for this event volume and time-window management, as the throughput and latency goals are far from what low-latency CEP engines are built to process.
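
As a sketch of that time-window management, assuming a simple timestamped event and names of my own choosing (this is not the customer's code):

import java.util.Iterator;
import java.util.LinkedList;

// Illustrative sketch of a one-hour sliding window over timestamped events.
public class SlidingWindow<E> {

    private static class Entry<E> {
        final long timestamp;
        final E event;
        Entry(long timestamp, E event) { this.timestamp = timestamp; this.event = event; }
    }

    private final LinkedList<Entry<E>> window = new LinkedList<Entry<E>>();
    private final long windowSizeMillis;

    public SlidingWindow(long windowSizeMillis) {
        this.windowSizeMillis = windowSizeMillis; // e.g. 3600 * 1000 for one hour
    }

    // Events arrive roughly in order (about one per second), so appending
    // keeps the list sorted by timestamp.
    public void add(long timestamp, E event) {
        window.addLast(new Entry<E>(timestamp, event));
        evictOlderThan(timestamp - windowSizeMillis);
    }

    // Drop events that fell out of the window; in the real application they
    // would also be retracted from the engine's working memory.
    private void evictOlderThan(long limit) {
        for (Iterator<Entry<E>> it = window.iterator(); it.hasNext();) {
            if (it.next().timestamp < limit) {
                it.remove();
            } else {
                break; // list is sorted, nothing older remains
            }
        }
    }

    public int size() { return window.size(); }
}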

What was interesting on this project was the classification of the application as event-based processing, and so entering the CEP space, even though we are not really working on streams of events, using any SQL-like language, or computing aggregations. This is a classical alarm filtering and correlation application, where an inference engine can do a good job.

Friday, November 21, 2008

SOA slowing down

Gartner says the "Number of Organizations Planning to Adopt SOA for the First Time Is Falling Dramatically". I will summarize the findings below:

  • 53% of the people contacted are using SOA in some part of their organization
  • 25% were not using it but had plans to do so in the next 12 months
  • 16% have no plan to use it
  • 20% are building an Event Driven Architecture
  • 20% are planning to do EDA in the next 12 months
  • Since the beginning of 2008, there has been a dramatic fall in the number of organizations planning to adopt SOA for the first time: down to 25 percent, from 53 percent in 2007
  • Many organizations have evaluated SOA and have chosen not to spend time and effort on it
  • The highest concentrations of organizations not pursuing SOA, and having no plans to do so, are in process manufacturing, agriculture and mining
  • SOA adoption in Europe is nearly universal, moderate in North America, and lagging in Asia
The main reasons listed are:
  • No clear business case, and a great deal of confusion about how to construct a business case for SOA
  • Lack of skills, and no plan to acquire the necessary skills
  • Use of modern programming environments is closely associated with SOA
  • Staff with legacy skills are wary of SOA

In my recent work as a co-author of the book "Resilient Information System", I had to study the pains and use cases our BRMS customers have in terms of SOA. I'm still conducting interviews on that matter, but basically there are four main subjects to address when migrating to SOA:
  • Data: work on the definition of a domain data model, or put in place a transformation layer to present the different views of the data to the applications.
  • Business rules: be able to understand, externalize, and easily change the business rules outside of the applications.
  • Expose functions as business services: once reusable points have been identified, the services can be designed and documented (WSDL) for reuse.
  • Re-engineer the business processes to support automation using reusable services.

The successful deployments among our customers are done in that order. If a CIO buys the full SOA suite upfront, he will most likely not get the bang for his buck, as the migration to an extended SOA is long. So the business case has to be built incrementally, with the first goal of deploying tools that empower business users to change their business policies. When you are successful on this matter, the other business cases are easier to articulate, because business users see the value. As explained in the Agility Chain Management System, the core technologies supporting the implementation are MDM, BRMS and BPM. Deploying SOA without BRMS and MDM leads to the red path of the SOA maturity matrix. BPM, as a BPEL orchestration engine, is not the solution to buy upfront; it arrives later in SOA maturity, once we have reusable services to orchestrate.

SOA is a viable architecture design and approach, as OOD was in programming, so please do not kill it. It has to be decided as a corporate initiative and implemented incrementally, project after project.

Thursday, October 30, 2008

BRE part of the service layer

I just had a look at the presentation from Sandy Kemsley given at the Business Rules Forum, "Mixing Rules and Process", and at her blog entry on BPMS and BRMS integration. I can quickly summarize the points she made that I love, and share some feedback:
  • Does BPM have rules? Yes, but typically not full-featured business rules. Rule changes may require redeploying processes, with IT involvement.
  • Separate rules from process: externalize decisions from the process and call the BRE from the BPE.
  • Benefits of the separation: complex rules automate manual processes, rules are reused across processes, and rules change without redeploying processes.
In her blog entry she writes: "As a BPM bigot, I see rules as just another part of the services layer... but I didn’t hear that from any of the vendors." It may be because the ILOG team was not speaking... ;-)

We have been pushing this message and delivering such projects for 4 to 5 years, with a clear separation between the decisions made by business rules and the process flow, and with solutions designed so that the business process calls the decision service. In ABRD I even clearly propose this approach as a best practice, for the reasons mentioned above:
  • different velocity of change and agility
  • reuse of decision points
  • better version control and change management support in BRMS
What we observe in the field is that customers are at ease designing a web service, or a service in the larger SOA sense, that uses a rule engine to implement the decision logic behind it, but they are in pain deploying executable processes, because not all the other parts (services) of the business process are ready. So BRMS is the first technology deployed in a SOA revamping, before BPM. Green-field deployments are difficult to find, and legacy applications are moving slowly toward being revamped as reusable services. Not to mention the nightmare of data model management.
We are still at the lower-left part of the SOA maturity adoption matrix and of the Agility Chain Management System.

Wednesday, October 1, 2008

JSR94 - an over-ambitious Java standard?

Recently I had to redo some JSR94 code, and I'm still interested in this work done some years ago; I'm still supportive of it. But recall what Rod Johnson was asking recently in one of his presentations, "Where will tomorrow's enterprise innovation come from?": is JSR94 one of those "unhealthy Java standards", like JDO was, done in a period when the Java community wanted to standardize everything?
As a recap, JSR-94 is an industry standard that defines how Java programs deployed in J2SE or J2EE can acquire and interact with a rule engine. Being able to change the engine implementation is a nice design approach, but as of today this specification is limited by the fact that there is still no standard to exchange rule definitions between engines, so rules written for one engine cannot be used by another. This dramatic limitation undermines the use of JSR94, and I'm still surprised to see architects asking for compliance with it.
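
As a reminder of what the API looks like, here is a minimal JSR-94 stateless execution sketch; the provider URI, provider class and rule execution set binding are placeholders, since those values, like the rules themselves, are vendor-specific:

import java.util.ArrayList;
import java.util.List;
import javax.rules.RuleRuntime;
import javax.rules.RuleServiceProvider;
import javax.rules.RuleServiceProviderManager;
import javax.rules.StatelessRuleSession;

public class Jsr94Client {

    public static void main(String[] args) throws Exception {
        // Vendor-specific values; placeholders here. Loading the provider
        // class registers it with the RuleServiceProviderManager.
        String providerUri = "com.example.ruleengine";
        Class.forName("com.example.ruleengine.RuleServiceProviderImpl");

        RuleServiceProvider provider =
                RuleServiceProviderManager.getRuleServiceProvider(providerUri);
        RuleRuntime runtime = provider.getRuleRuntime();

        // Assumes a rule execution set was registered under this URI,
        // either with the vendor tooling or the RuleAdministrator API.
        StatelessRuleSession session = (StatelessRuleSession)
                runtime.createRuleSession("/rulesets/validateOrder", null,
                                          RuleRuntime.STATELESS_SESSION_TYPE);

        List facts = new ArrayList();
        // facts.add(...): the business objects the rules reason on
        List results = session.executeRules(facts);
        session.release();
        System.out.println("Engine returned " + results.size() + " objects");
    }
}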

Still, there is some blue sky on the horizon: the W3C is working on the final specification of the Rule Interchange Format (RIF), which should help exchange executable rule definitions between engines supporting it. With RIF, JSR94 will make sense in the design of decision services.

Friday, September 12, 2008

SBVR thoughts

I spent two days last week studying the Semantics of Business Vocabulary and Business Rules (SBVR) specification. SBVR is part of the OMG’s Model Driven Architecture (MDA), with the goal of capturing specifications in natural language and representing them in formal logic so they can be machine-processed. It includes two specialized vocabularies:

* The Vocabulary for Describing Business Vocabularies, which deals with all kinds of terms and meanings.
* The Vocabulary for Describing Business Rules, which deals with the specification of the meaning of business rules, and builds on top of the previous vocabulary.

Meanings are refined into concepts, questions and propositions. A meaning is what someone intends to express or understands. A phrase such as "We deny the invoice if the medical treatment was done more than one year after the accident" has a clear meaning for a claim processor within a car insurance company. Analysts need to logically transform this meaning into concepts that have a unique interpretation, so that they can represent the business knowledge within a comprehensive vocabulary and rules.

Business rules fall into two possible classes:

* Structural (definitional) business rules are about how the business chooses to organize the things it deals with; they are considered necessities. The statements describing such a rule can express necessity, impossibility or restricted possibility.
* Operative (behavioral) business rules govern the conduct of business activity. They are considered obligations and are directly enforceable. When considering an operative business rule, it is important to look at the level of enforcement, which specifies the severity of the action imposed by the rule in order to put or keep it in force. Statements describing such a rule express obligation, prohibition, or restricted permission.

In SBVR, rules are always constructed by applying necessity or obligation to fact types. A fact type is an association between two or more concepts.


SBVR defines a complete grammar to specify business logic. I think it may be too complex to be easily accepted in the field, even if it may be seen as mandatory when moving to an extended SOA: there is a need to define an enterprise ontology, to avoid reinventing the definition of the terms used by all the services deployed in the SOA.

From a project management perspective, SBVR brings interesting risks, as the project team may engage in a huge documentation effort without delivering working software. Not so agile, this stuff. "Just good enough" should be kept in mind, and building this conceptual model by increments may be a good approach.

The second important risk is linked to the size of the universe we have to cover. Defining all the vocabulary and business policies for a given business application will generate a lot of facts and rules, which may be implemented in different environments or not implemented at all. The decision of which rule goes where will then add time to the project.

I had in mind using a subset of SBVR to help control the way we harvest rules during the discovery and analysis activities, but only in the context of what needs to be implemented. The next ABRD drop, at the end of the month, will propose things around that.

Any comments are welcome.

Thursday, September 4, 2008

HP printer and business rule

AAAAAH. Today I discovered a very nice implementation of business rules done by HP. At home I have an HP Photosmart 8250 printer, which today displayed the message: "the ink cartridges are past their expiration date!"

So each ink cartridge has a date, and the code in the printer tests this business rule to force me to buy new cartridges... It is not enough that the printer drinks cartridge ink the way we drink water in 100F heat; in case we are not printing enough, this rule enforces the business.

Sad commercial behavior, bad rule. I'm now just hoping they are using a rule engine, so I can change the rule on the fly... A dream.

Friday, August 29, 2008

IT and Business Analyst working together

During one of the last web seminars I presented, I got the following question, and I want to share the answer on the blog: "How much IT involvement would be necessary for making rule changes, assuming that the business person can change the rule and test it on their own?"


IT is always responsible for the production platform; as such, it controls the quality of the rules deployed to the servers. In the simplest process, business users maintain the rules within the web-based BRMS component, and can even deploy them to a test server to do some simulation testing and what-if analysis. Once the rule execution set is ready for deployment to production, IT can create a baseline, extract the rule set from the rule repository, and then apply a non-regression testing suite to verify that the rules still work as expected. This is also a good time to verify that any changes done at the data model level (logical or physical models) do not impact existing rules.


The following diagram illustrates the different activities per role in what I can easily imagine as a standard maintenance process:


So IT is still responsible for production quality control, version control, software component integrity, and production server monitoring and management, and nothing will change there in the near future.

Friday, August 22, 2008

CEP and BRE

I spent the last four months digging into Complex Event Processing offerings, and especially into how BRMS and CEP can work together to support new types of business applications and move toward an agile IT architecture. As stated by Professor David Luckham in his book “The Power of Events”, future applications need to address the following problems:

  • Monitor events at every level of the IT systems
  • Detect complex patterns of events, consisting of events that are widely distributed in time and location of occurrence
  • Trace causal relationships between events in real time.
  • Take appropriate action when patterns of events are detected.


Adding a CEP engine to our IT architecture helps bring agility, as we can define complex patterns of events, hot deploy them, execute them, and improve their scope over time. CEP solutions today use a language close to SQL to define statements which filter, aggregate, and join events, and apply pattern matching on streams of events. Those statements are deployed to a CEP engine which continuously evaluates them on a flow of events; when the conditions match, subscriber applications (or code) receive the complex or synthetic event and do something about it. If we consider those statements as rules, a BRMS is a good candidate to manage them, as it offers all the efficient tooling to treat those rules as assets with their own life cycle.


As of today, BRMS products use Rete-based engines to evaluate and fire if-then rules. An event processing statement looks more like a query. For example, suppose we want to extract bid events on a given car brand, occurring in a time window of 15 seconds, and compute the average price (one of the attributes of the BidEvent). The statement may look like:

select *, avg(price) as avgprice

from BidEvent(itemName='BMW').win:time(15 seconds)


I’m using the EPL syntax of the Esper open source product to illustrate this example. This is a very simple ‘rule’ with the expressiveness of SQL: easy to understand for a programmer, but with big limitations when we need to communicate with the business user about what the rule is doing. This is even more true in real applications: most of the time those statements become harder to understand (even for programmers) when they combine joins, aggregations, multiple streams and database lookups. Putting a high-level language on top of this SQL-based approach will help on that matter. JRules, for example, offers a framework called BRLDF to define a business language on top of a lower-level programming language.
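
For reference, deploying and subscribing to that statement through Esper's Java API looks roughly like the sketch below. This assumes the com.espertech.esper.client API of the Esper 2.x line; method names such as addEventType vary slightly across Esper versions, and the listener logic is purely illustrative.

import com.espertech.esper.client.Configuration;
import com.espertech.esper.client.EPServiceProvider;
import com.espertech.esper.client.EPServiceProviderManager;
import com.espertech.esper.client.EPStatement;
import com.espertech.esper.client.EventBean;
import com.espertech.esper.client.UpdateListener;

public class BidAverage {

    // Minimal event class carrying the itemName and price used in the EPL.
    public static class BidEvent {
        private final String itemName;
        private final double price;
        public BidEvent(String itemName, double price) { this.itemName = itemName; this.price = price; }
        public String getItemName() { return itemName; }
        public double getPrice() { return price; }
    }

    public static void main(String[] args) {
        Configuration config = new Configuration();
        // Older Esper releases name this method addEventTypeAlias.
        config.addEventType("BidEvent", BidEvent.class);
        EPServiceProvider epService = EPServiceProviderManager.getDefaultProvider(config);

        // Deploy the statement from the post.
        EPStatement stmt = epService.getEPAdministrator().createEPL(
            "select *, avg(price) as avgprice " +
            "from BidEvent(itemName='BMW').win:time(15 seconds)");

        // The subscriber receives the synthetic result each time it updates.
        stmt.addListener(new UpdateListener() {
            public void update(EventBean[] newEvents, EventBean[] oldEvents) {
                System.out.println("avg price: " + newEvents[0].get("avgprice"));
            }
        });

        epService.getEPRuntime().sendEvent(new BidEvent("BMW", 35000.0));
    }
}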


Those statements have a business motivation, and CEP applications are really pushed by business users, as these technologies help identify the patterns of events relevant to the business. We are moving to real-time BI. With this business dimension, we can consider CEP statements as business rules. This means a BRMS can support the management of those statements and offer an integrated environment for business analysts and developers.


Now, one of the main questions is the deployment model. Do Rete-based engines support the high throughput and low latency that current CEP applications require? The response is very close to no if we speak about millions of events per second, with rules applied on sliding or jumping time windows... If the constraint on the number of events decreases, a BRE may be a good solution. We have been using JRules or ILOG Rules (C++) in the telecom industry since the mid 90s for alarm filtering and correlation applications, and that was the main driver of BRE demand at the time. So we can use Rete for event processing. In fact, in the long term we can imagine having different engines under the same product, or one engine that picks the most efficient algorithm according to its deployment model. I will post some examples of rules using events in a future post.

Sunday, July 20, 2008

Example of a rule analysis

I will try to do some specific rule analysis to illustrate some of the ABRD concepts. ABRD proposes doing the analysis activities as soon as possible and in parallel with the rule discovery; this helps to identify issues and gaps, and to clarify the rules in scope. So let's imagine a claim processing application for car accidents: injured persons report the accident claim, and medical providers send medical bills to the insurance company. One important step of the rule discovery is to clearly understand the business process. For this example, the basic process for claim processing looks like the following diagram:

In brown are the decision points that are rule-rich, and so will be mapped to rule sets. Taking one rule from the adjudication decision point, we will build a domain model:

Ask for an expert audit if one of the treatments includes an emergency room treatment or an ambulance transfer done on a day after the accident.

Claims adjudication in insurance refers to the determination of the member's payment, or financial responsibility, versus the company's. So obviously the company wants to pay for what it committed to, no more. Here the policy states that we do not want to pay for an emergency visit or ambulance transportation which was not on the same day as the accident. For the emergency room we can have some exceptions: if the injured person came back two days later for headaches, or pains that could be consequences of the accident, we need to get some complementary information. For the ambulance there is mostly no reason to pay. So from this little analysis we need to develop two rules, one for the emergency room case and one for the ambulance, as the actions are different: one action will trigger an audit request and the other will set the reimbursement amount to zero.

The analysis process starts by looking at the business terms used by the rule. Here we can highlight: audit, treatment, day of accident (also referenced as day of loss), and type of treatment, like emergency room or ambulance transfer. Let's start with the fact that the claim is related to a car accident; this means our object model should support a claim type with a day-of-loss concept attached to it. There is at least one injured person who needs medical treatment, so the claim relates to the persons involved in the accident. An injured person has injuries and related treatments. A medical bill includes the description of the medical treatment done on the patient. By discussing with the claim processor, or googling 'medical procedure code', we can understand that each treatment has a code that we can use to get the type of treatment. The medical bill in fact follows a legal form like the UB-92 or HCFA-1450. From those forms we can also build our data model.

The audit may include a list of reasons, so that we can manage multiple questions or items to review in a single audit. The claim and the medical bill follow a life cycle supported by a state machine. Part of the state machine management is done by the rules, part by the application; for example, when the rules find an issue on the claim, they change its status. From these descriptions we can build the following UML class diagram.

Once we have this UML diagram, let's generate the Java classes, load them into a Java project within Eclipse, and tune the model so it is more efficient for writing rules, using a test-driven development approach and refactoring. I encourage using Maven to package the project and to efficiently manage the jar dependencies. Once this is done, we can start JRules Rule Studio and create the Business Object Model entry. I will detail that in a next blog; a rough skeleton of the generated classes is sketched below.
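
As an illustration of the starting point, the generated classes could look like the skeleton below; the class and attribute names come from the analysis above, and getters, setters and most associations are omitted:

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Skeleton of the claim domain model sketched in this analysis.
public class Claim {
    private Date dayOfLoss;                 // the day of the accident
    private String status;                  // life cycle managed partly by rules
    private List<InjuredPerson> injuredPersons = new ArrayList<InjuredPerson>();
    private List<MedicalBill> medicalBills = new ArrayList<MedicalBill>();
    // getters/setters omitted
}

class InjuredPerson {
    private String name;
    private List<MedicalTreatment> treatments = new ArrayList<MedicalTreatment>();
}

class MedicalBill {
    private List<MedicalTreatment> treatments = new ArrayList<MedicalTreatment>();
}

class MedicalTreatment {
    private String procedureCode;           // medical procedure code, gives the treatment type
    private Date treatmentDate;
    private double billedAmount;
}

class Audit {
    private List<String> reasons = new ArrayList<String>();  // multiple items to review per audit
}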

Friday, July 18, 2008

Very good success with recent web seminars

Hi
Thanks to all of you who attended my last web seminars on ABRD, SOA, and effective rule writing. You were around 850 registered and close to 400 attending. This shows an interest in best practices around business rules and their implementation. Feel free to ask me more questions by email; I will start responding to the most common questions on this blog and on the isis blog.

Thursday, June 12, 2008

SOA and BRMS

Well, I have not posted in a while. I was working on an interesting subject related to Complex Event Processing; I will detail that in the near future.

In the meantime, coming back to our purchase order sample, I will take the architect's hat and outline the deployment of a BRMS within a SOA approach.

SOA has proven to be a great approach for making enterprises more agile. With its ability to separate business logic from implementation, a BRMS adds more capabilities to a SOA. SOA is a progressive architectural style for creating and using business tasks packaged as services. The main goal of these services is to expose loosely coupled business functions to facilitate deployment, combination, and reuse within different applications. SOA supports the integration of heterogeneous systems, as soon as they are exposed over the internet using standard protocols like HTTP, XML, and a unified interface definition (WSDL).

Every business has rules, and business users want complete control and visibility over these rules. A BRMS decouples an application's business logic from its data access and from its flow control. The decision logic is exposed behind decision services which support some business task, such as ValidateCustomer, ValidatePurchaseOrder, or CheckCreditHistory... With a BRMS, those services delegate the processing to a rule set deployed within a rule engine. The rule engines can be managed by a rule execution server, which provides the pooling mechanism, rule set management, and execution monitoring functions.
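
To make the decision service idea concrete, here is a minimal, vendor-neutral sketch; all interface and class names are mine, and the RuleSession abstraction stands in for whichever vendor session API is used (JRules POJO rule sessions, JSR-94, ...):

// Vendor-neutral sketch: a decision service that delegates to a rule engine.
// All names here are illustrative, not a specific product API.

class PurchaseOrder { /* the order data sent to the rules */ }

class ValidationResult {
    boolean valid;
    String reason;
}

// Abstraction over the vendor rule session.
interface RuleSession {
    Object execute(String rulesetName, Object input);
}

public class ValidatePurchaseOrderService {

    private final RuleSession ruleSession;

    public ValidatePurchaseOrderService(RuleSession ruleSession) {
        this.ruleSession = ruleSession;
    }

    // The decision logic lives in the "validatePurchaseOrder" rule set,
    // deployed and versioned in the rule repository, not in this code.
    public ValidationResult validate(PurchaseOrder order) {
        return (ValidationResult) ruleSession.execute("validatePurchaseOrder", order);
    }
}

The point of the design is that the decision logic stays in the rule repository, versioned and managed by the BRMS, while the service keeps a stable contract for its SOA consumers.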

A business application leveraging MDM, BPEL and BRMS may look like the following diagram:


This generic diagram can be used to explain how a BRMS can be deployed in a SOA. In dark blue are the components coming with the ILOG BRMS.

The SOA is organized as a set of business services used by the business application logic. This business application layer is modeled using use case diagrams or a business process map, and may be implemented using BPEL orchestration and/or a standard Model-View-Controller web application. The business services leverage lower-level services, which I packaged in a technical service layer. They support lower-granularity services like accessing business objects (DAOs for creating, reading, updating and deleting data), master data like product definitions and enumerated domain values, sub-process flows, web services, and the rule execution server. In this logical view of the architecture I do not need to expose the Enterprise Service Bus, which may be used as a communication layer between the application logic and the business services.

In fact this diagram, even if it outlines one application, can be applied to multiple applications: the business and technical services can be reused to support other business cases.

The rule developer creates the rule project structure, the data model used by the rules, and the rule execution flow, prototypes some rules, and then hands over to the business analysts to complete the rule authoring activity. Business analysts use the web-based interface to manage the full life cycle of the rules. Rules are persisted in a rule repository. The deployment of a rule set can be done without stopping the core application, and can be controlled within the monitoring and control environment of the production platform. Using a BRMS, the IT staff and business analysts can work more closely together, as they use the same high-level language to write business rules and both understand the rule flow, so the long-term maintenance of the application is greatly enhanced.

Within a SOA, the BRMS brings an effective and efficient mechanism to manage the decision logic of any business application. The architect needs to design the decision services so they are reusable, and the rule architect needs to design the rule execution flow to take into account the specificities of each execution context. Rule set reuse may imply some tests on the context of the caller: this context is exposed to the rule engine as a fact in working memory, and the rule flow can have some initial tasks that test it and branch accordingly.

I will dig into code details later.

Sunday, May 11, 2008

A sample to illustrate the ABRD approach

There have been a lot of requests from colleagues, customers, and readers of ABRD for a sample covering the main activities of ABRD from A to Z. I'm working on a book with a friend of mine which will cover all the ABRD activities in detail and provide detailed samples. For the moment, I will take a simple purchase order process. There are a lot of samples on the web around this process, but here I'm focusing on a simple version of it.
This order process is a typical B2B scenario, in which multiple stateful services interact with each other. There are two sources for the order request: one online, with an Internet customer submitting an order on a web page, and one automated, with a purchase order issued by a well-known, authenticated customer-partner.

In ABRD, using a business process centric approach for discovery – analysis – implementation, we start with the analysis of the process; then we do the data modeling, rule harvesting, and service design in parallel, as the following diagram outlines:

During the process modeling, the rule and process analysts search for verbs which involve mental processing or thinking, like check, qualify, compute, calculate, estimate, evaluate, determine, assess, compare, verify, validate, confirm, decide, diagnose... to identify the activities which carry business decisions and logic and apply some business knowledge. Those activities are what we call "decision rich", and they get the focus of the rule analyst for discovering rules.

Purchase Order Process description

The process is initiated when an online customer submits a product order, or when an automatic purchase order issued by a well-known authenticated customer is received on an EDI link. A customer validation step is executed using the customer data and the product request data. Based on the product data provided in the order, it is checked whether the product is in stock. If the product is available, the price for the shipment of the ordered product is determined and presented back to the customer. If the product is not available, a message is sent back to the customer by email or web page (if still online). This last case also triggers an email to the product manager to provision the stock for this product.

The total price of the order is calculated from the product price, the customer's business conditions, the loyalty program... and the shipment price. When the charging and the shipment have been performed successfully, the order is filed and the process ends.

Process Analysis

Looking at this process and the description above, we can extract the following decision points where business rules will apply:

Decision point name and description:

  • Validate Customer Data: verify the customer information (data present or not, black-listed customers).
  • Validate Purchase Order: verify there is no fraud in the purchase order, or across a set of purchase orders coming over a time period; verify the data is correct.
  • Calculate Total Price: as the price may account for a loyalty program, marketing campaigns, stock level, and other business dimensions, business rules are relevant here to compute the price beyond a simple addition.
  • Get Shipment Report: it may be possible to receive a set of events from the shipment company, to aggregate and correlate them with the same purchase order, so that the customer, as well as the internal management team, can be notified of major types of events.

A decision point table is an important deliverable of the analysis. It can be built during the inception phase by the team responsible for outlining the project's high-level scope. If it is not present, the project team has to develop it quickly during the first iteration of the project.

We will detail the outcome of the rule discovery in a next post.

Successful web seminar on ABRD

Hi
Thanks a lot to all the attendees of last week's web seminar; we set records for both registered and connected persons. There were a lot of questions, and I could not respond to all of them, so I will use the blog to answer the questions related to Agile Business Rule Development and other architecture discussions.
For Europe we plan to do a second web seminar; I will announce it here.

Friday, April 25, 2008

SOA Maturity Matrix, Agility Chain Management System

Today I would like to present a very nice article in the French IT journal “Solutions logiciels”, related to sustainable architecture. I'm working on this matter with Pierre Bonnet, the author of the article, trying to share experiences on BRMS as one of the cornerstones of the Agility Chain Management System. I will summarize the main points of this article, written in French, as it is relevant worldwide:

The IT infrastructure in a lot of companies suffers from a lack of investment to modernize the legacy, which adds complexity when development teams deploy new applications. Business and IT managers focus on short-term ROI, putting on hold the long-term investments needed to build a sustainable architecture.

Added to this, we can foresee three layers of IT staff: the seniors, with their knowledge of the current legacy systems and the business requirements; the intermediate layer, with people skilled in client/server and EAI technologies; and the younger one, with object-oriented skills and web and web services development backgrounds. The younger staff are involved in tactical projects, using agile methodologies, but most of the time they generate rigid systems unable to change or to support a long-term logical architecture. The seniors are starting to retire, leaving a deep knowledge gap which is most of the time not transferred to the younger population. These are future bombs in the IT organization. What to do?

IT management needs to start long-term projects to rebuild the IT architecture for the next generation. Pierre proposes to start by looking at the current situation at the enterprise level, to be able to build a plan to transform the IT architecture for long-term needs. The second point is to take risks: we cannot manage an IT organization just by controlling the existing assets. The rapid changes in the economy, new regulations, and technology progress force managers to make risky choices. But the risk also lies in the change of the IT staff population and skill transfer, and in deploying new applications on a fragile architecture.

Finally, his last point is about clearly communicating to business and executive management about the non-quality of the current IT system, the medium- to long-term degradation of the IT services, and therefore of the company's competitive advantage. The communication has to leverage a strong action plan backed by a risk management plan.

Three principles support the argumentation:

1- The re-engineering of an IT architecture cannot be done in one shot. The diagram below presents a progressive path to migrate the existing architecture to a sustainable one: starting from a SOA to expose the existing legacy functions as services, then moving to the extended SOA, where BPM, BRMS and MDM are used in conjunction to offer agility at the service level for changes in referential data, decision logic, and business processes. The end of the path references the use of those technologies for the enterprise architecture.

2- The Agility Chain Management System: using the MDM, BRMS and BPM technologies, IT staff can start by externalizing referential data and parameters with the MDM, and business rules and decision logic with the BRMS, and then orchestrate services and support business processes with BPM.

3- An enterprise-level methodology: the migration path enforces a strong use of methodologies like TOGAF, Praxeme, or CEISAR.

Pierre addresses in this article some very important points and principles to take care of during a SOA migration. Currently we are seeing a lot of projects focusing on re-vamping legacy functions as web services, but this is not the end of the SOA story. I like very much the path to the extended SOA, and the maturity matrix as a tool to explain the long-term vision to IT management. Project managers can position where their project stands on this matrix, to clearly articulate the value of their project's product for the long-term IT vision.

I slightly modified the original maturity matrix to clearly show that any project falling into the bottom-right corner is the beginning of decision service re-engineering using the MDM-BRMS-BPM technologies, which the IT development team has to leverage for the long run. When people embrace those technologies, they see the value and how to leverage them at the enterprise level. As you can see, the green path is the way to go, and the red path goes nowhere, as it falls back into a rigid application behind a web service...

As an example of this kind of project, take a claim processing application where the development team and architect decided to migrate one of their services (Claim Coverage Verification) to use business rules to verify the coverage of the insured person for his claim. MDM was used to define all the referential data, such as the medical codes, the US states, zip codes... and other application parameters. The legacy application offers some data and business services through a re-vamped web service approach, and BPM was used to call the services, to dispatch the set of XML documents over the process steps, and to offer the GUI for the claim processors: the actors of the business.

Starting from there, the team can extend the migration of the application and design reusable, agile technical and business services.

Tuesday, April 22, 2008

Book your agenda: May 7 10.00 am PST. Web seminar on ABRD

Make Developing Business Rule Applications Easier

What: 60-minute best practices webinar
When: Wednesday, May 7
Who should attend: Architects, developers, application programmers, software engineers, IT senior managers, IT team leaders, and anybody who wants to learn more about developing business-rule applications

Learn about a rules-specific methodology designed for short turnaround times: Agile Business Rule Development (ABRD)

When you're building business rule applications, generic software-development methodologies just aren't enough.

Agile Business Rule Development (ABRD) is a step-by-step process for developing business rule applications. An iterative methodology, ABRD employs agile software-development values. Rule development is organized as a series of cycles: discovery, analysis, authoring, validation, and deployment. Your team stays on schedule, delivering outstanding business rule applications.

Attend this free one-hour webinar and learn how to:

  • Leverage ABRD, the first open-source methodology for business rules
  • Implement and deploy rules in an SOA or BPM context
  • Engage in rule discovery and analysis

Wednesday, April 2, 2008

Software Development Life Cycle for BR-BPEL applications

The purpose of this article is to present an integrated software life cycle for development teams that want to leverage the capabilities offered by technologies such as BPM-BPEL and a Business Rule Management System.

Each new business application development is triggered by a business need to support, enhance or improve business efficiency. Even with an agile approach, to start the project the architect, business users, project manager and project sponsor have to work together during the inception phase to define the business case, the requirements and the justification of the project. Once the project is funded, the team may work on specification and requirements documents. I simplify a little here, because in reality a lot more documents may be produced, but the goal is to present how those requirements are supported in an agile way using the BRMS and BPM products, not to discuss requirements management. So let's state that the specifications are our main entry point detailing the major work that needs to be done to develop the application. The format of the specification can be user stories or a more detailed description (a good reference for writing specifications and use cases is Alistair Cockburn's work). Specifications are classically managed by a standard SDLC which includes designing, building, testing, and deploying working code to the different staging platforms (development, test, or production).

The agile approach enforces short iterations to deliver quick value to the business. The blue tasks in the diagram below represent a set of iterations that build the core of the business application.


The horizontal axis of this diagram represents time. The pools contain the activities executed in the different development environments. The top pool is the business analyst team, which triggers changes to the application. When the development team uses BPEL and BRMS technologies, parts of the specifications go directly to those components: business processes are implemented using BPEL and a BPM process map, and business rules using the BRMS. In the first release (you can see a release as a vertical slice of the above diagram), the agile value of each of these technologies is not clearly articulated. It is true that BPEL and BRMS help to develop in less time and to adopt an agile approach to development. But the main value proposition comes in the longer term, from the ability to efficiently support the two types of change requests the business user may raise:
• Business rules and policies changes: the business wants to improve the decisions made by the rule engine
• Process tuning or improvements: the process needs to change due to legal constraints, better error and exception path management, or for better execution.

Business rule changes will be done in the BRMS, using a web-based interface or an IDE like JRules Rule Studio. The hot deployment capability of a rule set allows deploying a newly updated rule set in a matter of seconds. This helps a lot to support quick changes to the decision logic attached to the decision services called by the BPEL process instances. The rule developer can do a lot of trial and error in a pre-production environment to improve the quality of the rule set, and so the quality of the decisions. Some IT organizations implement the "champion - challenger" pattern to evaluate a new rule set against the production one. The champion rule set is in production and processes transaction requests with a certain level of metrics; a metric can be the number of automatic decisions, or some other business key performance indicator. On the pre-production server, the challenger rule set receives the same data flow, and the metrics are compared. If the challenger beats the champion, it becomes the new champion in production.
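
A sketch of that comparison logic, assuming a single KPI (the number of automatic decisions) and leaving out the plumbing that duplicates the data flow to the pre-production server; all names are illustrative:

// Illustrative champion-challenger comparison over one KPI.
public class ChampionChallenger {

    interface RuleSetEndpoint {
        Decision decide(Request request);
    }

    static class Request { /* transaction data */ }
    static class Decision { boolean automatic; }

    private final RuleSetEndpoint champion;   // production rule set
    private final RuleSetEndpoint challenger; // pre-production rule set
    private int championAuto, challengerAuto, total;

    ChampionChallenger(RuleSetEndpoint champion, RuleSetEndpoint challenger) {
        this.champion = champion;
        this.challenger = challenger;
    }

    // Both rule sets receive the same data flow; only the champion's
    // decision is returned to the caller.
    Decision process(Request request) {
        Decision c = champion.decide(request);
        Decision ch = challenger.decide(request);
        total++;
        if (c.automatic) championAuto++;
        if (ch.automatic) challengerAuto++;
        return c;
    }

    // KPI here: number of automatic decisions over the same traffic.
    boolean challengerBeatsChampion() {
        return challengerAuto > championAuto;
    }
}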

Business process improvement requests are managed using the BPM system, and a new process definition can be released to the different platforms. We definitely see more change requests coming for business rules than for the business process: when we have a working process, it is more sensitive to change. Still, business process changes will happen, for example to improve the exception management, or to find some shortcut in the global processing so that the time to complete the process is shorter.

The core application can still evolve with a new release path. Changes to the GUI or the data model will most likely be managed by a new release of the full application.
As a conclusion, it is important to use an agile development methodology which supports quick change when using BPM - BPEL - BRMS technologies; if you do not adapt your way of developing business applications, you may lose the opportunity to be agile.

Thursday, March 20, 2008

Industry First Open Source Methodology for Business Rules

Yesterday ILOG Inc announced the donation of its rule-based methodology to the Eclipse Foundation. I need to provide some explanations on what this donation is about. The Agile Business Rules Development methodology (ABRD) is the industry’s first free, vendor-neutral methodology delivered as an Eclipse Process Framework (EPF) OpenUP plug-in. ABRD provides a step-by-step process for developing business applications using technologies such as a Business Rule Management System, BPM, and BPEL.

ABRD mitigates the risk associated with new business rules initiatives by providing a well-documented and structured approach for developing rule-based applications. ABRD allows organizations to avoid using ad hoc processes or having to expend significant time and effort creating their own best practices.

In case you have never had a look at EPF: the Eclipse Process Framework provides tools for software process engineering to develop methodologies. It comes with content knowledge organized in libraries, and with a tool, EPF Composer, which enables process engineers and managers to implement, deploy, and maintain processes for organizations or individual projects based on the content of the library.



The goal of EPF is to deliver a platform for producing software development practices, how-tos, common definitions and vocabulary, and processes with task, role, work product and guideline definitions. Libraries are physical containers for knowledge content, process configurations and other parameters used to publish the content as a set of web pages. Method content describes what is to be produced, the necessary skills required, and the step-by-step explanations describing how specific development goals are achieved.
Processes describe the development life cycle. They take the method content elements and relate them into semi-ordered sequences that are customized to specific types of projects. They express who performs what work, and when.
EPF Composer provides a knowledge base of intellectual capital which we can browse, manage and deploy. This knowledge base, organized in plug-ins, forms the basis for developing processes. We can define Roles, Tasks, Work Products, and Guidance in a hierarchy of folders named Content Packages. All content can be published to HTML and deployed to web servers for distributed usage. Finally, process engineers and project managers can select, tailor, and rapidly assemble processes for their concrete development projects. Once the content is defined, you can define the process building blocks, called capability patterns, which represent best development practices for specific disciplines, technologies, or management styles.

"abrd_openup" is a plug-in which extends OpenUP. From there you can reuse the content to develop your own plug-in. I recommend extending ABRD without modifying it, so that you can leverage its future releases. If you want to contribute to abrd_openup, you can send me an email or comment on this blog; I will integrate most of the contributions in each release of the plug-in.
Have EPF fun!

Wednesday, February 27, 2008

Agile Business Rule Development EPF plugin.

The Eclipse Process Framework project just integrated the Agile Business Rule Development plugin into the OpenUP library. This abrd_openup plugin addresses the development of business applications using rule engine technology and a Business Rule Management System. This is pre-release 1.0, but it already integrates a lot of content around rule discovery and analysis. I am working on the architecture track and will deliver a new version in March. Any comments or contributions are welcome.
Please download EPF and the OpenUP library and let me know your thoughts.

Friday, February 1, 2008

A Cycle Approach for Business Rule Development

The Agile Business Rule Development methodology details all the activities needed to develop a rule set, from rule discovery to rule set deployment and maintenance. We can group these activities into five groups, which will be used to build an iterative approach to the development:
  • Rule Discovery
  • Rule Analysis
  • Rule Authoring
  • Rule Validation
  • Rule Deployment

The following diagram represents how the five groups of activities can be executed in a process flow using loops to implement short iterations. The rule set grows through these cycles, getting closer to the outcome expected by the business.

Figure 1 Rule Set Development Life Cycle

In the first loop, between Discovery and Analysis, the team harvests the rules from the business process description, the subject matter experts' knowledge, legal documentation, use cases or any other source. This loop represents the first phase of the rule set construction.

1.1.1 Cycle 1: Harvesting

This phase is limited in time. The development team splits the day into two parts: executing discovery workshops in the morning (2- or 3-hour sessions), then performing analysis and documentation for the rest of the day. The team iterates on these two steps for 2 to 5 days maximum, depending on the number of rules and their complexity. Meeting execution is based on standard requirements elicitation techniques. To make the best use of the development and business teams' time, it is important to plan the workshop sessions in advance and to clearly state what is on the agenda.

To organize the sessions, the project team may need to name a moderator responsible for managing the meetings and keeping the team on track. His other roles are to:

  • Establish professional and objective tone to the meeting.
  • Start and stop the meeting on time.
  • Establish and enforce the “rules” for the meeting.
  • Introduce the goals and agenda for the meeting.
  • Facilitate a process of decision and consensus making, but avoid participating in the content.
  • Make certain that all stakeholders participate and have their input prepared.
  • Control disruptive or unproductive behavior.
  • Gather “Open Points” and follow-up actions between sessions.

The goal is to document just enough rules to be able to start the implementation. In addition, this phase aims at understanding the object model within the scope of the application, and at identifying and extracting some rule patterns.

The starting point of the rule discovery is the decision point table: during the project inception, the project team does business modeling activities (not covered here) which aim at describing the business process and the decisions applied to the business events within the scope of the business application. One important work product built during this modeling phase is the decision point table, which describes the points in the process (tasks, activities, transitions) where many decisions are involved (test conditions and actions). These decision points represent potential candidates for rule sets.

1.1.2 Cycle 2: Prototyping

Once a certain level of discovery progress is reached, the development team should be able to define the structure of the rule project: the rule set parameters (input-output business objects), the basic sequencing of the rules (also called the rule flow), and the first major elements of the domain logical model. The team should then already be able to implement some rules.

The idea is to execute the "Rule Authoring" step as soon as possible, to uncover possible analysis and design issues. Indeed, most rules look good on paper, but the real issues arise most of the time during implementation and test. The next morning workshop session communicates the issues back to the business team. This leverages the feedback loop approach and provides an efficient mechanism to build a pragmatic, adequate and business-relevant executable rule set.

This second phase still includes some discovery and analysis, to complete the rule harvesting.

1.1.3 Cycle 3: Building

Executable rules are more important than rules defined on paper, or in requirements tracking tools in a non-executable form. This follows the spirit of the agile manifesto.

This agile statement is at the core of this cycle. Based on a Test-Driven Development (TDD) approach, the goal of this phase is to implement a set of test scenarios with real or close-to-real data, to test the rules within their corresponding rule sets and their targeted execution context.

The day-to-day authoring activities can be seen as a set of small steps including test case implementation, writing rules, executing them, and doing some validation with the team members, as the sketch below illustrates.
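
As an illustration of this loop, a rule set unit test could look like the minimal JUnit sketch below; the claim domain, the decision service interface, and the fake wired in setUp are placeholders standing in for the real rule engine session:

import junit.framework.TestCase;

// Minimal sketch of the TDD loop applied to a rule set.
public class AdjudicationRulesTest extends TestCase {

    static class Claim {
        int daysBetweenLossAndTreatment;
        boolean ambulanceTransfer;
    }

    static class AdjudicationResult {
        double reimbursementAmount;
    }

    // Stands in for the decision service that invokes the rule engine.
    interface AdjudicationService {
        AdjudicationResult adjudicate(Claim claim);
    }

    private AdjudicationService service;

    protected void setUp() {
        // Fake standing in for the deployed rule set, so the sketch runs;
        // in the real loop this is the rule engine session under test.
        service = new AdjudicationService() {
            public AdjudicationResult adjudicate(Claim claim) {
                AdjudicationResult r = new AdjudicationResult();
                if (claim.ambulanceTransfer && claim.daysBetweenLossAndTreatment > 0) {
                    r.reimbursementAmount = 0.0; // the business policy under test
                }
                return r;
            }
        };
    }

    public void testAmbulanceTransferAfterDayOfLossPaysNothing() {
        Claim claim = new Claim();
        claim.ambulanceTransfer = true;
        claim.daysBetweenLossAndTreatment = 1; // the day after the accident

        AdjudicationResult result = service.adjudicate(claim);

        assertEquals(0.0, result.reimbursementAmount, 0.001);
    }
}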

This cycle is still using short daily loops:

  • Loop on Authoring and Validation to develop test cases and rules
  • Loop on Analysis, Authoring and Validation to author executable rules, complete the analysis, do some unit testing and address/resolve issues.
  • Loop on a bi-daily basis on Discovery, Analysis, Authoring and Validation. The discovery will be used to complete the scope of the rule set and address the issues identified during implementation.

Cycle 3 should finish after 2 to 3 weeks. The goal is to release the rule set within an integrated development build, in order to start testing the business application with the decision service. The rule set need only be 40 to 60% complete; business users or rule writers will then elaborate and complete it in cycle 5 (Enhancing). But at the end of cycle 3, the object model used by the rules should be at least 90% complete, and the project structure should be finalized.

It is still possible to execute this cycle multiple times if the rule set is bigger than what can be done in three weeks (that is, if 40% of the rule set cannot be completed in three weeks). In this case, it is recommended to still time-box this cycle to three weeks and deliver a concrete build to the QA or validation team for review and execution, then embark on another build for the next three weeks.

1.1.4 Cycle 4: Integrating

The goal of this cycle is to deploy the rule set under construction to the execution server and the business application, to test it in an end-to-end testing scenario. The integration of the decision service and the domain object model is an important task: data is sent to the rule engine to fire rules and infer decisions. During the previous phases, the development team developed a set of test scenarios with realistic or real data which trigger rule execution. Those test scenarios are executed during the integration phase to support end-to-end testing; in the future they will serve as the non-regression test suite.

1.1.5 Cycle 5: Enhancing

This cycle can be seen as a more mature phase, where the goal is to complete the rule set and then maintain it.

It includes authoring, validation and deployment. It is still possible to do some short face-to-face discovery activities with the subject matter experts to address and wrap up remaining issues and questions. With this approach, the team responsible for maturing the rule set to close to 100% coverage can be a different team from the initial development one. This team is more business-oriented: as owners of the rule set and the business policies, its members can develop at their own pace, since they have all of the core infrastructure implemented by the development team.

It is important to note that there will be some need to enhance the object model or the physical data model to add new facts, attributes, or entities. This can be started in the rule Business Object Model view by an analyst, or done in the executable object model by a developer. Those modifications follow the standard release management process of the core business application.

We will detail some of those activities later.