Friday, August 29, 2008

IT and Business Analyst working together

During a recent web seminar I presented, I got the following question, and I want to share the answer on the blog: "How much IT involvement would be necessary for making rule changes, assuming that the business person can change the rule and test it on their own?"

IT is always responsible for the production platform, and as such it controls the quality of the rules deployed to the servers. In the simplest process, business users maintain the rules within the web-based BRMS component and even deploy them to a test server to do some simulation testing and what-if analysis. Once the rule execution set is ready for deployment to production, IT can create a baseline, extract the rule set from the rule repository, and then run a non-regression test suite to verify that the rules still work as expected. This is also a good time to verify that any changes made at the data model level (logical or physical models) do not impact existing rules.
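A non-regression test suite at that step can be as simple as replaying recorded baseline cases and checking that the outcomes are unchanged. Here is a minimal sketch; the discount rule, its parameters, and the check helper are all invented for illustration, and in practice the test would invoke the deployed rule set rather than inline Java logic:

```java
// Hypothetical non-regression test for an extracted rule set.
// The rule logic below is a stand-in for a rule pulled from the repository.
public class RuleRegressionTest {
    // Illustrative rule: "gold customers with orders over 1000 get a 10% discount".
    static double discount(String category, double orderTotal) {
        if ("GOLD".equals(category) && orderTotal > 1000.0) {
            return 0.10;
        }
        return 0.0;
    }

    public static void main(String[] args) {
        // Baseline cases captured before the rule change; re-run after
        // each maintenance cycle to detect regressions.
        check(discount("GOLD", 1500.0), 0.10);
        check(discount("GOLD", 500.0), 0.0);
        check(discount("SILVER", 1500.0), 0.0);
        System.out.println("All baseline cases still pass");
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9) {
            throw new AssertionError("regression: got " + actual);
        }
    }
}
```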

The following diagram illustrates the different activities per role, which I can easily imagine as a standard maintenance process:

So IT is still responsible for production quality control, version control, software component integrity, and production server monitoring and management, and that will not change in the near future.

Friday, August 22, 2008


I spent the last four months digging into Complex Event Processing offerings, and especially into how BRMS and CEP can work together to support new types of business applications and to move toward a more agile IT architecture. As stated by Professor David Luckham in his book 'The Power of Events', future applications need to address the following problems:

  • Monitor events at every level of the IT system.
  • Detect complex patterns of events, consisting of events that are widely distributed in time and location of occurrence.
  • Trace causal relationships between events in real time.
  • Take appropriate action when patterns of events are detected.

Adding a CEP engine to our IT architecture will help bring agility, as we can define complex patterns of events, hot deploy them, execute them, and broaden their scope over time. CEP solutions today use a language close to SQL to define statements which filter, aggregate, and join events, and apply pattern matching on streams of events. Those statements are deployed to a CEP engine which continuously evaluates them against a flow of events; when the conditions match, subscriber applications (or code) receive the complex or synthetic event and do something about it. If we consider those statements as rules, a BRMS is a good candidate to manage them, as it offers all the tooling needed to treat those rules as assets with their own life cycle.
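This deploy-then-subscribe model can be sketched in a few lines of Java. The engine, event, and statement classes below are illustrative stand-ins, not the API of any real CEP product:

```java
// Toy sketch of the CEP deployment model: statements are registered
// ("hot deployed") with the engine, and subscriber callbacks fire when
// an incoming event matches. All names here are invented for illustration.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

public class MiniCepEngine {
    static class Event {
        final String type; final double value;
        Event(String type, double value) { this.type = type; this.value = value; }
    }

    static class Statement {
        final Predicate<Event> condition; final Consumer<Event> subscriber;
        Statement(Predicate<Event> c, Consumer<Event> s) { condition = c; subscriber = s; }
    }

    final List<Statement> statements = new ArrayList<>();

    // "Hot deploy" a statement: it is evaluated against every future event.
    void deploy(Predicate<Event> condition, Consumer<Event> subscriber) {
        statements.add(new Statement(condition, subscriber));
    }

    // Continuous evaluation: each incoming event is tested against every
    // deployed statement; subscribers receive the matching events.
    void onEvent(Event e) {
        for (Statement s : statements) {
            if (s.condition.test(e)) s.subscriber.accept(e);
        }
    }

    public static void main(String[] args) {
        MiniCepEngine engine = new MiniCepEngine();
        List<Double> matched = new ArrayList<>();
        engine.deploy(e -> e.type.equals("BidEvent") && e.value > 100,
                      e -> matched.add(e.value));
        engine.onEvent(new Event("BidEvent", 150.0)); // matches
        engine.onEvent(new Event("BidEvent", 50.0));  // filtered out
        System.out.println(matched); // [150.0]
    }
}
```

A real engine adds time windows, joins, and aggregation on top of this basic filter-and-notify loop.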

As of today, a BRMS uses a Rete-based engine to evaluate and fire if-then style rules. An event processing statement looks more like a query. For example, suppose we want to extract bid events for a given car brand, occurring in a time window of 15 seconds, and compute the average price (one of the attributes of the BidEvent). The statement may look like:

select *, avg(price) as avgprice
from BidEvent(itemName='BMW').win:time(15 seconds)

I’m using the EPL syntax of the Esper open source product to illustrate this example. This is a very simple ‘rule’ with the expressiveness of SQL. It is easy for a programmer to understand, but it has big limitations when we need to communicate to a business user what the rule is doing. This is even more true in real applications: most of the time those statements become harder to understand (even for a programmer) when they combine joins, aggregations, multiple streams, and database lookups. Putting a higher-level language on top of this SQL-based approach will help in that respect. JRules, for example, offers a framework called BRLDF to define a business language on top of the lower-level programming language.
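To make explicit what the statement above asks the engine to do, here is a hand-rolled Java sketch of a 15-second sliding time window with an average. The field names follow the EPL example; the class and method names are my own illustration, not Esper code:

```java
// What BidEvent(itemName='BMW').win:time(15 seconds) with avg(price)
// boils down to: filter events, keep them for 15 seconds, expire old
// ones, and recompute the average as the window content changes.
import java.util.ArrayDeque;
import java.util.Deque;

public class SlidingWindowAvg {
    static class BidEvent {
        final String itemName; final double price; final long timestampMs;
        BidEvent(String itemName, double price, long timestampMs) {
            this.itemName = itemName; this.price = price; this.timestampMs = timestampMs;
        }
    }

    static final long WINDOW_MS = 15_000; // win:time(15 seconds)
    final Deque<BidEvent> window = new ArrayDeque<>();

    // Returns avg(price) over the events currently in the window.
    double onEvent(BidEvent e) {
        if (e.itemName.equals("BMW")) {          // filter: itemName='BMW'
            window.addLast(e);
        }
        // Expire events that slid out of the 15-second window.
        while (!window.isEmpty()
               && window.peekFirst().timestampMs < e.timestampMs - WINDOW_MS) {
            window.removeFirst();
        }
        return window.stream().mapToDouble(b -> b.price).average().orElse(0.0);
    }

    public static void main(String[] args) {
        SlidingWindowAvg w = new SlidingWindowAvg();
        System.out.println(w.onEvent(new BidEvent("BMW", 30000, 0)));      // 30000.0
        System.out.println(w.onEvent(new BidEvent("BMW", 32000, 5_000)));  // 31000.0
        System.out.println(w.onEvent(new BidEvent("BMW", 34000, 20_000))); // 33000.0, first event expired
    }
}
```

An engine does this incrementally and for many statements at once, but the window semantics are the same.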

Those statements have a business motivation, and CEP applications are really pushed by business users, as those technologies help identify patterns of events relevant to the business. We are moving to real-time BI. So with this business dimension we can consider CEP statements as business rules. This means a BRMS can support the management of those statements and offer an integrated environment for business analysts and developers.

Now, one of the main questions is the deployment model. Does a Rete-based engine cope well with the high-throughput, low-latency requirements of current CEP applications? The answer is very close to no if we are talking about millions of events per second and rules applied on sliding or jumping time windows… If the constraint on the number of events is relaxed, a BRE may be a good solution. We have been using JRules or Rule C++ in the telecom industry since the mid-90s for alarm filtering and correlation applications, and that was the main driver of BRE demand at the time. So we can use Rete for event processing. In fact, in the long term we can imagine having different engines under the same product, or one engine which picks the most efficient algorithm according to its deployment model. I will post some examples of rules using events in a future post.