Tuesday, November 17, 2009

BlueWorks

Over the last few weeks, I have been working on developing more content for the process track of ISIS and ABRD. I had to study and develop artifacts for business modeling activities using IBM BlueWorks. BlueWorks is a hosted set of applications used by business analysts to discover and model business-relevant content for a BPM deployment. A user can create elements such as strategy maps, organization maps, and BPM models in BPMN. The tools are simple to use and provide a good starting point for business process implementation. They belong in the palette of good tools for a process analyst.

BlueWorks is more than just a group of tools. It is also a knowledge-sharing platform, where users can learn about BPM products and best practices and see presentations of successful deployments. It is a very nice environment in which to learn, but also to collaborate.
In terms of useful tools, a user can go to the Business Leaders environment to create the following artifacts:
  • Strategy Map: defining, planning, and communicating the overall strategy of an organization.
  • Capability Plan: Business capabilities define what your business does, such as the services it provides to customers, or the operational functions it performs for employees.
  • Business Process Map
Everything runs on the web. This kind of application really shows the future of business applications: hosted and accessible over the web. Google Wave is another proof point for this trend, where people can collaborate and learn in real time.
Register and try it. You can also add comments and your findings to this blog.

Working on ABRD-JRules book

Just a quick update: I'm still working on the future book about Agile Business Rule Development and JRules. It is taking some time, given the hard work required by our current integration of ILOG within IBM.
This blog is not dead, and I should be able to spend more time on it in the coming weeks.

Wednesday, May 20, 2009

Service Component Architecture

Discussing agile IT architecture on this blog without covering SCA would be a major omission. I have been playing with SCA since last September, and I love it.
Service Component Architecture can be seen as an implementation model for SOA. You can read an excellent paper by David Chappell on this subject. SCA separates application business logic from implementation details. It provides a model that defines interfaces, implementations, and references in a technology-neutral way, letting us bind these elements to any technology-specific implementation.

From the business case and business user points of view, the value propositions are:
- Save time and money
- A simpler API and efficient GUI tools to assemble components into new applications
- Enable and encourage reuse: developers can create composites that perform useful functions, and SCA makes it easy to use and reuse business logic
- Agility to swap business logic in and out
- Visibility into how the application is built. I can easily imagine using the assembly diagrams to explain how 'business' components work together in the context of the current application.

On the architecture level the ability to separate business logic from infrastructure logic reduces the IT resources needed to build an enterprise application, and gives developers more time to work on solving a particular business problem rather than focusing on the details of which implementation technology to use.
Some key concepts:
A component implements some business logic exposed as one or more services that operate on business data. A component includes an implementation, and can have partner references and interfaces.

A component can declare the services it relies on using references. Explicitly defining references enables dependency injection: the SCA runtime locates the needed service and injects the reference into the component that needs it.
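To make the idea concrete, here is a minimal plain-Java sketch of reference injection, with hypothetical class names of my own. A real SCA component would mark the field with the SCA `@Reference` annotation and let the runtime do the wiring; here the wiring is done by hand to show the principle.

```java
// Hypothetical names; a plain-Java sketch of SCA-style reference injection.
interface CreditCheckService {
    boolean isCreditWorthy(String customerId);
}

// A trivial implementation the runtime could wire in.
class SimpleCreditCheck implements CreditCheckService {
    public boolean isCreditWorthy(String customerId) {
        return !customerId.isEmpty();  // placeholder logic
    }
}

// The component declares what it needs; it never instantiates it itself.
class LoanApprovalComponent {
    private CreditCheckService creditCheck;  // in real SCA: an @Reference field

    // The SCA runtime (or here, our own code) injects the resolved service.
    public void setCreditCheck(CreditCheckService service) {
        this.creditCheck = service;
    }

    public String approve(String customerId) {
        return creditCheck.isCreditWorthy(customerId) ? "APPROVED" : "REJECTED";
    }
}

public class ReferenceInjectionDemo {
    public static void main(String[] args) {
        LoanApprovalComponent loan = new LoanApprovalComponent();
        loan.setCreditCheck(new SimpleCreditCheck());  // in real SCA, the SCDL does this wiring
        System.out.println(loan.approve("C-42"));      // prints APPROVED
    }
}
```

The point is that `LoanApprovalComponent` only depends on the interface; swapping the credit check implementation requires no change to the component's code.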
Components are brought together within an assembly, or composite. The composite is persisted as an XML document using the Service Component Definition Language (SCDL).

A component can also define one or more properties. Each property holds a value that the component reads from the SCDL configuration file when it is instantiated. Finally, a domain can contain one or more composites, each of which has components implemented in one or more processes running on one or more machines.
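As an illustration, a small SCDL composite might look roughly like the following (component and class names are hypothetical, and the exact namespace varies between the SCA 1.0 and the later OASIS specifications):

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="LoanComposite">
  <component name="LoanApproval">
    <implementation.java class="demo.LoanApprovalImpl"/>
    <!-- a reference wired to another component in the same composite -->
    <reference name="creditCheck" target="CreditCheck"/>
    <!-- a property value injected at instantiation time -->
    <property name="threshold">10000</property>
  </component>
  <component name="CreditCheck">
    <implementation.java class="demo.CreditCheckImpl"/>
  </component>
</composite>
```

This shows the three concepts together: implementations, a reference wired between components, and a property read at instantiation.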

On the tools and implementation side, IBM certainly has an impressive offering in this space with WebSphere Integration Developer. Apache Tuscany is also an excellent open source project to help us jump into the technology and understand how it works. Eclipse Ganymede offers an assembly editor, an SCA project plugin, and more.

We will build our AML application using SCA.

Wednesday, May 13, 2009

AML with Event processing and Rule Engine

I have to build a demo and a presentation (co-presented) for IBM WebSphere Impact 09 on how business rules and business events work together in the context of Anti-Money Laundering. James Taylor did a good summary on his blog.
I promised some time ago on this blog to go through a complete example of executing ABRD on a project, so let's take this demo as the main example. Let's start with this first post: a short description of AML and its high-level process.

AML business context:

Money laundering is the act of hiding illegally earned money from police and tax authorities by making illicit funds appear to originate from legal business. Money laundering is a three-step process. The first step, called 'placement', consists of depositing illicit funds in a business bank account. If someone makes a cash deposit above $10,000, the bank is required to report the transaction to the government. The next step is called 'layering', wherein funds are moved from bank to bank and consolidated. The last step is 'integration', where the funds are reintroduced into the financial system as 'clean money'.

The first defense against money laundering is the requirement on financial intermediaries to know their customers, often termed KYC (Know Your Customer). By knowing its customers, a financial intermediary will often be able to identify unusual or suspicious behaviors, including false identities, unusual transactions, changing behavior, or other indicators of laundering.
Placement rules should be able to detect deposit structuring by one or more individuals at various bank locations, within a day or over time; notably, this can also include ATMs. Large wire remittance customers, such as Money Service Businesses, will deposit cash more often and in greater volumes than typical customers.

Rules to detect large cash placements, using various methods and locations, in a single day, week, or month, are therefore well suited for monitoring.
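As a sketch of the kind of placement rule involved, here is a structuring check of my own invention (class names and thresholds are illustrative, not from any product): flag a customer whose cash deposits within a sliding 24-hour window total more than $10,000, even when each individual deposit stays under the reporting threshold.

```java
import java.util.*;

// Illustrative sketch of a deposit-structuring check: flag a customer whose
// cash deposits within any 24-hour window sum to more than $10,000.
public class StructuringDetector {
    static class Deposit {
        final String customerId;
        final long timestampMillis;
        final double amount;
        Deposit(String customerId, long timestampMillis, double amount) {
            this.customerId = customerId;
            this.timestampMillis = timestampMillis;
            this.amount = amount;
        }
    }

    static final long WINDOW_MILLIS = 24L * 60 * 60 * 1000;  // one day
    static final double THRESHOLD = 10_000.0;                // reporting threshold

    // Returns the ids of customers whose deposits inside some 24h window exceed the threshold.
    public static Set<String> suspiciousCustomers(List<Deposit> deposits) {
        Map<String, List<Deposit>> byCustomer = new HashMap<>();
        for (Deposit d : deposits) {
            byCustomer.computeIfAbsent(d.customerId, k -> new ArrayList<>()).add(d);
        }
        Set<String> suspicious = new HashSet<>();
        for (Map.Entry<String, List<Deposit>> e : byCustomer.entrySet()) {
            List<Deposit> ds = e.getValue();
            ds.sort(Comparator.comparingLong(d -> d.timestampMillis));
            int start = 0;
            double sum = 0;
            for (int end = 0; end < ds.size(); end++) {   // slide a time window over the deposits
                sum += ds.get(end).amount;
                while (ds.get(end).timestampMillis - ds.get(start).timestampMillis > WINDOW_MILLIS) {
                    sum -= ds.get(start).amount;          // evict deposits older than the window
                    start++;
                }
                if (sum > THRESHOLD) {
                    suspicious.add(e.getKey());
                    break;
                }
            }
        }
        return suspicious;
    }
}
```

In a BRMS, the threshold and window would of course be externalized as rule parameters so the business can change them without redeploying code.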

Layering rules should identify bogus loans to offshore entities where the funds are never repaid, or where the loan is paid off in cash.

Auditing currently happens manually: the auditor examines data output from legacy applications and searches for cash transactions over a period of time. The goal is to migrate to continuous monitoring with a software component that will alert auditors to suspicious activities. Banks need to be aware of all the financial transactions that make up a money laundering scheme. The critical knowledge to manage is the associations between such transactions.


The first step of a business process modeling approach is to work on the business process.

The high-level process can be seen as four steps:

  • Detect fraud pattern
  • Analyze pattern
  • Investigate customer
  • Report on fraud


The detection of ML patterns looks at different sources of information, such as transactions, customer accounts, and loan servicing applications, searching for patterns of behavior that lead to potential money laundering. Pattern detection is done within a time window constraint: a person making a cash deposit on a yearly or monthly basis may not be a money launderer, while a person making cash deposits regularly without business motivation may be fraudulent. The analysis is a sub-process that examines the potentially fraudulent customer and searches for historical information or customer data points already gathered by the system. Investigation is, as of today, a human activity performed once the system reports a risky customer; its purpose is to complete the gathering of information on the customer. Money laundering is reported to the authorities once the investigation is completed and positive.

From this process we will evaluate, in upcoming posts, how to deploy an event processing engine and a rule engine for pattern detection, analysis, and investigation.

Friday, May 1, 2009

EDA and Rule Engine

Some time ago I presented an architecture overview and use case for deploying a rule engine inside an Event Driven Architecture. There are papers from analysts and other bloggers predicting that EDA will become a hot subject in the next few months. I want to share what I found interesting.
Event Driven Architecture is an asynchronous publish-and-subscribe communication pattern: publisher applications send events to a mediation layer, which notifies the subscribers interested in those events. The publisher is completely unaware of the subscribers. Components are loosely coupled in the sense that they share only the semantics of the message. The simplest Java implementation is based on JMS topics, as this is a natural API for publish-subscribe messaging.
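The pattern can be sketched without any JMS broker. This minimal in-memory version (names are my own, not the JMS API) shows the decoupling: the publisher only knows the topic name, never the subscribers.

```java
import java.util.*;
import java.util.function.Consumer;

// Minimal in-memory publish-subscribe sketch (not real JMS; names are illustrative).
public class TopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // A subscriber registers interest in a topic.
    public void subscribe(String topic, Consumer<String> listener) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(listener);
    }

    // The publisher posts an event; it is unaware of who receives it, if anyone.
    public void publish(String topic, String event) {
        for (Consumer<String> l : subscribers.getOrDefault(topic, Collections.emptyList())) {
            l.accept(event);
        }
    }
}
```

With real JMS, the bus is the messaging provider: subscribing maps to creating a consumer on a `Topic`, and publishing maps to sending a message through a producer; the decoupling is the same.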
The data carried in the message payload are events with business meaning. The goal of embracing EDA is to deliver real-time access to business data. This is not really an extension of SOA but a complement to it, as publishers may call services on event detection. But it can also be seen as orthogonal to SOA, since SOA uses a traditional procedural pattern built around synchronous, controlled orchestration of services.
Some are saying SOA is dead, replaced by EDA. Well, SOA is still a valid approach to designing IT architecture: SOA is not dead, and EDA is a complement to it. One thing that I think makes EDA very attractive is the flexibility to add new functions or applications without impacting existing ones.
By the way, EDA is not new: one of the mainframe programming models was to have batch applications wait for the results of other batches before processing their own work, which is very close to subscribers waiting for events coming from publishers. At the very least, we can say EDA is the distributed version of this old mainframe programming approach.

As soon as the current applications in the IT landscape are able to post events, you have the flexibility to add or remove listeners to address new business needs.

So why is a rule engine a critical component within EDA? One deployment is to use a BRE to support the implementation of such a listener: instead of developing a rigid application, you use a BRE to bring agility inside the flexibility. The component can be seen as a decision agent. The second interest is in the implementation of the event processing itself, which has to detect an event, process it, and take action.
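The decision-agent idea can be sketched as a listener that evaluates a list of externally supplied rules against each incoming event, instead of hard-coding its conditions. This is a toy sketch with names of my own; a real BRE would load and manage the rules from a repository and use a proper rule language.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of a "decision agent" listener: the conditions live in rule objects
// supplied from outside, so behavior can change without rewriting the listener.
// (Illustrative only; a real BRE loads rules from a managed repository.)
public class DecisionAgent {
    static class Rule {
        final String name;
        final Predicate<Double> condition;  // here: a condition on a cash amount
        Rule(String name, Predicate<Double> condition) {
            this.name = name;
            this.condition = condition;
        }
    }

    private final List<Rule> rules = new ArrayList<>();

    public void addRule(Rule rule) { rules.add(rule); }

    // Called for each incoming event; returns the names of the rules that fired.
    public List<String> onEvent(double amount) {
        List<String> fired = new ArrayList<>();
        for (Rule r : rules) {
            if (r.condition.test(amount)) fired.add(r.name);
        }
        return fired;
    }
}
```

The agility comes from the fact that rules can be added, removed, or changed independently of the listener that evaluates them.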

We can see multiple levels of support for this event processing, depending on the characteristics of the architecture and the type of event processing we are looking at. I see at least three:
- Simple event processing: the subscriber focuses on processing a few types of events with specific static conditions, and initiates actions such as creating a new event or calling a service. This processing may or may not be real time, and we may not need to consider any time dimension in the events.
- Event stream processing: events are ordered and arrive as a stream to the subscribers. The processing may involve time windows, count-based windows, time-based patterns, and so on. It is used to synthesize data in real time.
- Complex event processing: detects complex patterns of events, consisting of events that are widely distributed in time and location of occurrence. It supports low latency, high throughput, and complex event management with aggregation, joins, stateful operators, patterns such as 'event A followed by B and by C', and all combinations thereof.
The technologies supporting each level are different, and it is important not to use one in place of another, or we will generate frustration.
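As a taste of the event stream level, here is a tiny count-based sliding window, one of the simplest stream operators (names are illustrative): it maintains the running average of the last N event values.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of stream processing: a count-based sliding window
// that maintains the running average of the last N event values.
public class SlidingAverage {
    private final int windowSize;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum = 0;

    public SlidingAverage(int windowSize) {
        this.windowSize = windowSize;
    }

    // Feed one event's value; returns the average over the current window.
    public double onEvent(double value) {
        window.addLast(value);
        sum += value;
        if (window.size() > windowSize) {
            sum -= window.removeFirst();  // evict the oldest event
        }
        return sum / window.size();
    }
}
```

An event stream processing engine offers this kind of operator declaratively, together with time-based windows and pattern matching, so the developer does not hand-code the bookkeeping.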

Thursday, March 5, 2009

Sustainable IT architecture


In the current difficult economic period, where companies are merging, must comply with new regulations, reduce costs, and react more quickly to economic changes, building an agile and sustainable IT architecture is a must.
So the book "Sustainable IT Architecture" examines the use of Service Oriented Architecture (SOA) from the perspective of its contribution to the development of sustainable and agile IT systems, able to adapt to new technology developments and to manage business processes.
The book is the translation and adaptation of an earlier French book that had huge success in the IT community in France and in other French-speaking countries. I contributed one chapter, around BRMS and how this component is a major piece of the Agility Chain Management System. The book also arrives at a good time, when some architects and journalists are questioning SOA or claiming SOA is dying. There is no doubt about SOA, its value, and its approach. SMABTP, an insurance company in France, started deploying SOA in 2002, and without it they would not have their current healthy business. (Jean Michel Detavernier, co-author of the book, gave an excellent presentation at DIALOG09 on how he deployed SOA within SMABTP.) You should be able to order the book soon, and do not forget to comment on this blog.

Friday, February 27, 2009

ABRD v1.2 available on EPF download

Hi
Just to share with you: ABRD v1.2 is now available for download as part of the EPF practices library, instead of downloading it from CVS:

http://www.eclipse.org/epf/downloads/praclib/praclib_downloads.php

Please continue to contribute and help with its enhancement. I thank all the people who gave me good feedback and proposed changes that I incorporated in the current release or will incorporate in a future one.
ABRD is becoming more and more adopted and used. It is a simple set of practices for developing business applications using rule engine technology, and so is very relevant when we want to use a BRMS platform.
I have been quite busy these last months between the acquisition of ILOG and the book I'm writing on JRules and ABRD, so I could not blog as much as I would have expected.
But stay tuned, there is a lot to come in the coming months around BRMS...

Monday, February 9, 2009

BPMN Modeling and reference guide

As Derek Miers commented on one of my old posts, some of my BPM maps were not compliant with BPMN. He was certainly right, and I bought the book 'BPMN Modeling and Reference Guide', which I really encourage people to buy and read to become fluent in BPMN. I had been shortcutting the approach by looking at presentations found on the web; you can form your own interpretation of the notation and get it completely wrong. A reference is always preferable, and with this knowledge you should arrive at good process designs, or at least processes compliant with the intent of the standard.
The book is a must to make sure everyone understands what a BPMN diagram represents. Each BPMN modeler tool on the market has its own interpretation or 'add-ons', so it is always worthwhile to get back to the intent of the notation and the reference. The book is illustrated with a lot of samples. Very useful.
Thanks again Derek.

DIALOG 09- A Success

Last week I was at DIALOG 09, the ILOG customer conference. It was a real success, and a real pleasure to see all the BRMS customers presenting how they are able to empower business users to maintain rules. James is blogging on DIALOG, and you can find good summaries there.

I co-presented on effective rule writing, which is still a hot topic for business rule applications, and there will be a web seminar on this topic as soon as this week.
Another important subject is rule deployment and the different strategies for deploying rules with IT. I will blog on that soon.
The last presentation was on SMABTP, a very successful story of SOA adoption and BRMS deployment: in 2002, Jean Michel Detavernier (deputy CIO) had the vision to embrace SOA at the enterprise level, deploy rules everywhere, and put in place a truly agile IT architecture. It is now possible to define a new insurance product in days, where months were needed before. There are still customers, architects, and CIOs questioning the SOA value proposition; they may want to read such a presentation. In such a difficult economy, an agile IT architecture is a must, no question.