Wednesday, May 20, 2009

Service Component Architecture

Discussing agile IT architecture on this blog without covering SCA would be a major omission. I have been playing with SCA since last September, and I love it.
Service Component Architecture (SCA) is defined as a programming model for implementing an SOA architecture. You can read an excellent paper from David Chappell on this subject. SCA separates application business logic from implementation details. It provides a model that defines interfaces, implementations, and references in a technology-neutral way, letting us bind these elements to any technology-specific implementation.

From the business case and business user points of view, the value propositions are:
- Save time and money
- A simpler API and efficient GUI tools to assemble components into new applications
- Enable and encourage reuse - developers can create composites that perform useful functions. SCA makes it easy to use and reuse business logic.
- Agility to interchange business logic
- Visibility into how the application is built. I can easily imagine using the assembly diagrams to explain how 'business' components work together in the context of the current application.

On the architecture level the ability to separate business logic from infrastructure logic reduces the IT resources needed to build an enterprise application, and gives developers more time to work on solving a particular business problem rather than focusing on the details of which implementation technology to use.
Some key concepts:
A component implements some business logic exposed as one or more services that operate on business data. A component includes an implementation, and can have partner references and interfaces.

A component can indicate the services it relies on using references. Explicitly defining references enables dependency injection: the SCA runtime can locate the needed service and inject the reference into the component that needs it.
Components are brought together within an assembly, or composite. The composite is persisted as an XML document using the Service Component Definition Language (SCDL).

A component can also define one or more properties. Each property contains a value that the component reads from the SCDL configuration file when it is instantiated. Finally, a domain can contain one or more composites, each of which has components implemented in one or more processes running on one or more machines.
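As a sketch, here is what such an SCDL composite might look like. The component and class names (DetectionComponent, aml.DetectionComponentImpl, the depositThreshold property) are hypothetical, but the elements and the SCA 1.0 namespace follow the OSOA assembly specification as implemented by Apache Tuscany 1.x:

```xml
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="AMLComposite">

  <!-- A component: business logic, a configurable property, and a
       reference wired to another component in the same composite. -->
  <component name="DetectionComponent">
    <implementation.java class="aml.DetectionComponentImpl"/>
    <property name="depositThreshold">10000</property>
    <reference name="accountService" target="AccountComponent"/>
  </component>

  <component name="AccountComponent">
    <implementation.java class="aml.AccountComponentImpl"/>
  </component>

</composite>
```

Note how the wiring between the two components and the property value live entirely in the XML: the Java implementations stay free of lookup code, which is exactly the dependency injection point made above.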

On the tools and implementation side, IBM for sure has an impressive offering in that space with WebSphere Integration Developer. Apache Tuscany is also an excellent open source implementation to help us jump into the technology and understand how it works. Eclipse Ganymede offers an assembly editor, an SCA project plugin...

We will build our AML application using SCA.

Wednesday, May 13, 2009

AML with Event processing and Rule Engine

I have to build a demo and a presentation (co-presented) for IBM WebSphere Impact 09 on how business rules and business events work together in the context of Anti Money Laundering. James Taylor did a good summary on his blog.
I promised some time ago on this blog to go through a complete example of executing ABRD on a project, so let's take this demo as the main example. Let's start with this first post: a short description of AML and its high-level process.

AML business context:

Money laundering is the act of hiding illegally earned money from police and tax authorities by making illicit funds appear to originate from legitimate business. Money laundering is a three-step process. The first step, called ‘placement’, is done by depositing illicit funds in a business bank account. If one makes a cash deposit above $10,000, the bank is required to report the transaction to the government. The next step is called ‘layering’, wherein funds are moved from bank to bank and consolidated. The last step is ‘integration’, where the funds are reintroduced to the financial system as ‘clean money’.

The first defense against money laundering is the requirement on financial intermediaries to know their customers, often termed KYC (Know Your Customer). By knowing its customers, a financial intermediary will often be able to identify unusual or suspicious behaviors, including false identities, unusual transactions, changing behavior, or other indicators of laundering.
Placement rules should be able to detect deposit structuring by one or more individuals at various bank locations within a day or over time; notably this can also include ATMs. Large wire remittance customers, such as Money Service Businesses, will deposit cash more often and in greater volumes than typical customers.

Rules to detect large cash placements, using various methods and locations, in a single day, week, or month, would therefore be well-suited for monitoring.
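To make the structuring idea concrete, here is a minimal, hypothetical sketch of such a rule: flag a customer whose cash deposits, each individually under the $10,000 reporting threshold, add up to more than the threshold within a sliding window of days. The class, the Deposit event, and the window size are all illustrative; in the real application this logic would be expressed as rules in a BRE rather than hand-coded.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative structuring detector: aggregates cash deposits per customer
// over a sliding time window and fires when the total crosses the threshold.
public class StructuringDetector {
    static final double THRESHOLD = 10_000.0;
    static final int WINDOW_DAYS = 7;

    // One cash-deposit event; 'day' is the deposit date as a day number.
    record Deposit(String customerId, int day, double amount) {}

    private final Map<String, List<Deposit>> byCustomer = new HashMap<>();

    /** Feed one deposit event; returns true if it completes a suspicious pattern. */
    public boolean onDeposit(Deposit d) {
        List<Deposit> history =
                byCustomer.computeIfAbsent(d.customerId(), k -> new ArrayList<>());
        history.add(d);
        // Sum this customer's deposits inside the sliding window.
        double total = history.stream()
                .filter(p -> d.day() - p.day() < WINDOW_DAYS)
                .mapToDouble(Deposit::amount)
                .sum();
        // Each deposit stayed under the reporting threshold, but the
        // aggregate within the window exceeds it: possible structuring.
        return total > THRESHOLD;
    }
}
```

The same shape works for the daily, weekly, or monthly monitoring mentioned above: only the window size changes.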

Layering rules should identify bogus loans to offshore entities where the funds are never repaid, or where the loan is paid off in cash.

Current auditing happens manually: the auditor examines data output from legacy applications and searches for cash transactions over a period of time. The goal is to migrate to continuous monitoring with a software component that alerts auditors to suspicious activities. Banks need to be aware of all of the financial transactions that make up a money laundering scheme. The critical knowledge to manage is the associations between such transactions.


The first step of a business process modeling approach is to work on the business process.

The high level process can be seen as four steps:

  • Detect fraud pattern
  • Analyze pattern
  • Investigate customer
  • Report on fraud


The detection of ML patterns looks at different sources of information, like transactions, customer accounts, and loan servicing applications, searching for patterns of behavior leading to potential money laundering. Pattern detection is done with a time window constraint: a person making a cash deposit on a yearly or monthly basis may not be a money launderer, while a person making cash deposits regularly without business motivation may be fraudulent. The analysis is a sub-process which looks at the potentially fraudulent customer and searches for historical information or customer data points already gathered by the system. Investigation is, as of today, a human activity performed once the system reports a risky customer; its purpose is to complete the gathering of information on the customer. Money laundering is reported to the authorities once the investigation is completed and positive.

From this process we will evaluate, in upcoming posts, how to deploy an event processing engine and a rule engine for pattern detection, analysis, and investigation.

Friday, May 1, 2009

EDA and Rule Engine

I presented some time ago an architecture overview and use case for deploying a rule engine inside an Event Driven Architecture. Papers from analysts and other bloggers on that subject are predicting that EDA will become a hot topic in the next few months. I want to share what I found interesting.
Event Driven Architecture is an asynchronous publish-and-subscribe communication pattern: publisher applications send events to a mediation layer which notifies the subscribers interested in those events. The publisher is completely unaware of the subscribers. Components are loosely coupled in the sense that they only share the semantics of the message. The simplest Java implementation is based on JMS topics, as that is a natural API for publish-subscribe messaging.
The data carried in the message payload are events with business meaning. The goal of embracing EDA is to deliver real-time access to business data. This is not really an extension of SOA but a complement to it, as publishers may call services on event detection. But it can also be seen as orthogonal to SOA, since SOA uses a traditional procedural pattern built around synchronous, controlled orchestration of services.
Some are saying SOA is dead, replaced by EDA. Well, SOA is still a valid approach to designing IT architecture: it is not dead, and EDA complements it. One thing that I think makes EDA very attractive is the flexibility to add new functions and applications without impacting existing ones.
By the way, EDA is not new: one of the mainframe programming models was to have batch applications wait for the results of other batches before processing their own work. That is very close to subscribers waiting for events coming from publishers. At least we can say EDA is the distributed version of the old mainframe programming approach.

As soon as the existing applications in the IT landscape are able to post events, you have the flexibility to add or remove listeners to address new business needs.
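The decoupling described above can be sketched in a few lines. This is a minimal in-memory stand-in, not JMS: in a real deployment the mediation layer would be a JMS topic or another broker, but the same property holds, the publisher only knows the topic name and never the subscribers, so listeners can be added or removed freely.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory sketch of the publish-subscribe mediation layer.
public class EventMediator {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    /** A subscriber registers interest in a topic. */
    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    /** A publisher posts an event; the mediator notifies every subscriber,
        without the publisher ever holding a reference to any of them. */
    public void publish(String topic, String event) {
        for (Consumer<String> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(event);
        }
    }
}
```

Adding a new listener is a single subscribe call; nothing in the publishing code changes, which is the flexibility argument made above.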

So why is a rule engine a critical component within EDA? One deployment is to use a BRE to support the implementation of such a listener: instead of developing a rigid application, you use a BRE to bring agility inside the flexibility, and the component can be seen as a decision agent. The second interest is in the implementation of the event processing itself, which has to detect an event, process it, and take action.

We can see multiple levels of support for this event processing depending on the characteristics of the architecture and the type of event processing we are looking at. I see at least three:
- Simple event processing: the subscriber focuses on processing a few types of events with specific static conditions, and initiates actions such as creating a new event or calling a service. This processing can be real time or not; we may not need to consider any time dimension in the event.
- Event stream processing: events are ordered and arrive as a stream to the subscribers. The processing may involve time windows, count-based windows, time-based patterns... It is used to synthesize data in real time.
- Complex event processing: detects complex patterns of events, including events widely distributed in time and location of occurrence. It supports low latency, high throughput, and complex event management with aggregation, joins, stateful operators, 'event A followed by B followed by C', and all combinations thereof.
The technologies supporting each kind of processing are different, and it is important not to use one in place of another, or we will generate frustration.
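To illustrate the 'event A followed by B followed by C' idea from the complex event processing level, here is a deliberately tiny, hypothetical stateful matcher. A real CEP engine adds time constraints, aggregation, and joins; this sketch only shows the core of a sequence pattern: advancing through an expected order of event types while ignoring unrelated events in between.

```java
import java.util.List;

// Toy "A followed by B followed by C" matcher: stateful, order-sensitive,
// tolerant of unrelated events arriving between the pattern steps.
public class FollowedByMatcher {
    private final List<String> pattern;
    private int position = 0;

    public FollowedByMatcher(List<String> pattern) {
        this.pattern = pattern;
    }

    /** Feed one event type; returns true when the full sequence has been seen. */
    public boolean onEvent(String eventType) {
        if (eventType.equals(pattern.get(position))) {
            position++;                 // this event matches the next expected step
        }
        if (position == pattern.size()) {
            position = 0;               // reset so the matcher can fire again
            return true;
        }
        return false;
    }
}
```

In the AML context, the pattern could be the three laundering steps themselves (placement, then layering, then integration) observed for one customer, with other transactions interleaved.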