Tuesday, May 26, 2009

What does an Event Analyst do?

**** I am working on the images for this post. They will be along shortly! ****

Over the past several months, I have had the privilege of working with some of the top minds in event processing on a glossary of event processing terms.  Our work has centered around the mechanics and "clock-works" of CEP.  This is extremely important work, but my mind keeps moving toward the question of what an event analyst actually does. Perhaps it's my ADD, but what's a man to do?

The working group has a diagram of a fully developed event scenario.  We have used this diagram to aid in discussion and as a reference.  But again, my mind turns to the question of "how did an event analyst create this in the first place?"  In this post, I hope to work through that and perhaps evangelize some terms.

First off, what is an event analyst?  I see a specialized business analyst: a person who recognizes the stimuli that set things in motion.  Obviously, a business is replete with business and technical events.  For some of them, like a customer order, an RFID read, or a check paid, it is easy to bridge the gap between the real world and the virtual computer world that emulates it.  However, not all things that happen in the real world of business can be directly measured or captured.  How can you capture the event of fraud, or a car accident, except perhaps by a human entering information into a screen?  That is what an event analyst does: figure out how to capture events that are not directly capturable.

For those still reading, I want to illustrate an example of this, and perhaps you can help me make it better. Comments are appreciated, especially around the vocabulary of terms.  If something doesn't make sense or could be improved, please comment.  This is a work in progress, so I hold my opinion of it humbly.  The goal is to make it worthwhile, even if it changes from my original opinion.

========  Situation  =========
Emily, an event analyst, works for the state transportation and highways department. Her boss gives her the task of figuring out how they can catch more speeding motorists while decreasing the number of State Trooper patrols.  Getting her cup of Jamaican Blue Mountain, she gets to work.

First off, she knows that there is a business event object in the transportation system called speeding, which triggers a process in the court system that files the complaint with the clerk of courts, sends out the legally approved correspondence, and creates the case on the court's docket.  Currently, that event object is published when the patrol-person types the traffic ticket info into the patrol system.  In order to accomplish her task, however, Emily can't rely on a human observing the speeding violation.   But how can we get a computer to measure that?  As any good analyst would do, she starts to draw a picture. (Well, I would, but I am an architect, which I believe means picture-drawer.)

*** Draw a picture of the two speeding events in the real and emulated worlds ***

When she considers the "real-world" event of speeding, a couple of things come to mind.  First off, what object is speeding?  The car.  And what does it mean "to speed"?  It means that the object (the car) is moving faster than the legal limit for that section of road.   Emily realizes that the car has a velocity, which indicates the speed it's traveling, but also, thinking back to her calculus classes about integrals, that at each moment in time the car is at a certain position.  Since an event is a state change of an object, in a sense, the car has a stream of state changes in terms of its position. And the speeding event would change state to speeding or not speeding whenever the velocity passed the legal limit at that location, one way or the other.  Emily adds this event stream to her diagram.

*** Add an event stream for the movement to the picture ***
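To make that notion of a derived state change concrete, here is a minimal sketch in Python (not any particular CEP product, and every name in it is mine, invented for illustration). The point is only that "speeding" is inferred from the movement stream rather than observed directly:

```python
# A minimal sketch of turning a stream of movement events into "speeding" state changes.
# The event field names and the speed_limit_at() lookup are hypothetical.

def speeding_state_changes(movement_events, speed_limit_at):
    """Yield (timestamp, plate, now_speeding) each time a car's speeding state flips."""
    state = {}  # plate -> currently speeding?
    for e in movement_events:  # each e: {"plate", "timestamp", "location", "velocity"}
        is_speeding = e["velocity"] > speed_limit_at(e["location"])
        if state.get(e["plate"]) != is_speeding:
            state[e["plate"]] = is_speeding
            yield (e["timestamp"], e["plate"], is_speeding)
```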

Emily understands that there is a relationship between speeding and movement, which is called an attribute relationship, but how does that help her get the speeding event object in the patrol system to be published?  Perhaps she can figure out a way to capture the movement.

As she is pondering this, she remembers that Homeland Security had mounted high-speed traffic cameras around some critical infrastructure.  Each has the ability to take high-speed digital images of three lanes of traffic and capture the license plates of cars.  In Homeland Security's case, they used the information to track the comings and goings of cars around the infrastructure, but Emily had an idea.  The raw event objects that were created had a timecode and a highly precise location.  Being a good event analyst, she reused what was available, subscribed to the raw event objects, and started watching.  She knew that the individual movement event caused the camera to capture the license plate, encapsulating it in the raw event object.

*** Add the camera event object, the causal relationship between the movement and camera. ***

Homeland Security had 50 of these cameras set up in the metro area, all of them at intervals along the roads around some critical areas.  By measuring the time it takes a car (as identified by the license plate) to travel from one camera to the next, it is easy to calculate its speed by dividing the distance by the time.  This creates an inference relationship between the set of cameras used to detect the movement of the car and the real-world speeding event.
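(To make the arithmetic concrete with numbers of my own: if two cameras sit two miles apart and the same plate shows up at both 90 seconds apart, the car averaged 2 miles ÷ 0.025 hours = 80 mph over that stretch.)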

Emily wrote some CEP script that calculated the time differences between the raw event objects for a particular car.  With that information, she was able to publish the speeding event object, which acted as a stimulus to the rest of the legal process.
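Her script could have been written in any number of ways; the sketch below is just my rough Python rendering of the idea, with the event fields, the camera_distance() helper, and publish_speeding() all invented for illustration:

```python
# Rough sketch (mine, not Emily's actual CEP language) of the correlation logic.
# Event fields, camera_distance() and publish_speeding() are hypothetical.

last_sighting = {}  # plate -> (camera_id, timestamp in seconds)

def on_camera_event(event, camera_distance, speed_limit, publish_speeding):
    """Handle one raw camera event object: {"plate", "camera_id", "timestamp"}."""
    plate, camera, now = event["plate"], event["camera_id"], event["timestamp"]
    if plate in last_sighting:
        prev_camera, prev_time = last_sighting[plate]
        hours = (now - prev_time) / 3600.0
        if prev_camera != camera and hours > 0:
            speed = camera_distance(prev_camera, camera) / hours  # miles per hour
            if speed > speed_limit:
                # Publish the same speeding event object the patrol system would have,
                # so the downstream court process doesn't know or care about the source.
                publish_speeding(plate=plate, speed=speed, location=camera, timestamp=now)
    last_sighting[plate] = (camera, now)
```

Whatever the actual syntax, the shape is the same: correlate raw event objects by license plate, infer the speed, and publish the existing business event when the threshold is crossed.  Her finished diagram looked like so: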



Sunday, May 10, 2009

SOB - Service Oriented Business

I was having lunch with a friend and he asked me: "What is a simple definition for SOA that I can tell people when they ask?"  I thought about that, and all the answers I came up with centered around "how do we build it."

When I googled it, I got lots of answers, but they were very technical in nature or used "service" in the definition:

OASIS: A paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains. It provides a uniform means to offer, discover, interact with and use capabilities to produce desired effects consistent with measurable preconditions and expectations.


IBM: Service Oriented Architecture (SOA) is a business-centric IT architectural approach that supports integrating your business as linked, repeatable business tasks, or services.

Gartner: An application topology in which the business logic of the application is organized in modules (services) with clear identity, purpose and programmatic-access interfaces. Services behave as "black boxes": Their internal design is independent of the nature and purpose of the requestor. In SOA, data and business logic are encapsulated in modular business components with documented interfaces. This clarifies design and facilitates incremental development and future extensions. An SOA application can also be integrated with heterogeneous, external legacy and purchased applications more easily than a monolithic, non-SOA application can.

None of these definitions were something I could memorize and regurgitate when a vice-president asked "What's this SOA thing?"

My definition is simple: "SOA is a way of organizing work that maximizes the consumer/provider metaphor."  A friend suggested I replace "metaphor" with "relationship."

As I was thinking about this definition, I was studying agile programming, and there was a reference to types of companies: function-oriented vs. project-oriented were what the book was referring to.  My brain said, "What about service-oriented?"

Can we make a service-oriented business (SOB)?  Certainly we have thought of that acronym for employees of a business who incorrectly believe they are service-oriented.  But can we actually make a business that is completely service-oriented in the consumer/provider metaphor?  What would such a business look like?

Obviously, at the top are services that are provided to other entities, which they (presumably) pay to consume.  Internal to these services, some work is accomplished.  But there are probably underpinning services as well: services that we rely on to make our service perform, stay stocked, etc.  And those services have underpinning services, and so forth.

But if we consider SOA as maximizing the consumer/provider relationship, we could consider each underpinning service as its own business, a business that "sells" its own products and services.  As a network of these arises, we can see value flowing from the end consumer through our network of inter-related services.

What I find interesting about this concept is that it is value-based.  A service invests its income in providing its customers what they want.  I know we have heard the "treat other employees as customers" rhetoric hundreds of times.  But if we truly created a business as a network of services, would that change?

One other interesting thought I have about this network of services: it is ripe for outsourcing.  Suppose we divided a company into logical, semantically accurate functionality [work], considered the "service" as an internal provider of that work, and then found out an external party could offer that same service at a better value/cost ratio.  Wouldn't that inspire better investment into that service internally?

Said a different way: couldn't this help us meet our goal that all work necessary for a business to operate should be provided by an entity whose core competency is doing that particular work?

Just some things to think about.


Saturday, May 02, 2009

Other types of functionality... where does it go on the cloud

I have been curious about how the cloud will evolve.  We have picked basic functionality to start the cloud: queuing, storage, instruction processing, and network are the blocks that the rest of IT is built upon.  As an industry, those blocks have been, or are being, solidified.  What I would like to know is what is next to be built, or perhaps next after that.  As usual, I have an opinion, but it will, also as usual, require a bit of context.  I hope the conclusion will be worth it!

My contention is this: "the cloud" is a mechanism to enable services of commodity functionality [which is another name for work], and as such, a well-understood taxonomy or organization of this commodity functionality will need to be developed in order for higher-level functionality to prosper in the cloud.

To understand this, understand that functionality can be described as: Core (value-adding), Unique (necessary, but with no provider we trust), Commodity (necessary, with providers we trust), and Extraneous (not necessary).  Core competencies (functionality), as described by Jack Welch, are the work a company focuses on being the best at to set itself apart from its competitors.  Unique is work that needs to be done but doesn't set us apart directly, and there aren't providers we can trust to do it.  Commodity is work that needs to be done that doesn't provide differentiation, but there are providers who can do it.  Extraneous is waste in the lean six-sigma sense.

In real-world products, we have seen businesses outsource accounting, payroll, manufacturing, sales, and IT, following Jack Welch's advice to focus on core competencies and allow others to offer a service which encapsulates the work that isn't core to your value proposition.  This makes sense. It leads a company to focus, for the most part, only on things that matter to its bottom line.  However, a company will still need to do unique functionality because no one else can.  Our goal should be to convert, as much as is advisable in terms of security, functionality from unique to commodity, even potentially creating new industries to provide that functionality.  Stated differently: all work that is necessary for a business to operate should be provided by an entity whose core competency is doing that particular work.

So, applying this to information technology, there are algorithms that are core, unique, commodity, and waste.  If we follow the same advice, our efforts should be focused on our IT's core competency.  Work that is unique we should plan to commoditize.  Commodity work should be done by others, and waste should be eliminated.  But how do we make this happen?

With the advent of services and well-understood interfaces, it has become operationally easier to separate the core from the commodity services.  The cloud started from the ground up, with storage, database, computing power, and networking: functionality that all systems share and that has stable, mature, well-understood interfaces.  This is not really due to the efforts of the cloud community, for they stood on the shoulders of giants. Rather, it was the efforts of IT vendors and standards groups that standardized the interfaces, allowing the creation of their products.  Remember, these standards allowed the syntactic ambiguity, and also, to the extent possible, the semantic ambiguity, to disappear.  However, this canonization effort is expensive and time-consuming, and a pre-ripe standard may stifle innovation too soon.

Higher-level functionality such as business rules, calculation, and decision logic may indeed be capable of becoming a commodity.  However, trust, the ecosystem of providers, the organization of functionality, and the interface standards for the functionality are immature.  This traps potential commodity functionality as unique, or it requires that a business purchase a large package that stands in for the undeveloped ecosystem, which can hinder your IT organization from enabling business differentiation, which should be its primary objective.

SAP and other large package vendors will tell you that they are opening up to allow highly customized processes and manipulation of data.  They will even allow you to call a web service from an external (to SAP) provider (which will probably be you!) and use the response in your process.  So, in essence, SAP (and all the other ones too) has turned into a platform that provides a rich array of functionality and process to accelerate the mapping of your emulated business to your real business.  This platform can become the focal point of your business.  Is this new? No.  Is this bad? No.  But the ecosystem needs to extend out, potentially even replacing parts of the rich array of functionality, or perhaps replacing the platform.

A business using a package should ask itself what the core competency of its large package software is, and map that answer to the previous goal: all work that is necessary for a business to operate should be provided by an entity whose core competency is doing that particular work.  No vendor can provide all functionality to all types of parties as part of its core competency, at least not without detriment to some of the functionality.  So, how do we build an ecosystem that will allow best-of-breed augmentation using external functionality?  How will we know who has what?  And how difficult will it be to have multiple providers, or to switch providers?  Asked another way, how do we abstract the functionality out from the provider?

These, I believe, are the core questions prompting us to make higher-order functionality available on the cloud.  We tried UDDI, but there wasn't a good taxonomy developed to describe where the functionality fit.  Nor was there a good semantic ontology developed to describe the interfaces.  Nothing helped us be agnostic to the physical interface.

So my solution:

Imagine, in a folksonomy-like way, a provider builds a version of their functional taxonomy.  This would include the semantics of what the functionality does, as well as the semantics of the interface.

This world would also include a more generic taxonomy, built with the help of linguists and computer scientists, that is semantically accurate.  These taxonomies will probably be built along industry-segment lines.  The company owning this intellectual property will work with individual providers to help them map their own taxonomies to the more generic taxonomies, potentially across industry boundaries.  Where holes appear, the company will modify the generic taxonomies to make them a more complete and living entity.

The beauty of this is in what happens when a company wants to consume commodity functionality.  They search the functional taxonomy to find what they need.  Their platform/package provider will jump-start their own taxonomy, and the individual consumer will modify it to match their particular implementation.  The individual company will have a more specialized functionality map for its business than its package provider provides, because providers publish what functionality they offer and a match can be found at run-time.  Also, because the interface is semantically tagged, the consumer can provide the correct information, based on its data's association with the generic taxonomy, at run-time and convert the response to its own format.
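As a thought experiment (every concept, semantic tag, and provider name below is invented for illustration), the run-time match could be as simple as this Python sketch:

```python
# Toy sketch of run-time matching against a generic functional taxonomy.
# All names here (concepts, tags, the provider) are hypothetical.

registry = []  # each entry: provider name, taxonomy concept, semantic tag -> provider field

def register(provider, concept, field_tags):
    registry.append({"provider": provider, "concept": concept, "fields": field_tags})

def find_provider(concept):
    """Return the first registered provider offering the requested taxonomy concept."""
    return next((e for e in registry if e["concept"] == concept), None)

def map_request(entry, tagged_data):
    """Translate the consumer's semantically tagged data into the provider's field names."""
    return {provider_field: tagged_data[tag]
            for tag, provider_field in entry["fields"].items()
            if tag in tagged_data}

# A provider publishes "credit check" functionality, tagged to the generic taxonomy.
register("AcmeCredit", "finance/credit-check",
         {"party/tax-id": "ssn", "party/legal-name": "applicant_name"})

# The consumer asks by concept and semantic tag, never by physical field names.
entry = find_provider("finance/credit-check")
request = map_request(entry, {"party/tax-id": "123-45-6789",
                              "party/legal-name": "Pat Example"})
print(entry["provider"], request)
```

The point is that the consumer never binds to the provider's field names directly; the shared taxonomy and semantic tags do the translation.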

Therefore, the physical interface doesn't need to be well understood, just semantically understood, which has been a huge problem all along: we don't, as a general rule, semantically tag our information to an authoritative source.  In the same vein, we could do the same for describing nouns, processes, and events.  It is organizing our functionality (work) and our information that will allow our trapped unique functionality to become commoditized and therefore meet our goal.

I think a company that can build and maintain the taxonomy so that semantic accuracy can be a first-order principle in software design would be an awesome place to work.  As the cloud grows and encapsulates higher-order functionality, semantics will be foremost!  Well, until it itself is commoditized.

Anyone want to help me build this company?