Buy-side compliance systems: Still 20th Century technology?

Most buy-side compliance systems are not truly real-time.  What does that mean? A genuine real-time system is event-driven; most existing systems are not.
So let's work through an example:

A buy-side portfolio contains European equities and cash in a number of currencies.  A compliance rule states that CHF-denominated assets may make up no more than 5% of the portfolio's value.

How does a buy-side compliance system manage this?

One way would be this:

A server holds the position data for the portfolio and receives real-time updates of portfolio trades, position prices and foreign exchange rates.  On each update:
  1. Each position is valued in real time.
  2. Each position value is converted into the portfolio's base currency.
  3. The converted position values are summed to find the overall portfolio value.
  4. Each position value is divided by the overall portfolio value to find the percentage of the portfolio held in each position.
  5. All CHF-denominated asset values are summed and divided by the overall portfolio value to find the CHF exposure in real time.
If, during the course of asset-price or foreign exchange fluctuations, the 5% limit is breached, an alert is generated and distributed to various channels (email, SMS, website, message queues, log files).
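The event-driven model above can be sketched in a few lines. This is a minimal illustration, not a real compliance engine: the position store, the function names and all the numbers are invented, and a production system would consume events from a message bus rather than direct function calls.

```python
# Hypothetical in-memory book; names and figures are illustrative only.
positions = {}              # asset_id -> (quantity, price, currency)
fx_rates = {"EUR": 1.0}     # currency -> rate into the base currency (EUR)
CHF_LIMIT = 0.05            # the 5% rule from the example

def check_rule():
    """Revalue the whole portfolio and re-test the CHF rule on every event."""
    total = chf = 0.0
    for qty, price, ccy in positions.values():
        value = qty * price * fx_rates[ccy]   # position value in base currency
        total += value
        if ccy == "CHF":
            chf += value
    if total > 0 and chf / total > CHF_LIMIT:
        return f"ALERT: CHF exposure {chf / total:.1%} breaches {CHF_LIMIT:.0%}"
    return None

def on_position(asset_id, qty, price, ccy):
    """Trade or price tick: update the position, then re-check."""
    positions[asset_id] = (qty, price, ccy)
    return check_rule()

def on_fx(ccy, rate):
    """FX tick: update the rate, then re-check."""
    fx_rates[ccy] = rate
    return check_rule()

on_position("EQ1", 1000, 95.0, "EUR")       # EUR 95,000 of equity
on_fx("CHF", 1.04)
on_position("CASH_CHF", 4800, 1.0, "CHF")   # CHF cash, just under the limit
print(on_fx("CHF", 1.10))                   # CHF strengthens -> alert printed
```

The point of the design is that every trade, price tick and FX tick triggers the rule check, so a breach is flagged at the moment it occurs rather than at the next sampling point.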
Another way would be this:
A server process runs in response to a user key press or a scheduled batch job.  It performs a one-time valuation and rule-calculation run, and any rule breaches are then distributed as above.
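For contrast, the batch model amounts to running that same one-shot job on a timer. A minimal sketch using the standard-library scheduler, where `valuation_job` is a hypothetical stand-in for the full valuation-and-rules run:

```python
import sched
import time

def valuation_job(log):
    """Stand-in for the one-shot valuation and compliance run."""
    log.append("ran valuation and compliance checks")

def schedule_batches(interval_s, runs):
    """Run the job `runs` times, `interval_s` seconds apart, then exit."""
    log = []
    scheduler = sched.scheduler(time.monotonic, time.sleep)
    for i in range(runs):
        scheduler.enter(i * interval_s, 1, valuation_job, (log,))
    scheduler.run()   # blocks; the server is otherwise idle between jobs
    return log

# Three "daily" jobs, compressed to 10 ms apart for illustration:
jobs = schedule_batches(0.01, 3)
```

The portfolio is only examined at the scheduled instants; everything that happens between two runs is invisible to the system.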
The key-press/scheduled batch job model is predominant within the current set of buy-side compliance systems. Why does this present a problem?
  1. Within a batch job paradigm there is a problem of data sampling points.  Simply put, in times of market volatility a portfolio may breach a compliance rule, yet the breach goes unnoticed because no batch job is scheduled to run at the time it occurs.  The portfolio may be compliant again by the time the next batch job runs.  The issue, of course, is that the signal of the intra-observation breach is lost, so the asset manager does not appreciate how close the portfolio is coming to an observed breach.
  2. Furthermore, the batch job paradigm puts an uneven processor load on infrastructure - the servers spin up, run the jobs, then sit idle until the next batch job.  This is of course a sub-optimal use of infrastructure - ideally a firm would run a datacentre at a continuous 60-80% processor load.  Running batch jobs instead means provisioning extra infrastructure to fit each job into its window.  It's 20th Century IT systems pretending to be contemporary.
  3. Batch job failures that require intervention often lead to a series of jobs being abandoned, since there is no spare compute to re-run the failed job before the next scheduled job is due to start.
  4. Batch jobs give a backward-looking view of portfolio compliance, with no predictive analytics built in.  Predictive analytics is where the buy-side compliance system needs to get to in order to retain relevance.
  5. Batch job compliance systems, in normal usage, are structured to generate false positives rather than miss any breach.  This requires additional operational compliance personnel to check all of the potential breaches.
  6. True real-time calculation of portfolio valuations, and of derived figures such as percentage of portfolio, will also generate meaningful data for risk analysis of portfolios.
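The sampling-point problem in point 1 is easy to make concrete with invented numbers. Suppose the CHF share of the portfolio spikes above 5% intraday but is back under the limit when the batch runs; a per-tick check catches it, the batch does not:

```python
# All figures are invented for illustration.
LIMIT = 0.05
chf_share_ticks = [0.048, 0.049, 0.053, 0.051, 0.047]  # intraday CHF share
batch_sample_indices = [0, 4]                          # batch runs at open and close

batch_breaches = [i for i in batch_sample_indices if chf_share_ticks[i] > LIMIT]
tick_breaches = [i for i, share in enumerate(chf_share_ticks) if share > LIMIT]

print(batch_breaches)  # []      -> the batch jobs report a compliant day
print(tick_breaches)   # [2, 3]  -> the per-tick check catches the breach
```

The batch system reports a clean day; the intra-observation breach, and the information it carries about how close the portfolio is running to its limit, is simply lost.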
The future buy-side compliance system will look radically different to the present generation.