Why I love Apache Velocity

This was originally due to be entitled "MQTT 5.0 analytics platform, greenfield project build, part three" but I thought I would go for a snappier title.

See also MQTT 5.0 analytics platform, greenfield project build, part one and MQTT 5.0 analytics platform, greenfield project build, part two.

Apache Velocity is a templating library that helps with meta-programming: code that writes code. This is a slightly unusual pattern which takes some getting used to.

Let's consider a simple case.  You want to create a data class to store a person's first name and surname.

In pseudo-code this might look like this:

class Example {

    private String surname;
    private String firstname;

    public Example(String surname, String firstname) {
        this.surname = surname;
        this.firstname = firstname;
    }

    public String getFullName() {
        return this.firstname + " " + this.surname;
    }
}

That's pretty simple; no need for anything fancy. A couple of lines in an XML file would be enough to specify this payload, plus some code on top to call the Velocity functions.
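To make that concrete, here is a sketch of what a Velocity template for the class above might look like. The file name (person.vm) and the context variables ($className, $fields) are illustrative assumptions; the generator code would supply them via a VelocityContext before merging the template.

```velocity
## person.vm -- hypothetical template; $className and $fields
## are placed into the VelocityContext by the generator code
public class $className {

#foreach( $field in $fields )
    private String $field;
#end

    public ${className}(#foreach( $field in $fields )String $field#if( $foreach.hasNext ), #end#end) {
#foreach( $field in $fields )
        this.$field = $field;
#end
    }
}
```

With the two-field example above, merging this template would emit essentially the Example class shown earlier; adding a field to the context regenerates the class with no hand editing.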

But what if you have to create a series of classes to persist data in a specific format?

If you have a look at this link to GitHub you can see an XML file that defines some FIX message formats. Using Apache Velocity and some fairly simple code, it's possible to create a complete set of FIX message classes with minimal fuss.
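As a minimal sketch of what sits under the hood of such a generator (class and method names here are illustrative, not from the FIX project): turn a field list, which in practice would be read from the XML message definitions, into Java source. Velocity's job is to replace this kind of manual string-building with a template.

```java
import java.util.List;

// Sketch of a code generator: produce Java source for a data class
// from a list of field names. In a real build the field list would be
// parsed from an XML spec and the output written to a .java file.
public class ClassGenerator {

    static String generate(String className, List<String> fields) {
        StringBuilder src = new StringBuilder("public class " + className + " {\n");
        for (String f : fields) {
            // One private String member per declared field
            src.append("    private String ").append(f).append(";\n");
        }
        src.append("}\n");
        return src.toString();
    }

    public static void main(String[] args) {
        System.out.println(generate("Person", List.of("surname", "firstname")));
    }
}
```

Running this prints a compilable (if bare-bones) Person class; the point is that the class definition now has a single source of truth, the field list.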

The beauty of this is that, with some time investment, a true polyglot API can be constructed. Consider a typical API model in current use (such as Bitfinex):

A RESTful framework with 

  • a number of REST endpoints expressed as JSON payloads and URLs
  • a number of JSON payloads distributed over websockets

In order to use this framework a client application would have to be able to send requests over REST and to receive responses over both channels. In a typical programming model the JSON payloads would be parsed to and from instances of classes.

Consider that clients will connect using a range of languages (in no particular order), and that each has different datatype representations. Rather than making the REST/WS document the end-point, why not move up the food chain and supply the framework classes to and from which the JSON payload can be parsed?

Using Apache Velocity, the class files can be automatically created from a canonical representation in whichever format is convenient: another language, an XML file, or a relational (or other) database DDL file.

As we have seen with FIX Orchestra: if you make adoption of your API easy, you make the business of connecting to people easier. And you make the process of staying connected through planned upgrades much easier.

The relevance here is that for the MQTT project (part one and part two) we take a payload from a remote sensor and pass it through an MQTT client, MQTT server and finally to a persistence endpoint (Shakti, kdb+, memory-mapped files, flat file).

When persisting, if the database table structure definition has already been auto-generated from the underlying payload representation then this removes one category of errors and can reduce testing overhead and time-to-market. We'll cover an actual example of this in a later post...
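As a sketch of that idea (the method and table names are illustrative), the same canonical field description that drives the class generation can also drive the table DDL, so the payload classes and the persistence schema cannot drift apart:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: derive a CREATE TABLE statement from the same canonical
// field map the class generator consumes, keeping the schema and
// the payload classes in sync by construction.
public class DdlGenerator {

    static String createTable(String table, Map<String, String> columns) {
        StringBuilder ddl = new StringBuilder("CREATE TABLE " + table + " (");
        boolean first = true;
        for (Map.Entry<String, String> col : columns.entrySet()) {
            if (!first) ddl.append(", ");
            ddl.append(col.getKey()).append(' ').append(col.getValue());
            first = false;
        }
        return ddl.append(");").toString();
    }

    public static void main(String[] args) {
        Map<String, String> cols = new LinkedHashMap<>();
        cols.put("surname", "VARCHAR(64)");
        cols.put("firstname", "VARCHAR(64)");
        System.out.println(createTable("person", cols));
        // prints: CREATE TABLE person (surname VARCHAR(64), firstname VARCHAR(64));
    }
}
```

In the Velocity version this method becomes a second template over the same context, which is where the reduction in testing overhead comes from: one representation, several generated artefacts.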

And a presentation on FIX Orchestra to give some understanding...