
Hands-on Reactive Microservices with RedElastic Commerce

May 25, 2017

RedElastic Commerce is an application to help developers understand reactive programming and microservices with Play and Akka. Our inspiration is Spring's PetClinic.

In the spirit of PetClinic, we’ve put together this early example of a functional application that demonstrates many different approaches for building reactive systems. We’re releasing RedElastic Commerce early and will be incrementally enhancing it over time, so if you’re looking to get some hands-on experience with reactive programming follow the project on GitHub, fork it, or contribute directly!

This article will describe the current state of the application, its architectural features, and the approaches we are using that will allow us to transform the application from a monolith into microservices later.

Monolith First with Onion Architecture

The application is built with Play for the API layer and is carefully crafted using the principles of Domain-Driven Design and Onion architecture.

Domain-Driven Design

Domain-Driven Design (DDD) is an architectural style which makes the business domain model a first-class entity in the development of software.

Practices such as event storming are conducted collaboratively with the domain experts to first understand the business domain, and then to map it to structural models of entities and events. The software is built to reflect these models. The result is that the software reflects the language that the business uses, which makes it possible for technologists and business owners to speak using the same terminology.

To help reduce complexity in the software, bounded contexts are defined to isolate and understand smaller parts of the domain. In the software, these bounded contexts only talk to each other through an application layer to prevent inappropriate coupling that adds complexity to the software.
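As a tiny illustration of this rule (the names here are ours, not from RedElastic Commerce), two bounded contexts might only ever meet inside an application-layer service:

```java
import java.util.ArrayList;
import java.util.List;

// One bounded context: the cart, with its own model of its contents.
class CartContext {
    private final List<String> items = new ArrayList<>();
    void addItem(String sku) { items.add(sku); }
    List<String> items() { return new ArrayList<>(items); }
}

// Another bounded context: billing owns its own pricing model and
// never reaches into the cart's internals.
class BillingContext {
    long priceCentsFor(String sku) { return 999L; } // stand-in price
}

// The application layer is the only place where the two contexts meet,
// which keeps each context free to evolve independently.
class CheckoutService {
    private final CartContext cart;
    private final BillingContext billing;

    CheckoutService(CartContext cart, BillingContext billing) {
        this.cart = cart;
        this.billing = billing;
    }

    long totalCents() {
        return cart.items().stream().mapToLong(billing::priceCentsFor).sum();
    }
}
```

Neither context imports the other; only `CheckoutService` knows both exist, so replacing one context later doesn't ripple into the other.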

For a deeper dive into DDD, we highly recommend Domain-Driven Design Distilled by Vaughn Vernon.

Onion architecture

Once the business domain is modelled, other layers of the application, such as the presentation layer and infrastructure concerns – e.g., monitoring – start to become necessary to build around the core domain.

It’s unfortunately very common to see business logic in the other layers, especially the presentation layer, which goes against the core principles of Domain-Driven Design. Keeping layers pure helps us to overcome complexity in software. Onion architecture helps prevent bleeding of the business logic out of the core domain into other layers by defining a unidirectional dependency from outer layers into the core domain. It becomes easy to identify if logic exists in the wrong area of the application by following simple conventions.

Also see Hexagonal Architecture.
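A minimal Java sketch of the dependency rule, with illustrative names of our own: the core owns the repository interface, and the infrastructure layer implements it, so dependencies only ever point inward:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Core layer: owns the "port" and knows nothing about storage technology.
interface CartRepository {
    void save(String cartId, int itemCount);
    Optional<Integer> find(String cartId);
}

// Core layer: domain logic depends inward on the interface only.
class CartService {
    private final CartRepository repository;

    CartService(CartRepository repository) { this.repository = repository; }

    void addItem(String cartId) {
        int count = repository.find(cartId).orElse(0);
        repository.save(cartId, count + 1);
    }
}

// Infrastructure layer: implements the core's interface. The dependency
// points from the outer layer into the core, never the other way around.
class InMemoryCartRepository implements CartRepository {
    private final Map<String, Integer> store = new HashMap<>();

    @Override
    public void save(String cartId, int itemCount) { store.put(cartId, itemCount); }

    @Override
    public Optional<Integer> find(String cartId) {
        return Optional.ofNullable(store.get(cartId));
    }
}
```

Swapping the in-memory repository for a real datastore later touches only the infrastructure layer; the core compiles unchanged.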

RedElastic Commerce Architecture

Starting a system with microservices is an anti-pattern. Microservices are essentially a refactoring technique that should be applied to a monolithic system in order to evolve it to a state where maintenance and runtime costs are reduced. Systems that start as microservices can easily become over-complicated because it’s very difficult to understand the ideal boundaries within a system until it’s somewhat mature.

With this in mind, we have designed RedElastic Commerce as a monolithic Play and Akka application. Over time, we will fork it and begin the migration to microservices to demonstrate the process.

There are a few packages to note:

  • Controllers: Contains the outermost layer for the application's API.
  • Infrastructure: Contains anything non-domain-oriented that supports the application's functionality.
  • Core: Represents the core of the application, the rich domain model.

The purpose of the Onion architecture is that the outer layers – controllers and infrastructure – can be aware of the core, but the core should never be aware of the infrastructure. This ensures extremely loose coupling between the bounded contexts, allowing us to easily refactor them into their own services in the future. Onion architecture also helps us identify, early on, poor architectural choices that would couple components together later.

Things are never simple when it comes to breaking apart a monolith into individual services, as described in this article by Stefan Tilkov. We hope to demonstrate that careful application architecture can reduce corruption in design and prevent these tangled webs from occurring in the first place, even in a monolithic application.

Play and Akka both support Java, which is a common choice for enterprise systems development. Java 8 is a significantly improved language over Java 7, and Java 9 promises even more improvements. Any Scala developer will easily be able to read the code.

There are a few tricky areas when building on the Lightbend Reactive Platform with Java, in particular around using immutable messages with Akka. You'll notice that instead of using Collections.unmodifiableList and friends, we are using true immutable collections in messages via vavr, formerly known as Javaslang (if you look carefully, vavr is "Java" upside down). This is a much safer approach: unmodifiable collections are merely read-only views, so any code still holding the original reference can mutate them, which is something we need to avoid.
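A quick stdlib-only demonstration of the pitfall (the vavr collections avoid it by being immutable values rather than views over a mutable list):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class UnmodifiableDemo {
    // Returns the size an "unmodifiable" view reports after the original
    // list has been mutated behind it.
    static int sizeSeenThroughView() {
        List<String> items = new ArrayList<>();
        items.add("laptop");

        // A read-only view over the SAME backing list, not a copy.
        List<String> view = Collections.unmodifiableList(items);

        // Mutation through the original reference still succeeds...
        items.add("phone");

        // ...so the view has silently changed underneath its holder.
        return view.size();
    }

    public static void main(String[] args) {
        System.out.println(sizeSeenThroughView()); // prints 2
    }
}
```

If `view` were the payload of an actor message, the sender could keep mutating it after the message was dispatched, breaking the immutability contract Akka relies on.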

Event Sourcing

When we query a traditional 3-tier application for the state of an entity, the application will go to a datastore to retrieve the current state. Datastores typically store data as mutable records.

With Event Sourcing, we instead store any events associated with state change in the datastore. This allows us to effectively move the source of truth into the application – the application will store all relevant values (such as a bank balance) while the journal will store all of the transactions that lead to that balance. The backing storage engine, then, contains a history of events for an entity so that we can replay those events to recover the current application state in the event of a failure. This gives added flexibility as we can analyze the logs, or even retroactively apply changes to the entity and simply replay the history to get the correct current state.
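The idea can be sketched in a few lines of plain Java (an illustrative toy, not the project's cart code): the journal of signed deltas is the source of truth, and the current balance can always be recomputed by replaying it:

```java
import java.util.ArrayList;
import java.util.List;

// Toy event-sourced entity: state is never stored directly; every change
// is recorded as an event, and state is a fold over those events.
class EventSourcedAccount {
    private long balanceCents = 0;
    private final List<Long> journal = new ArrayList<>();

    void deposit(long cents)  { persistAndApply(cents); }
    void withdraw(long cents) { persistAndApply(-cents); }

    private void persistAndApply(long delta) {
        journal.add(delta);     // 1. persist the event
        balanceCents += delta;  // 2. apply it to in-memory state
    }

    long balanceCents() { return balanceCents; }

    List<Long> journal() { return journal; }

    // Recovery after a crash: fold the stored events back into state.
    static long replay(List<Long> journal) {
        long balance = 0;
        for (long delta : journal) balance += delta;
        return balance;
    }
}
```

Because the journal is append-only, the full history is retained: we can audit it, or change the fold and replay to derive a corrected state.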

You'll note that the Cart is using event sourcing. Unfortunately, some infrastructure code must be present for this to work (link), but it's very minimal relative to other persistence patterns such as Active Record.

Akka Persistence helps us abstract away all of the details of the persistence journal. The end result is state and behavior existing together, with the source of truth in the application and the datastore acting only as a log of events in case recovery is needed.

Event Sourcing is an excellent approach for fighting both Anemic Domain Models and complex locking mechanisms, as we can deal with messages synchronously and atomically in the application. The application is actually using a flavour of event sourcing called command sourcing, as we persist the commands; the persisted commands will eventually be replaced by proper domain events, which will then likely be emitted through a pub/sub mechanism such as Kafka.

It may seem that event sourcing would be overkill for a cart. In fact, the cart is an excellent target for trialing event sourcing within a team precisely because a shopping cart can be treated as an ephemeral entity and freely destroyed. The code looks simple, but there's a lot of hidden complexity between the journal and the entity that is important for a team to understand. The journal contains serialized events, and those events may change over time, so learning which approaches work for your team is important, while any mistakes discovered in handling the production journal are non-critical because the cart journal can be deleted without serious impact. Other, less-trivial contexts can adopt event sourcing once the team has learned how best to deploy the approach.

In the example application, Java serialization is used, so any changes to events will break the ability to replay old commands from the datastore. This is not appropriate for production. In a future update we'll switch to a more suitable tool; as we're using Java, we'll likely use Kryo with the CompatibleFieldSerializer to allow backward and forward compatibility. Protocol Buffers, a more explicit format, are another option.

Akka Cluster

To support near-linear scalability of the cart, we’ve employed Akka Cluster to shard the carts across running servers. The application can be scaled up and down without interruption to the service; Akka will reshard the entities when the cluster composition is changed.
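Under the hood, sharding needs a stable function from an entity's id to a shard id; the message extractor you hand to Akka typically computes something like the following (an illustrative sketch of the common hash-modulus approach, not the project's exact code):

```java
public class ShardIds {
    // Must stay fixed for the lifetime of the cluster, and is usually
    // chosen to be several times the maximum expected number of nodes.
    static final int NUMBER_OF_SHARDS = 30;

    // Stable mapping from a cart id to a shard id. Taking the modulus
    // before Math.abs keeps the result safe even if hashCode() returns
    // Integer.MIN_VALUE (whose absolute value overflows).
    static String shardId(String cartId) {
        return String.valueOf(Math.abs(cartId.hashCode() % NUMBER_OF_SHARDS));
    }
}
```

Every node computes the same shard id for the same cart, so messages for a given cart always reach the single node currently hosting its shard; on resize, whole shards migrate rather than individual entities.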

Because we wanted to target deployment in RedElastic’s DC/OS lab, we used ConstructR with the ZooKeeper backend to bootstrap the cluster as using seed nodes inside an orchestration cluster like Kubernetes or Mesos would be quite difficult.

Infrastructure using Mesos, Marathon, and DC/OS

The example application was designed for deployment to DC/OS. DC/OS allows us to scale up and down on demand in order to demonstrate event sourcing with Akka Cluster and Akka Persistence. There are a few areas of interest in the code related to this: health-checks and shutdown.

The health-check should indicate when a node is not a member of the cluster (this has yet to be implemented). If a member can't join the cluster, or for any reason is not a member of the cluster, then the application should be restarted.

The shutdown should allow for a graceful handoff. In the example, we catch SIGTERM and stop Play and Akka from running their usual shutdown routine. Instead, we invoke, and wait for, the graceful cluster handoff introduced in Akka 2.5 to complete. We're actually using incompatible versions of Play and Akka to achieve this, which makes it all the more important that we separate the cart context into its own service shortly.
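With Akka 2.5's CoordinatedShutdown, you can also ask Akka to exit the JVM once all shutdown phases, including cluster exit and sharding handoff, have completed. A minimal configuration sketch (a real Akka 2.5 setting, shown here out of context from the project's own config):

```
akka {
  coordinated-shutdown {
    # Run the full phased shutdown sequence and then exit the JVM, so the
    # orchestrator sees a clean process exit after handoff completes.
    exit-jvm = on
  }
}
```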

You’ll see in the root of the application a file called marathon.example.json which outlines the configuration for integration with DC/OS including the healthcheck.
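For orientation, the health check portion of a Marathon app definition generally takes a shape like this (an illustrative fragment using standard Marathon fields, not a copy of the project's marathon.example.json):

```
"healthChecks": [
  {
    "protocol": "HTTP",
    "path": "/health",
    "portIndex": 0,
    "gracePeriodSeconds": 300,
    "intervalSeconds": 30,
    "maxConsecutiveFailures": 3
  }
]
```

Marathon restarts the task after the configured number of consecutive failures, which pairs well with a health endpoint that fails whenever the node is not a cluster member.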

Storing configuration in the environment

You’ll find the application.conf file adheres to the Twelve-Factor App’s third tenet by allowing us to store configuration in the environment.

We include both a sane local default value and an optional value from the environment for when the application is deployed into DC/OS. Here is an example showing the ConstructR ZooKeeper configuration pointing to the Mesos master's ZooKeeper instance. If the environment variable is missing, the default value is used; if it exists, it overrides the default.

constructr {
  coordination.nodes = ["localhost:2181"]
  coordination.nodes = [${?MESOS_ZK}]
}

Next Steps

We’ll continue to grow the application as we find different needs in talks and demonstrations that we give. If you’d like to contribute or have ideas for the project, please feel free to create issues against the project on GitHub.