In this blog series I will discuss using docker-compose to manage several microservices and their dependencies, creating a reproducible environment that can be used to spawn any number of services with a single command.
This first part is all about Dockerizing your services from zero; in part 2 we'll then jump into defining the environment using Docker Compose.
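To give a taste of what Dockerizing a service looks like, here is a minimal sketch of a Dockerfile for a JVM-based service. The base image, jar path and port are hypothetical placeholders for illustration, not the actual services we'll build later in the series:

```dockerfile
# Minimal sketch of a Dockerfile for a JVM-based microservice.
# Image name, jar path and port are hypothetical placeholders.
FROM openjdk:8-jre-alpine

# Copy the assembled fat jar into the image
COPY target/scala-2.11/my-service-assembly.jar /app/service.jar

# The port our hypothetical REST endpoint listens on
EXPOSE 8080

# Run the service
ENTRYPOINT ["java", "-jar", "/app/service.jar"]
```

Building and running such an image is then a matter of `docker build -t my-service .` followed by `docker run -p 8080:8080 my-service`; docker-compose will later take over exactly these build-and-run steps for all services at once.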
Lift uses Akka streaming workflows to define a flexible and generic exercise classification pipeline. The pipeline can modularly include any machine learning classifier, and it monitors the real-time streams of classification results using linear dynamic logic.
This post gives a high-level overview of the classification pipeline; future posts will introduce the implementation details.
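To make the pipeline shape concrete, here is a minimal Akka Streams sketch in Scala. The `Sample` type, the `classifier` helper and the stand-in threshold "model" are all hypothetical illustrations of the plug-in idea, not the actual Lift implementation (which, including the linear dynamic logic monitor, is the subject of the follow-up posts):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

// A minimal, hypothetical sketch of the pipeline shape; all names here
// are illustrative, not the Lift implementation.
object ClassificationPipelineSketch extends App {
  implicit val system = ActorSystem("pipeline-sketch")
  implicit val materializer = ActorMaterializer()

  // Raw sensor samples, stubbed here as vectors of doubles
  type Sample = Vector[Double]

  // A pluggable classifier is just a Flow from samples to labelled
  // results, so any machine learning model can sit behind this interface
  def classifier(name: String)(classify: Sample => String) =
    Flow[Sample].map(sample => name -> classify(sample))

  // A trivial stand-in classifier: threshold on the first component
  val thresholdClassifier = classifier("threshold") { sample =>
    if (sample.headOption.exists(_ > 0.5)) "exercise" else "rest"
  }

  // Stubbed sensor source and a printing sink; the real pipeline would
  // monitor the result stream against linear dynamic logic rules here
  Source(List(Vector(0.9, 0.1), Vector(0.2, 0.3)))
    .via(thresholdClassifier)
    .runWith(Sink.foreach(println))
}
```

Because the classifier is just a `Flow`, swapping models means swapping one stage of the graph while the source, monitoring and sink stages stay untouched.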
Here we present a flexible and generic framework within which distributed applications, built upon a microservice architecture, may be implemented and deployed.
We achieve this by deploying dockerised microservices to a cluster of CoreOS machines (complete with etcd for service discovery and fleet for controlling services and specifying affinity rules).
Microservices are implemented using Akka actors that support clustering, Cassandra persistence and data sharding. Interaction with the microservices is mediated by a Vulcand load balancer that connects (via circuit breakers) to microservice REST endpoints in round-robin fashion.
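To ground the deployment story, here is a sketch of a templated fleet unit such a setup might use. The image name, ports and service name are assumptions for illustration; the etcd key layout follows Vulcand's documented `/vulcand/backends/...` convention, and the `[X-Fleet]` section shows the kind of affinity rule fleet supports:

```ini
# Hypothetical templated fleet unit (e.g. classifier@.service); image,
# ports and names are illustrative placeholders, not the actual units.
[Unit]
Description=Exercise classification microservice
After=docker.service
Requires=docker.service

[Service]
# Clean up any stale container, pull the image and run it
ExecStartPre=-/usr/bin/docker kill classifier
ExecStartPre=-/usr/bin/docker rm classifier
ExecStartPre=/usr/bin/docker pull example/classifier:latest
ExecStart=/usr/bin/docker run --name classifier -p 8080:8080 example/classifier:latest
# Register this instance's endpoint in etcd so Vulcand can discover it
# and round-robin requests across instances (%i = instance, %H = host)
ExecStartPost=/usr/bin/etcdctl set /vulcand/backends/classifier/servers/%i '{"URL": "http://%H:8080"}'
ExecStop=/usr/bin/docker stop classifier

[X-Fleet]
# Affinity rule: never schedule two classifier instances on one machine
Conflicts=classifier@*.service
```

Starting `classifier@1.service` and `classifier@2.service` with `fleetctl start` would then place the two instances on different CoreOS machines and register both endpoints for Vulcand to balance across.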