
SMACK stack on DC/OS – how it’s all woven together

Posted by Anirvan Chakraborty on Tue, Apr 19, 2016

In today's day and age, for a business to stay competitive, it's no longer enough to just design a good-looking and functional product. Consumers worldwide want intelligent products that can create, access and analyze data, and then feed the results back to them in order to optimize the user experience. Simply put, the Internet of Things (IoT) combined with Big Data is the future of technology, making our lives more efficient.

Many architecture patterns have emerged in the recent past that allow developers and data scientists to build applications on real-time data pipelines for big data and IoT. Batch, streaming, Lambda and Kappa have been the most popular ones amongst them. They all need a scalable data processing platform as the base. In late 2014, a group of interoperable, open-source components were stacked together to address that very need and the SMACK stack was born.

The SMACK stack consists of Spark, Mesos, Akka, Cassandra, and Kafka. The features of the stack can be summarized as:

  • concise toolbox that can deal with a wide variety of data processing scenarios
  • composed of battle-tested and widely used software components with large open-source communities behind them
  • easy scalability and replication of data while preserving low latencies
  • single managed cluster platform that can handle heterogeneous loads and any kind of application

In this blog we are going to explore how easy it is to run an application based on the SMACK stack on DC/OS. We will focus on how the different components of the stack are woven together to support such an application.

High level architecture diagram

Following is a high-level architecture diagram of a typical application built on the SMACK stack. The application detailed below deals with ingesting a high volume of data and then analyzing it. It simulates an application that ingests energy usage data from smart meters in typical homes for further introspection. The energy usage data is then used to plot the energy consumption profile of a geographical area on a map. This can then be used by potential tenants to estimate their energy usage if they are looking to move to a new area.

[Figure: High-level architecture diagram]

How to weave the pieces

Prerequisites

The DC/OS services

Now let’s ensure that all required DC/OS services are set up and ready to be used by the application stack. Although some of the services enumerated below are core components of DC/OS, we list them here as they are also core dependencies of our application stack.

  • Marathon

The Marathon DC/OS service is a core component of DC/OS, so it will be available as part of the prerequisites. A health check is always a good starting point before using a service:

dcos marathon about | jq ".elected == true"
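If Marathon is healthy and a leader has been elected, the command simply prints:

true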
  • Marathon LB - external

The default Marathon LB framework settings will provide an external Marathon LB framework instance.

Comprehensive documentation on the specifics of Marathon LB as a service/framework in DC/OS is available in the official DC/OS documentation.

Quick install:
dcos package install marathon-lb

The stats for HAProxy, on which Marathon LB is based, are accessible at http://p1.dcos:9090/haproxy?stats. The internally resolvable Mesos DNS record that the Marathon LB service installation adds is marathon-lb.marathon.mesos (Mesos DNS is a core component of DC/OS). The Marathon LB service is installed on any available DC/OS public agent; since p1 is our single public agent, the Marathon LB service will also be available at http://p1.dcos.

  • Marathon LB - internal

An additional, internal Marathon LB is worth deploying in order to avoid routing traffic through the Internet when that is not necessary, which is not recommended anyway.

Quick install:
cat <<EOF > marathon-internal-lb-options.json
{ "marathon-lb":{ "name":"marathon-lb-internal", "haproxy-group":"internal", "bind-http-https":false, "role":"" } }
EOF
dcos package install --options=marathon-internal-lb-options.json marathon-lb

The internal Mesos DNS for this additional service is: marathon-lb-internal.marathon.mesos.
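To verify that both load balancer instances are up, list the running Marathon apps; both marathon-lb and marathon-lb-internal should show up as healthy:

dcos marathon app list | grep marathon-lb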

  • Kafka

One of the services available in the DC/OS repositories is the Kafka service, and that is what we are going to use instead of managing and maintaining our own Kafka cluster.

Quick install:
dcos package install --yes kafka

A quick validation of the service availability is recommended; all you need to do is run: dcos package list kafka; dcos kafka help. The Kafka service is installed as a long-running, highly available and resilient job on Marathon. The installation of the Kafka service may take a few minutes until it becomes available, and the progress of the installation can be checked on Marathon.

The Kafka service comes with three broker instances out of the box. You can customize the Kafka service and create your own set of brokers tuned to match the expected load, or you can just go ahead and use Kafka as it is. The application stack will manage the creation of the topics from which its components will be consuming messages.
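If you do want to create a topic by hand, the Kafka service CLI can do it. The exact subcommand and flag names below are assumptions that vary between versions of the Kafka framework (and the topic name is purely illustrative), so check dcos kafka help first:

dcos kafka topic create smart-meter-readings --partitions 3 --replication 3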

  • Cassandra

As part of our infrastructure stack, we require the Cassandra service running on DC/OS as well. Cassandra is now available as a service in the official DC/OS repositories.

Quick install:

Installing the Cassandra service using the DC/OS CLI is very simple and can be done by running:

$ dcos package install cassandra
Installing Marathon app for package [cassandra] version [1.0.0-2.2.5]
Installing CLI subcommand for package [cassandra] version [1.0.0-2.2.5]
New command available: dcos cassandra
DC/OS Cassandra Service is being installed.

It will take a few minutes for Cassandra to be up and running on our DC/OS cluster. By default, on installation, the Cassandra service has 3 nodes, out of which 2 are seed nodes.

SSH into the Cassandra Cluster

Now that we have a Cassandra cluster up and running, it's time to connect to it. In order to do that, let's retrieve the connection information using the following command:

$ dcos cassandra connection
{
    "nodes": [
        "192.168.65.111:9042",
        "192.168.65.121:9042",
        "192.168.65.131:9042"
    ]
}

Now, let's SSH into the newly created DC/OS cluster, so that we can connect to the Cassandra cluster.

$ dcos node ssh --master-proxy --leader

At this point we are inside our DC/OS cluster and can connect to the Cassandra cluster directly. Let's connect to the cluster using the cqlsh client. For that we select one of the IPs of the Cassandra nodes we retrieved earlier and run:

$ docker run -ti cassandra:2.2.5 cqlsh 192.168.65.111
cqlsh>
Create keyspace

Now that we are connected to our Cassandra cluster, we can create a keyspace called iot_demo by running:

cqlsh> CREATE KEYSPACE iot_demo WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 };

With the keyspace prepared, we are now ready to add some tables and some dummy data to our keyspace in order to start building our application with Cassandra.
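As a quick illustration (the table name and columns below are hypothetical, not the schema the demo application actually uses), a table and a dummy row can be created through the same cqlsh container:

docker run -ti cassandra:2.2.5 cqlsh 192.168.65.111 -e "
  CREATE TABLE IF NOT EXISTS iot_demo.energy_usage (
    meter_id text,
    reading_time timestamp,
    usage_kwh double,
    PRIMARY KEY (meter_id, reading_time));
  INSERT INTO iot_demo.energy_usage (meter_id, reading_time, usage_kwh)
  VALUES ('meter-001', '2016-04-19 10:00:00+0000', 1.42);"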

Service discovery

DC/OS CLI based discovery

We make use of DC/OS tools and features to achieve the level of service discovery that the application stack requires. Services on DC/OS can be discovered inside the Docker containers at the Entrypoint level, by making DC/OS CLI calls and exporting environment variables that the application stack expects, overriding the defaults used on a typical DEV environment.

Akka discovery

In order to discover the running Akka nodes from within the Docker container Entrypoint docker-entrypoint.sh, a DC/OS CLI call like the following would typically be required:

export AKKA_SEED_NODES=`dcos marathon app show <app-id> | jq -r ".tasks[].host" | tr '\n' ','  | sed 's/,$//g'`

The application configuration uses the environment variable set this way to discover the existing seeds of the Akka cluster, or to bootstrap the Akka cluster in case the current Akka node is the first one to come up.

There is one edge case to consider: the Akka node in the currently starting container is actually the first seed node to start. In this particular case, the outcome of the discovery step is that the container discovers its own IP, which is the desired outcome. This edge case requires no further special handling, as it is handled by default.

Kafka discovery

The Kafka broker endpoints have to be discovered within the Docker container Entrypoint as well, and for that you will need a DC/OS CLI call similar to this one:

export KAFKA_BROKERS_LIST=`dcos kafka connection --dns | jq -r ".names[]" | tr '\n' ','  | sed 's/,$//g'`
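Putting the two discovery calls together, the tail end of a hypothetical docker-entrypoint.sh might look like the sketch below; the <app-id> placeholder and the /app/service.jar path are assumptions, not taken from the actual Docker images:

#!/usr/bin/env bash
set -e

# Discover the other nodes of this Marathon app (the Akka cluster seeds)
export AKKA_SEED_NODES=$(dcos marathon app show <app-id> | jq -r ".tasks[].host" | tr '\n' ',' | sed 's/,$//g')

# Discover the Kafka broker endpoints exposed by the DC/OS Kafka service
export KAFKA_BROKERS_LIST=$(dcos kafka connection --dns | jq -r ".names[]" | tr '\n' ',' | sed 's/,$//g')

# Hand over to the application; its configuration picks up the two
# environment variables and overrides the DEV defaults
exec java -jar /app/service.jar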
Marathon LB based discovery

The complementary way of achieving service discovery at the required level is to make use of the Marathon LB services, both internal and external.

Application stack deployment

Okay, we now have the services provided by DC/OS and a set of service discovery mechanisms in place. Let’s move on to deploying the application and making use of all the services and methods described.

We are going to use long-running Marathon jobs to ensure our application stack keeps running. The application stack components will run as Docker containers based on the Marathon jobs we are going to define. The eventual consistency of the application stack is ensured through good practices and a set of features that Marathon, the DC/OS core service, offers.
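Two of the Marathon features we lean on for this are instance counts and health checks. A fragment like the following could be added to any of the job definitions below; the /health path and the timings are placeholders, not values from the actual demo:

{
  "instances": 3,
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/health",
      "portIndex": 0,
      "gracePeriodSeconds": 120,
      "intervalSeconds": 30,
      "maxConsecutiveFailures": 3
    }
  ]
}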

The stack consists of two microservices, the meter microservice and the simulator microservice, plus a simple web client for visual demonstration purposes.

Meter microservice

The meter microservice is built as an Akka cluster and also exposes a REST endpoint that is used by the simulator microservice and by the web client.

The Marathon job JSON for the meter microservice will look like this:

{
  "id": "meter",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cakesolutions/iot-demo-meter"
    }
  },
  "labels":{
    "HAPROXY_GROUP":"external,internal"
  }
…
}

This Marathon job definition includes a special label with the key HAPROXY_GROUP. This label tells marathon-lb whether or not to expose the application. The "external" group is the default group that the Marathon LB service is configured with at startup.

The “internal” group will ensure that the meter service is registered as a backend on the internal Marathon LB, thus exposing the REST endpoint internally on an internally resolvable DNS name: marathon-lb-internal.marathon.mesos:19002. The simulator microservice described below will use this privately resolvable DNS name to connect to the meter microservice REST endpoint within the boundaries of the DC/OS datacenter.

The “external” group will ensure the following internally resolvable DNS name: marathon-lb.marathon.mesos:19002, which is publicly resolvable as p1.dcos:19002. The web client described below will use the publicly resolvable DNS name, as the web client is a simple client running in the browser, hence outside of the DC/OS datacenter and unable to resolve the internal DNS name.

We now add the Marathon job that we have prepared for the meter microservice:

dcos marathon app add meter.json
The Marathon jobs can be redeployed using either the Marathon API or the DC/OS CLI. Traditionally we have been using the Marathon API, either directly or via the Python driver.
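Once the deployment has finished, a quick smoke test against the externally exposed service port confirms that Marathon LB is routing traffic to the meter microservice (the path here is hypothetical; use whichever route the meter API actually exposes):

curl -i http://p1.dcos:19002/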
Simulator service

The Marathon job JSON for the simulator microservice will look like this:

{
  "id": "simulator",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cakesolutions/iot-demo-simulator"
    }
  },
  "env": {
    "METER_HOST": "marathon-lb-internal.marathon.mesos",
    "METER_PORT": "19002"
  },
  "labels":{
    "HAPROXY_GROUP":"external"
  }
}

Similar to the meter microservice, we now add the Marathon job that we have prepared for the simulator microservice, so that we have a long-running Marathon job running the simulator.

dcos marathon app add simulator.json

The simulator needs to know the endpoint of the meter microservice API. For that we make use of the DC/OS services and of the environment variables in the Marathon job.

The “external” group will ensure the following internally resolvable DNS name: marathon-lb.marathon.mesos:19001, which is publicly resolvable as p1.dcos:19001 and exposes the REST endpoint that the web client connects to. An “internal” group is not used as it is not required here.

Web client

The Marathon job JSON for the web client will look like this:

{
  "id": "web",
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "cakesolutions/iot-demo-web",
    }
  },
  "env": {
    "METER_HOST": "p1.dcos",
    "METER_PORT": "19002",
    "METER_HOST": "p1.dcos",
    "METER_PORT": "19001"
  },
  "labels":{
    "HAPROXY_GROUP":"external"
  }
}

Now we add the corresponding Marathon job:

dcos marathon app add web.json

The web client needs to contact both the meter microservice API and the simulator microservice API from the client side, within the browser. Because of that, the microservice endpoints must be public.

The following environment variables will be available to the web application so that it can function on the client side, within the browser. p1.dcos is the DNS name of the public agent node within our DC/OS datacenter, on which the Marathon LB service runs by default, a specific Mesos role being assigned to the public agents.

METER_HOST=p1.dcos
METER_PORT=19002
SIMULATOR_HOST=p1.dcos
SIMULATOR_PORT=19001
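With all three Marathon jobs submitted and the environment wired up, it is worth confirming that meter, simulator and web are all deployed and healthy before moving on:

dcos marathon app list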

Summary

In this post we looked at a few components of the SMACK stack running on DC/OS and how easily they can be installed and configured to run reactive applications designed for the SMACK stack. Hopefully we have been able to show you how easy it is to have complex components like Kafka, Cassandra, etc. running on DC/OS as services. From our experience of using DC/OS and its various services, we can safely conclude that DC/OS:

  • is the easiest way to run containers in production
  • is the easiest way of getting the most out of your infrastructure
  • makes it easy to run different computing tasks on the same hardware
  • provides the easiest way to scale your services up or down

 

In summary, DC/OS is a complete solution that provides everything that you need out of the box. And because it's based on Mesos, you know it is battle-hardened.
