
Distributed lambda architecture

Posted by Jan Machacek on Mon, Apr 13, 2015

As you probably know by now, Muvr performs near real-time exercise classification. It does so by fusing data from multiple (wearable) sensors, then sending the raw data to the server in a simple binary encoding. The server decodes the data, reconstructs the sensors' data and locations, and feeds column slices to the exercise model.

The model performs feature extraction; the results are fed to queries, and the queries' results to the decider, ultimately yielding an Option[ClassifiedExercise]. In this post, I will explain why our original approach failed outside lab conditions, enumerate the problems, and outline the solutions.
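To make the pipeline concrete, here is a minimal C++14 sketch of the shape of the feature extraction → queries → decider chain. All names and the toy scoring are illustrative assumptions, not the actual Muvr API; C++14 has no std::optional, so a (bool, value) pair stands in for Option[ClassifiedExercise].

```cpp
#include <algorithm>
#include <numeric>
#include <string>
#include <utility>
#include <vector>

// Hypothetical result type, standing in for ClassifiedExercise
struct ClassifiedExercise {
    std::string name;
    double confidence;
};

using Features = std::vector<double>;

// Feature extraction over column slices of the fused sensor data
Features extractFeatures(const std::vector<std::vector<double>>& columns) {
    Features features;
    for (const auto& column : columns) {
        // per-axis mean as a toy feature; the real features are richer
        features.push_back(
            std::accumulate(column.begin(), column.end(), 0.0) / column.size());
    }
    return features;
}

// The decider picks the best-scoring query result, or nothing below a
// threshold; the pair's bool plays the role of Option[...]'s Some/None
std::pair<bool, ClassifiedExercise> decide(
        const std::vector<ClassifiedExercise>& queryResults,
        double threshold = 0.5) {
    auto best = std::max_element(queryResults.begin(), queryResults.end(),
        [](const ClassifiedExercise& a, const ClassifiedExercise& b) {
            return a.confidence < b.confidence;
        });
    if (best != queryResults.end() && best->confidence >= threshold) {
        return {true, *best};
    }
    return {false, {}};
}
```

The threshold is what turns the decider's answer into an honest "don't know" (None) rather than a forced guess.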

The original Muvr implementation is a canonical example of the lambda architecture. It includes the speed layer, which performs the as-it-happens classification, and the batch layer, which can provide deeper insights into the data. Both layers make their results accessible through the query layer.



Overlaying the lambda architecture diagrams on Muvr's architecture clearly shows that the Akka cluster implements the speed layer, and the Spark jobs implement the batch layer. (The views and Spray services provide the query layer.)



It turns out that this approach is unusable outside lab conditions: its network quality requirements are far too strict. The sensor data packet contains 1.24 seconds' worth of data, so to maintain as-it-happens processing, the network needs to complete the transfer of each packet within 1.24 seconds. Combined with the further latency of the server's processing, this makes it very difficult to provide as-it-happens feedback to the users. Similarly, because of the overall latency, the users struggle to provide meaningful feedback on the machine classification. In addition to this fundamental problem, the system wastefully transmits a continuous stream of sensor data, even when the user is not exercising. This results in higher data transfer bills, and it keeps the mobile phones' radios on, which wastes battery power.
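The back-of-the-envelope budget is easy to state in code. Only the 1.24-second packet duration comes from the text; the sample rate and server processing time below are hypothetical figures for illustration.

```cpp
// Latency budget for as-it-happens processing: every packet period,
// transfer plus server-side processing must fit inside the packet's own
// duration, or the stream falls behind. All constants except the resulting
// 1.24 s packet duration are illustrative assumptions.
double transferBudgetSeconds(double samplesPerPacket,
                             double sampleRateHz,
                             double serverProcessingSeconds) {
    const double packetSeconds = samplesPerPacket / sampleRateHz;
    return packetSeconds - serverProcessingSeconds;
}
```

With, say, 124 samples at 100 Hz (giving the 1.24 s packet) and 0.3 s of server-side decoding and classification, barely 0.94 s remains for the mobile network round-trip, every 1.24 seconds, indefinitely.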

Architectural changes

The ML pipeline is, in principle, right. The only problem is the location of the computation: it is necessary to distribute the speed layer across different classes of computers, running at different network locations. This is a rather pompous way of saying that the speed layer has to be split between the mobile and the server. The code on the mobile has to perform the as-it-happens classification and provide immediate feedback to the user; this immediate feedback also allows the user to correct failed classifications.





In this diagram, you can see that the mobile performs much of the processing that the server originally did; in fact, it is possible to perform the entire classification on the mobile. A connection to the server is, of course, still desirable, but the strict network quality requirements are gone. It is perfectly fine for a failed classification request, which includes the recorded sensor data, the incorrect result, and the correct result, to time out after tens of seconds. If the mobile has an internet connection to the server, and if the user agrees, it also sends the correct classifications. This way, the server receives both positive and negative samples.

The server persists those samples in the journal, where they are picked up by the batch layer. The batch layer is a Spark job that trains the classifiers (a simple SVM at the moment, but I am moving towards a deep neural network implementation; more on that in the next few blog posts!). After updating the model, the Spark job replays the old positive and negative samples from the journal to verify that the new model is indeed an improvement on the old one. If so, the new model is submitted to the user profile in the Akka cluster, which then pushes a notification to the mobile application, telling it to request the updated model.

Muvr keeps one last component of the speed layer running on the server: programme and compliance. The programme is the suggested structure of the exercise session (perhaps from a personal trainer); upon completing a block of exercises, the server can push the next exercise in the programme to the mobile. Much more interesting is the compliance code, which is useful for specialised exercise programmes, typically in physiotherapy. To achieve the best results (think speed of recovery, range of motion, and so on) after injury, it is important that patients follow the programme with much greater precision, not just doing whatever they feel like in their sessions.
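The feedback loop above boils down to two pieces: the sample the mobile queues for the server, and the batch layer's acceptance check before a retrained model is pushed out. A C++14 sketch, with illustrative field and function names (not the actual Muvr wire format or Spark job):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Hypothetical shape of the feedback sample the mobile queues for the
// server when the user corrects a classification
struct ClassificationFeedback {
    std::vector<uint8_t> recordedSensorData;  // raw sensor bytes as captured
    std::string incorrectResult;              // what the mobile classified
    std::string correctResult;                // what the user says it was
};

// Sketch of the batch layer's acceptance test: replay the journalled
// samples through the retrained model, and only accept it if it beats
// the old model's accuracy
bool acceptNewModel(
        const std::vector<ClassificationFeedback>& journal,
        double oldAccuracy,
        const std::function<std::string(const std::vector<uint8_t>&)>& newModel) {
    if (journal.empty()) return false;  // nothing to verify against
    std::size_t correct = 0;
    for (const auto& sample : journal) {
        if (newModel(sample.recordedSensorData) == sample.correctResult) {
            ++correct;
        }
    }
    const double newAccuracy =
        static_cast<double>(correct) / journal.size();
    return newAccuracy > oldAccuracy;  // push only strict improvements
}
```

Only when acceptNewModel-style verification passes does the Akka cluster notify the mobile to fetch the updated model, so a bad training run can never degrade a user's classifier.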

I call this approach distributed lambda architecture with feedback. It solves the issues I outlined above: it significantly speeds up the as-it-happens processing, providing nearly immediate feedback to the users. This allows the users to correct the classification, which in turn allows the server to update the (per-user!) exercise model.

Complicated mobile code

The only downside of this approach is the significantly more complex code that runs on the mobiles. Because I still aim to release Android as well as iOS apps, I moved the entire speed layer to C++, building it natively for each platform. Fortunately, with Xcode 6.3, I finally have support for C++14. This, together with OpenCV, makes the classification work relatively painless to implement, even on iOS.



Summary

Over the next few blog posts, I will explore in detail the updated iOS code (focussing on the interop between the platform-independent C++ classification layer and the Swift/Objective-C++ codebase); the approach to experiments in MATLAB; the updates to the Akka cluster; and finally the new & shiny Spark code.

At the same time, I will be making progress towards using deep neural networks to perform the classification. It was a good choice to start with simple SVMs; now that we have users giving us positive and negative samples, I am beginning to have enough data to make the DNN approach possible.
