Jan Machacek is the CTO at Cake Solutions. He is a highly experienced Java enterprise architect, consultant and developer with very strong technical skills, and the author of Pro Spring 2.5, Pro Spring and other books, blogs and articles. He regularly speaks at conferences and developer events in the UK and abroad, and he is the editor of the Open Source Journal. Jan regularly works on open source projects; he is the author of Specs2 Spring, Scalad, Spock Spring Integration and Spring Workflow Extension. Jan's technical interests and expertise include lightweight JVM-based applications in Scala, Groovy and Java, built with the Spring Framework, Grails, Akka and Play 2, with asynchronous, resilient and scalable messaging, deployed on cloud infrastructures. Jan has demonstrated his technical and agile management skills on numerous projects in the public and private sectors, working with in-house teams as well as delivering projects at Cake Solutions. He has led teams through the perils of agile software delivery, bringing control and value to the business and the joy of programming back to the technical teams. Jan shares his agile leadership experience in publications for the NCC, at public events and at national conferences. In his spare time, Jan likes to explore new programming languages and experiment with microcontrollers. Jan also competes in time trials and road races as a member of the Manchester Wheelers Cycling Club.
One of the sensors in the Muvr project is the magnificent Pebble smartwatch. We use it to record the accelerometer values, pack them into a naïvely efficient data structure, and send them over the Bluetooth connection to the mobile. The mobile then performs further processing, but that's for another blog post. In this post, I will show how we structured the recording and sending functions, and how we tested them.
As you probably know by now, Muvr performs near real-time exercise classification. It does so by fusing data from multiple (wearable) sensors, then sending the raw data to the server in a simple binary encoding. The server decodes the data, reconstructs the sensors' data and locations, and feeds column slices to the exercise model.
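The real server-side decoding is done in Scala, but as a rough sketch of what "column slices" could mean, here is a Python toy: decoded (x, y, z) samples are cut into overlapping windows, each exposing one column of values per axis. The window and step sizes are made-up parameters for illustration.

```python
def column_slices(samples, window, step):
    """Turn a stream of (x, y, z) samples into sliding windows,
    each a dict mapping axis name -> column of values."""
    slices = []
    for start in range(0, len(samples) - window + 1, step):
        chunk = samples[start:start + window]
        xs, ys, zs = zip(*chunk)
        slices.append({"x": list(xs), "y": list(ys), "z": list(zs)})
    return slices
```

Overlapping windows mean an exercise repetition that straddles a window boundary still appears whole in some slice, at the cost of classifying each sample more than once.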
In the next few posts, we will describe the journey of collecting (tagged) data, experimenting with potential classification models, and then finally implementing those models. It was revealing to experience the challenges of implementing a truly reliable, near real-time analysis system in a world of unreliable networks and users who cannot tolerate interruptions.
We ended up performing principal component analysis on a type II DCT of the sensor data we receive; this then formed the basis of the training set for a support vector machine. We did the modelling in R, then exported the libsvm settings and loaded them in Scala, where we perform the classification on the incoming stream of data.
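To make the two transforms named above concrete, here is a toy Python sketch: an unnormalised type II DCT, and the top principal component found by power iteration on the covariance matrix. This stands in for the real R/libsvm modelling; the function names and the power-iteration shortcut (rather than a full eigendecomposition) are illustrative assumptions.

```python
import math

def dct2(x):
    """Unnormalised type II DCT:
    X_k = sum_n x_n * cos(pi/N * (n + 1/2) * k)."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi / n * (i + 0.5) * k)
                for i in range(n))
            for k in range(n)]

def first_principal_component(rows, iters=200):
    """Top principal component of a list of feature vectors, found by
    power iteration on the (unnormalised) covariance matrix."""
    d = len(rows[0])
    mean = [sum(r[j] for r in rows) / len(rows) for j in range(d)]
    centred = [[r[j] - mean[j] for j in range(d)] for r in rows]
    v = [1.0] * d
    for _ in range(iters):
        # w = (X^T X) v, computed as two matrix-vector products
        proj = [sum(c[j] * v[j] for j in range(d)) for c in centred]
        w = [sum(proj[i] * centred[i][j] for i in range(len(centred)))
             for j in range(d)]
        norm = math.sqrt(sum(t * t for t in w)) or 1.0
        v = [t / norm for t in w]
    return v
```

The DCT turns each window of accelerometer values into a frequency-domain feature vector (a constant signal, for instance, lands entirely in the k = 0 coefficient), and the PCA step then reduces those vectors to the few directions that carry most of the variance before they reach the SVM.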