In my experience it’s hard to recognise micro-gestures using the Myo armband, and certainly not with raw data alone. It’s better to rescale the data so it’s more meaningful for training a model. If cables aren’t an issue and you have to perform the gestures in a fixed space without moving your body a lot, I would go with the Leap Motion or something similar; it is way more precise for that kind of thing.
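To give an idea of what I mean by rescaling: here's a minimal sketch that min-max normalises each EMG channel within a window, assuming 8-channel raw Myo EMG samples roughly in [-128, 127] (the window shape and range are my assumptions, not from any particular SDK):

```python
import numpy as np

def rescale_emg(window):
    """Rescale a window of raw EMG samples (rows = samples,
    columns = channels) to [0, 1] per channel, so features fed
    to the model share a common range."""
    window = np.asarray(window, dtype=float)
    lo = window.min(axis=0)
    hi = window.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)  # avoid divide-by-zero
    return (window - lo) / span

# Example: a fake window of 4 samples across 8 EMG channels
raw = np.random.randint(-128, 128, size=(4, 8))
scaled = rescale_emg(raw)
```

You could just as well standardise to zero mean and unit variance; the point is that the model sees values in a consistent range rather than raw ADC counts.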
I just hacked Myo Mapper to hook it up with Wekinator, and gesture recognition works amazingly well when combining IMU data; with EMG data, a bit less so. I combined different machine learning techniques, and I suspect Thalmic uses combined techniques too. My intuition is that they use SVM and DTW, but I’m not an ML expert at all, so don’t take my word for it.
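For anyone unfamiliar with DTW: it compares two sequences that may be performed at different speeds, which is why it suits gestures. Here's a minimal, textbook implementation for 1-D sequences (this is just the standard algorithm, not anything Thalmic actually ships):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences,
    e.g. a pitch stream from the armband's IMU. Lower means the
    sequences are more similar, regardless of tempo."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two versions of the "same" gesture at different speeds
slow = [0, 1, 2, 3, 2, 1, 0]
fast = [0, 2, 3, 1, 0]
print(dtw_distance(slow, fast))
```

A simple classifier would keep one recorded template per gesture and pick the template with the smallest DTW distance to the incoming stream.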
Apart from the video posted above, I’m also working on gesture recognition using EMG. I’ll be publishing something about it soon; meanwhile, have a look at this paper, where I report some techniques for recognising throwing gestures by direct mapping. This is also a great paper showing comparisons of EMG features. Of course, those are not the only features that can be extracted using the Myo.
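As a taste of what "EMG features" means in practice, here's a sketch computing four classic time-domain features for a single channel; these are common choices in the EMG literature (this particular selection is mine, not necessarily the set compared in the paper):

```python
import numpy as np

def emg_features(window):
    """Four classic time-domain EMG features for one channel:
    mean absolute value (MAV), root mean square (RMS),
    waveform length (WL) and zero crossings (ZC, a simple
    sign-change count)."""
    x = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(x))                     # average amplitude
    rms = np.sqrt(np.mean(x ** 2))               # signal energy
    wl = np.sum(np.abs(np.diff(x)))              # cumulative waveform length
    zc = int(np.sum(np.diff(np.sign(x)) != 0))   # sign changes
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}

print(emg_features([1, -2, 3, -1, 0, 2]))
```

You would typically compute these over a sliding window per channel and feed the concatenated feature vector to the classifier instead of the raw samples.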