Yes, we quickly tested your data on our own solved but non-optimized facerobot model. The overall quality of the data is pretty good and stable!
I don't know the reason for your problem; we will run a test tomorrow with your model.
Anyway, after you have solved that problem, you will enter the next stage, which I call the "Uncanny Interpretation".

Because of the mismatch between the proportions of your actor and the target model, you will have to do (more or less) a fine-tuning pass, where you interpret the mocap data from the point of view of your model.
I think the position and movement level of the markers should be adjusted to the model: a retargeting of the data, similar to a full-body mocap production, but far more demanding.
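Just to illustrate the idea (this is my own simplified sketch, not anything from the facerobot pipeline): the retargeting can be thought of as expressing each marker's motion as an offset from a neutral pose, then rescaling that offset by the ratio of model-to-actor face proportions before applying it to the model's neutral marker position. The function and the example values below are entirely hypothetical.

```python
def retarget_marker(actor_neutral, actor_frame, model_neutral, scale):
    """Map one marker's motion from actor space to model space.

    actor_neutral / actor_frame / model_neutral: (x, y, z) tuples.
    scale: per-axis (sx, sy, sz) ratio of model size to actor size.
    """
    return tuple(
        mn + (af - an) * s
        for mn, af, an, s in zip(model_neutral, actor_frame, actor_neutral, scale)
    )

# Hypothetical example: the actor's mouth-corner marker moves 2 units in Y;
# the model's face is 20% smaller, so the offset is scaled by 0.8.
actor_neutral = (5.0, 0.0, 10.0)
actor_frame   = (5.0, 2.0, 10.0)
model_neutral = (4.0, 0.0, 8.0)
scaled = retarget_marker(actor_neutral, actor_frame, model_neutral, (0.8, 0.8, 0.8))
print(scaled)  # -> (4.0, 1.6, 8.0)
```

In practice a uniform scale will not be enough; that is exactly where the per-region interpretation pass comes in.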
Even after that, the lips, teeth, and tongue will need some extra data love, not to mention the eyes! This time on the model side...
I don't want to paint a wrong picture: we are at the same level, still searching for the best pipeline inside facerobot and 3ds Max.
Thanks for your Project Palitoy videos, they were a good reference in the past!
regards