I have been using Motive:Body with 12 Prime13 cameras for two weeks now. I am setting it up as a means of acquiring motion data for research on movement/gestures in conjunction with speech. Several questions have come up, and I would like to pose them all together here, if that is OK with you.
- I understand there is no way to create a user-defined skeleton. I found "Rigid Body Assisted Labelling" (RBAL) promising, but the documentation is thin. Are the undefined markers near a rigid body labeled distinctively in some way when this labeler "acquires" them? With RBAL activated, I see no difference between markers in the vicinity of a rigid body and other markers. Am I doing something wrong? (I tried finger tracking just as described in the wiki.)
- Is there a way to tweak the tolerance for rigid body markers, i.e. to assume a slightly "less rigid" body?
- The Reconstruction Settings in the Reconstruction View essentially define trajectorization as performed by clicking on a Take and selecting "Trajectorize", is that right? I assume the 2D data from the cameras is stored once and never touched, and that "Trajectorize" always operates on the original 2D data, albeit with different settings depending on the Reconstruction View, correct?
- Does the use of a skeleton affect the position (not the labelling) of markers in the 3D data during trajectorization? Or is the segment/limb data calculated "on top" of the marker data, so that the marker data is not altered?
- Is there any known limit on the duration of a single take, other than available hard disk space?
- When we use one Prime13 camera as reference camera, should this one be included in the calibration process or not?
- Is there some way to transfer a skeleton defined in one take to another take of the same subject?
- What exactly do the Move/Scale/Rotate 3D operations modify, and when?
- Is it possible to record analog data fed into an eSync 2 device?
- Using the Batch Processor, is it possible to extract AVI files from the takes, as I can in the user interface? If not, is there some other way to extract 2D video data from the .tak files? The problem is that we want to further process the movie files, and exporting the AVIs one by one is tedious.
- I'm having trouble running the example C# batch script, as well as my Python interpretation of it (see below). The error message is the same with both scripts: "Exception executing script: Object reference not set to an instance of an object". Also, you seem to have a typo in your wiki (ITakeProcessingScript vs. ItakeProcessingScript).
Code:
# Import the sys and clr modules.
import sys
import clr
# Add a reference to the NMotive assembly.
clr.AddReference("NMotive")
# Import everything from System and NMotive.
from System import *
from NMotive import *

# Define the ProcessTake function called by the Batch Processor.
def ProcessTake(take, progress):
    exporter = C3DExporter()
    # Export next to the original take file, overwriting any existing file,
    # and return the Result so the Batch Processor can report success/failure.
    return exporter.Export(take, take.FileName, True)