I am trying to run an experiment to see how the client latency (more specifically, the software latency) changes with the number of cameras. We are running this test because I was told that each camera must reference all the other cameras in order to create and export the mesh data, so increasing the camera count should increase latency. I am using the WinFormTestApp sample client that ships with the NatNet SDK. In Motive I created a rigid body (from the three markers of the calibration wand) to track; it sits stationary in a corner of the room, and I stream its data locally to the client app. I expected the latency to decrease as the camera count decreases, but as the attached pictures show, this is not the case. The graphs show the "Total Latency" column of the client app, and between 4, 5, and 6 cameras (all with a good view of the rigid body), the total latency stays the same.
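For context, here is my rough understanding of how those latency figures fall out of the per-frame timestamps the server reports. The field names and timestamp values below are my assumptions from reading the SDK material, not the authoritative API, so treat this as a sketch:

```python
# Hypothetical per-frame timestamps, all in seconds on the server clock.
# These names are assumptions modeled on the frame data I've seen described
# in the NatNet SDK; the numeric values are made up for illustration.
mid_exposure = 10.000    # cameras' mid-exposure time for this frame
data_received = 10.004   # all camera data received by the Motive host
transmitted = 10.006     # frame handed to the network by Motive

# Software latency: time Motive spends processing the frame.
software_latency = transmitted - data_received
# System latency: camera exposure to network transmission.
system_latency = transmitted - mid_exposure

print(f"software latency: {software_latency * 1000:.1f} ms")
print(f"system latency:   {system_latency * 1000:.1f} ms")
```

If my reading is right, adding cameras would mostly affect the processing span (data received to transmitted), which is what I was hoping to see change in the experiment.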
I was also hoping someone could shed some light on what "Interframe Time" refers to in the client app. It is always larger than the total latency, but when I looked at the source code it seemed to be the camera-to-Motive time for a frame, and I don't see how that could be larger than the total latency.
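My current guess, and I'd welcome a correction: if interframe time is just the gap between consecutive frame timestamps, it should sit near 1/frame-rate, which would make it larger than the latency whenever the whole pipeline finishes within one frame period. A toy sketch of that reasoning (all numbers made up, assuming a hypothetical 100 Hz capture rate):

```python
# Toy timestamps (seconds) for consecutive frames at an assumed 100 Hz.
frame_timestamps = [10.000, 10.010, 10.020, 10.030]

# Interframe time: difference between consecutive frame timestamps.
interframe_times = [b - a for a, b in zip(frame_timestamps, frame_timestamps[1:])]
avg_interframe = sum(interframe_times) / len(interframe_times)

# A made-up total latency that fits within one frame period.
total_latency = 0.006  # 6 ms camera-to-client

print(f"interframe time: {avg_interframe * 1000:.1f} ms")  # ~10 ms at 100 Hz
print(f"total latency:   {total_latency * 1000:.1f} ms")   # smaller, as I observe
```

Under that interpretation, interframe time exceeding total latency would be expected rather than surprising, but I'd like to confirm what the client app actually measures.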
FYI, I tried to use the TimingClient.exe that also ships with the SDK, but it would not run for me.
Thanks for any information!
NatNet, VRPN, TrackD, and Plugins
Sorry, for some reason it's having a hard time posting the logs of the other runs.
- 6cam.png (34.99 KiB)
- 5cam.png (35.48 KiB)
- 3cam.png (41.24 KiB)