Using "Tracking Tools" for Robotics Research?

Post Reply
SeanGT
Posts: 1
Joined: Mon May 16, 2011 3:23 pm

Using "Tracking Tools" for Robotics Research?

Post by SeanGT » Mon May 16, 2011 3:31 pm

I represent a research lab at Georgia Tech interested in using optical motion tracking as part of a closed-loop feedback system for controlling multiple small air and ground robots. From the reading I have done on the NaturalPoint website and these forums, it seems like one of the "Tracking Tools" bundles would be best suited for our purposes. Before deciding on a specific package, I would like to learn more about the capabilities and limitations of the USB vs. Ethernet cameras, as well as the capabilities of the system as a whole, in the following areas:
  • Space requirements: How much space is required for a given capture volume (i.e., can a number of cameras be mounted throughout a room, turning the entire room into usable volume, or must the cameras be placed some distance outside the capture volume)?
  • Maximum volume: What is the largest capture volume that can be obtained (and with which cameras) using each configuration:
    - Single capture region: all the cameras calibrated and tracking together.
    - Multiple camera groups: sets of cameras in overlapping groups that have been aligned side by side to cover a greater area.
  • Update frequency/latency: Including software and network delays, what total latency can be expected from the system?
  • Number of simultaneous targets: How many unique targets (robots) can be tracked at the same time?
I recall reading in some older forum posts that new developments were underway in several of the areas mentioned above, including the use of multiple camera groups to expand the maximum capture volume. What recent developments have been made, or are expected, in these areas?

Thank you for your assistance.

ypapelis
Posts: 9
Joined: Mon Nov 08, 2010 8:44 am

Re: Using "Tracking Tools" for Robotics Research?

Post by ypapelis » Thu Jun 30, 2011 4:16 pm

SeanGT,

I can provide some feedback as one of the uses of our systems is for the exact same application, i.e., providing indoor localization for small unmanned vehicles (both ground and aerial).

Unlike skeleton tracking, where even in small volumes a large number of cameras can provide improvements, for rigid bodies the capture volume is generally constrained by the resolution of your cameras and the size of the markers. At some distance, even large markers and active markers end up with too small a footprint on the camera to be trackable. The basic recommendation is to create a square of about 20x20 feet, which gives you an effective capture footprint of about 10x10 feet. For us, that is nowhere near enough to provide an effective sandbox within which to reliably simulate indoor GPS. We have tried spacing the cameras out, and you can gain some useful space, but much beyond 25 feet of distance tracking becomes erratic (keep in mind we only have 12 cameras, and they are not the latest high-res ones).

Another issue to consider for aerial vehicles is that the usable space is not a rectangular box; it is more like a squashed sphere, so as you fly higher, the footprint narrows.

Now, I believe it is possible to use multiple systems to create overlapping spaces, calibrate them relative to each other, and cover a wider area, but we have not tried that. Even then, you will not escape the fact that the effective footprint of your capture space narrows as you get higher. And my understanding is that the current system maxes out at 24 cameras, so unless you go with multiple systems, you would be limited.
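To see why the footprint narrows with altitude, here is a rough geometric sketch that models each camera as a simple view cone and counts how many cameras can see a point at a given height. The camera count, half-FOV, ring radius, mounting height, and aim point are made-up illustrative assumptions, not OptiTrack specifications.

```python
import math

def make_cameras(n=8, ring_radius=3.0, height=3.0, aim=(0.0, 0.0, 1.0)):
    """Place n cameras on a ring at the given height, all aimed at `aim`."""
    cams = []
    for i in range(n):
        a = 2 * math.pi * i / n
        pos = (ring_radius * math.cos(a), ring_radius * math.sin(a), height)
        axis = tuple(t - p for p, t in zip(pos, aim))  # viewing direction
        cams.append((pos, axis))
    return cams

def sees(cam, point, half_fov_deg=23.0):
    """True if `point` lies within the camera's (assumed) half-FOV cone."""
    pos, axis = cam
    v = tuple(q - p for p, q in zip(pos, point))
    dot = sum(a * b for a, b in zip(axis, v))
    na = math.sqrt(sum(a * a for a in axis))
    nv = math.sqrt(sum(b * b for b in v))
    if nv == 0:
        return True
    ang = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nv)))))
    return ang <= half_fov_deg

def footprint_radius(cams, altitude, min_cams=3, step=0.05):
    """Largest radius (sampled along +x; the ring is symmetric) at which a
    point at `altitude` is still seen by at least `min_cams` cameras."""
    r = 0.0
    while r <= 10.0:
        p = (r, 0.0, altitude)
        if sum(sees(c, p) for c in cams) < min_cams:
            return max(0.0, r - step)
        r += step
    return r

cams = make_cameras()
for z in (0.5, 1.0, 1.5, 2.0, 2.5):
    print(f"altitude {z} m -> usable radius {footprint_radius(cams, z):.2f} m")
```

With these toy numbers the usable radius shrinks steadily above the aim height, matching the "squashed sphere" behavior described in the post.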

Regarding the number of targets, we have tracked up to 7 ground vehicles with no problem, and I believe the system can support a much larger count. The trick is to create unique marker configurations and spread the markers as far apart on each vehicle as possible. Establishing a 'clean' origin is a bit of a pain, but you can eyeball it and make it work fine. And the accuracy is superb; it far exceeds any other localization method we have used (GPS, Cricket, sonar, etc.).
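One way to sanity-check the "unique marker configurations" advice: two rigid bodies are distinguishable when their sets of inter-marker distances differ by more than the system's tolerance. The sketch below compares two hypothetical 4-marker layouts; the 5 mm tolerance and the coordinates are made-up assumptions, and a real tracker uses its own matching logic.

```python
import itertools
import math

def pairwise_distances(markers):
    """Sorted list of distances (meters) between every pair of markers."""
    return sorted(
        math.dist(a, b) for a, b in itertools.combinations(markers, 2)
    )

def configs_distinct(m1, m2, tol=0.005):
    """True if the two marker layouts differ by more than `tol` (5 mm here)
    in at least one inter-marker distance, i.e. they should not be confused."""
    d1, d2 = pairwise_distances(m1), pairwise_distances(m2)
    if len(d1) != len(d2):
        return True
    return any(abs(a - b) > tol for a, b in zip(d1, d2))

# Two hypothetical 4-marker layouts: identical except one marker moved 2 cm.
bot_a = [(0, 0, 0), (0.10, 0, 0), (0, 0.08, 0), (0.05, 0.05, 0.06)]
bot_b = [(0, 0, 0), (0.10, 0, 0), (0, 0.08, 0), (0.05, 0.07, 0.06)]
print(configs_distinct(bot_a, bot_b))  # distinguishable layouts
print(configs_distinct(bot_a, bot_a))  # identical layouts
```

A check like this is handy when building out a fleet: verify every pair of vehicles before a capture session rather than discovering an ambiguity mid-experiment.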

Regarding frequency, the system broadcasts packets with all the data at a fairly high rate, I believe 100 Hz. We are currently doing a detailed investigation of the latency (we use a ground marker whose position the robot knows, and compare the delay between the robot reaching that position and the message with that location actually arriving on the robot), but that work is in progress. I do not think the lag would be more than a couple of frames plus UDP overhead: plenty fast for control loops on ground vehicles and possibly OK for flight vehicle control. In any serious application you would have to implement a Kalman filter anyway, so you may be able to get away with a bit of delay.
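On getting away with a couple of frames of delay: a minimal sketch of the idea, using a 1-D constant-velocity alpha-beta filter (a simplified cousin of the Kalman filter mentioned above) that smooths incoming 100 Hz position samples and extrapolates ahead by an assumed fixed latency. The gains, rates, and 20 ms delay are illustrative assumptions, not measured values.

```python
class AlphaBetaPredictor:
    """1-D constant-velocity filter: smooths position samples and can
    extrapolate ahead by a fixed latency. alpha/beta are smoothing gains."""

    def __init__(self, dt, alpha=0.5, beta=0.1):
        self.dt = dt
        self.alpha, self.beta = alpha, beta
        self.x = None   # position estimate
        self.v = 0.0    # velocity estimate

    def update(self, measured):
        if self.x is None:          # initialize on the first sample
            self.x = measured
            return self.x
        pred = self.x + self.v * self.dt        # predict one frame ahead
        resid = measured - pred                 # innovation
        self.x = pred + self.alpha * resid
        self.v = self.v + (self.beta / self.dt) * resid
        return self.x

    def extrapolate(self, latency):
        """Estimated position `latency` seconds after the last sample."""
        return self.x + self.v * latency

# Simulated 100 Hz stream: robot moving at 0.5 m/s along one axis.
f = AlphaBetaPredictor(dt=0.01)
for i in range(200):
    f.update(0.5 * i * 0.01)  # noise-free samples, for the sketch
# Compensate two frames (20 ms) of assumed pipeline delay:
print(f.extrapolate(0.02))
```

The same trick extends to 3-D poses by filtering each axis; on a real stream you would also have to handle dropped frames and measure the actual end-to-end delay, as the experiment above sets out to do.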

My final recommendation is to purchase a system and try it out. A minimal system is so cheap (relative to the alternatives) that it is probably cheaper to buy one and experiment than to spend too much time pondering. If possible, go for the highest-resolution cameras you can afford. And good luck!

Seth Steiling
Posts: 1366
Joined: Fri Jun 27, 2008 11:29 am
Location: Corvallis, Oregon

Re: Using "Tracking Tools" for Robotics Research?

Post by Seth Steiling » Thu Jun 30, 2011 5:18 pm

ypapelis wrote: ...The basic recommendation is to create a square of about 20x20 feet, which gives you an effective capture footprint of about 10x10 feet. For us, that is nowhere near enough to provide an effective sandbox within which to reliably simulate indoor GPS. We have tried spacing the cameras out, and you can gain some useful space, but much beyond 25 feet of distance tracking becomes erratic (keep in mind we only have 12 cameras, and they are not the latest high-res ones)...
Unfortunately, we haven't been able to test the max volume yet with the new S250e, but it should be much, much larger than what even a 24-camera V100:R2 system can offer. For starters, you can scale the camera count much higher, potentially up to 96 cameras in one volume. Additionally, each S250e camera offers more 2D coverage and up to 50% increased range compared to the R2 system. The net result is a much larger capture-volume-to-setup-area ratio.
...Now, I believe it is possible to use multiple systems to create overlapping spaces, calibrate them relative to each other, and cover a wider area, but we have not tried that. Even then, you will not escape the fact that the effective footprint of your capture space narrows as you get higher. And my understanding is that the current system maxes out at 24 cameras, so unless you go with multiple systems, you would be limited...

You can calibrate multiple groups of overlapping volumes, but at this time the calibration does not support actually stitching them together. But, as I mentioned above, if you go with S250e cameras you can actually go way higher than 24, to create some pretty massive volumes.
Marketing Manager
TrackIR | OptiTrack
