Working with two cameras

itzikgili
Posts: 3
Joined: Fri Dec 27, 2013 5:47 am

Working with two cameras

Post by itzikgili »

Hi all,
So far I have figured out how to work with the SDK: I get the x, y of the objects, and I have worked with the frame group and gotten frames and data from two cameras.

I am facing some issues:
1. I would like to compute combined data from the two cameras:
a. Where can I provide the position and orientation of the cameras?
b. How do I combine the data from the cameras, given the two frames in the frame group?

2. Is there a way to get the yaw, pitch, and roll of a rigid body with 3 markers on it (using 2 cameras)? I was trying to work with Vector and VectorProcessor but could not find enough documentation. (A sketch of the math I have in mind follows after this list.)


3. How can I write custom text on the output screen?
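
For question 2, this is roughly the math I have in mind once the three marker positions are known in one world frame. It is only a sketch in plain C++ with no SDK calls, and the marker layout and the Z-Y-X angle convention are assumptions on my part:

#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static Vec3 normalize(Vec3 v)
{
    double n = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/n, v.y/n, v.z/n };
}

// Yaw/pitch/roll (Z-Y-X convention, radians) of the frame defined by three
// non-collinear marker positions p1, p2, p3 given in one world coordinate system.
void MarkersToYawPitchRoll(Vec3 p1, Vec3 p2, Vec3 p3,
                           double &yaw, double &pitch, double &roll)
{
    Vec3 ex = normalize(sub(p2, p1));            // body X axis: p1 -> p2
    Vec3 ez = normalize(cross(ex, sub(p3, p1))); // body Z axis: normal of the marker plane
    Vec3 ey = cross(ez, ex);                     // body Y axis completes the right-handed frame

    // The body axes are the columns of the rotation matrix R = [ex ey ez].
    yaw   = std::atan2(ex.y, ex.x);
    pitch = std::asin(-ex.z);
    roll  = std::atan2(ey.z, ez.z);
}

This assumes the three marker positions have already been triangulated into 3D, which is really part of question 1.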

thank you!
Itzik
NaturalPoint-Dustin
Posts: 609
Joined: Tue Mar 19, 2013 5:03 pm

Re: Working with two cameras

Post by NaturalPoint-Dustin »

Dear Itzik,

I will research this for you. I will try to have a response for you shortly. Most of my resources are on vacation this week.

Best Regards,
Dustin
Technical Support Engineer
OptiTrack | TrackIR | SmartNav
itzikgili
Posts: 3
Joined: Fri Dec 27, 2013 5:47 am

Re: Working with two cameras

Post by itzikgili »

Ok, thanks for the effort.
Looking forward to hearing from you soon.

Have a great weekend. :D
beckdo
Posts: 520
Joined: Tue Jan 02, 2007 2:02 pm

Re: Working with two cameras

Post by beckdo »

The vector processing algorithm is a single-camera algorithm. As a result, it can solve the position and orientation of the vector / track clip with a single camera.

Unfortunately, there is no out-of-the-box implementation that will combine results from multiple cameras. As you've probably found already, there's nothing preventing you from instantiating and running multiple vector solvers on multiple cameras. However, you're going to be on your own to combine the results.
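
To sketch what that could look like: you keep a pose for each camera yourself (measured or calibrated by hand, since the SDK has nowhere to store it), transform each camera's solved position into that shared world frame, and blend. This is only an illustration, not SDK functionality; the CameraPose struct and the straight average below are placeholders:

// Where a camera sits in the world. The Camera SDK does not store this;
// it has to be measured or calibrated by hand (these values are your own input).
struct CameraPose {
    double R[3][3]; // camera-to-world rotation
    double t[3];    // camera position in world coordinates
};

// Transform a point reported in camera coordinates into world coordinates.
static void CameraToWorld(const CameraPose &pose, const double pCam[3], double pWorld[3])
{
    for (int i = 0; i < 3; ++i)
        pWorld[i] = pose.R[i][0] * pCam[0] + pose.R[i][1] * pCam[1]
                  + pose.R[i][2] * pCam[2] + pose.t[i];
}

// Naive fusion: bring each camera's solved position into the world frame and average.
void CombineTwoCameraResults(const CameraPose &camA, const double posFromA[3],
                             const CameraPose &camB, const double posFromB[3],
                             double fused[3])
{
    double a[3], b[3];
    CameraToWorld(camA, posFromA, a);
    CameraToWorld(camB, posFromB, b);
    for (int i = 0; i < 3; ++i)
        fused[i] = 0.5 * (a[i] + b[i]);
}

A straight average is the simplest possible fusion; weighting the two cameras by distance or solver confidence would be the obvious next step.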
geekarch
Posts: 1
Joined: Thu Jan 09, 2014 12:01 am

Re: Working with two cameras

Post by geekarch »

I am also using two V100 cameras to track one marker. I want to triangulate the position of the marker in 3D.

I got a pointer from friends to use the triangulation function from OpenCV. I found an article on the matter, but it says that I need several things:

- the extrinsic parameters of the cameras: the difference in location and rotation between them
- the camera (intrinsic) matrices

Is there a way to get these parameters from the Camera SDK and/or by measuring them physically?

Also, can anyone give some pointers to another way of getting a reasonably accurate 3D position of a marker from the data captured by the cameras using the Camera SDK (for example, each camera can output the 2D position of the marker)?
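
To make the plan concrete, the OpenCV call I have in mind is cv::triangulatePoints. This is only a rough sketch; K1, K2, R, and t would have to come from my own calibration (for example cv::stereoCalibrate with a checkerboard, or careful physical measurement), since as far as I can tell the Camera SDK does not expose them:

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

// Triangulate one marker seen by two cameras. K1/K2 are the 3x3 camera
// (intrinsic) matrices and [R|t] the pose of camera 2 relative to camera 1,
// all CV_64F and all obtained from my own calibration, not from the Camera SDK.
cv::Point3d TriangulateMarker(const cv::Mat &K1, const cv::Mat &K2,
                              const cv::Mat &R,  const cv::Mat &t,
                              const cv::Point2d &pixel1, const cv::Point2d &pixel2)
{
    // Projection matrices: camera 1 is the origin of the result, camera 2 is offset by [R|t].
    cv::Mat P1 = K1 * cv::Mat::eye(3, 4, CV_64F);
    cv::Mat Rt;
    cv::hconcat(R, t, Rt);                       // 3x4 [R|t]
    cv::Mat P2 = K2 * Rt;

    // One 2D observation per camera (the per-camera marker centroid in pixels).
    std::vector<cv::Point2d> x1(1, pixel1), x2(1, pixel2);

    cv::Mat X;                                   // 4x1 homogeneous result
    cv::triangulatePoints(P1, P2, x1, x2, X);
    X.convertTo(X, CV_64F);
    return cv::Point3d(X.at<double>(0) / X.at<double>(3),
                       X.at<double>(1) / X.at<double>(3),
                       X.at<double>(2) / X.at<double>(3));
}

The pixel1/pixel2 inputs would be the per-camera 2D marker positions that the Camera SDK already reports.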

thanks