Camera sdk tracking wrong, why?
Hello friends!!
I'm using the Camera SDK for my project. I have two cameras to detect the position of my track clip (one on the left and one on the right).
I press recenter and start tracking the position.
To my surprise, when I move straight forward, the x values from the two cameras diverge: one drifts to the right and the other to the left. When I move backwards the same thing happens, but reversed: the camera that drifted left now drifts right, and vice versa.
Any idea?
thanks!
Re: Camera sdk tracking wrong, why?
First off, the Camera SDK is tracking the track clip properly. What you need to realize is that the vector module tracks a track clip with respect to the view from a single camera. As a result, the position it reports for the track clip is relative to that camera's position.
If you track the track clip from multiple cameras simultaneously, each will report the track clip's position correctly; however, each position will be relative to its own camera. So you can't compare the output from one camera to the output from another, because the results are expressed in two different coordinate systems.
What are you trying to accomplish by tracking a track clip with two separate cameras?
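A minimal sketch of why the two streams disagree, assuming each camera has its own pose in a shared world frame (the `Point3`/`CameraPose` types below are hypothetical stand-ins, not Camera SDK types): the same world point, expressed in each camera's own frame, produces different coordinates.

```cpp
#include <cassert>

// Hypothetical stand-in types -- not Camera SDK types.
struct Point3 { double x, y, z; };

// A camera pose: camera-to-world rotation (row-major 3x3) and
// the camera's position in world coordinates.
struct CameraPose { double R[3][3]; Point3 t; };

// Express a world-space point in one camera's frame: p_cam = R^T * (p_world - t)
Point3 worldToCamera(const CameraPose& cam, const Point3& p) {
    Point3 d{p.x - cam.t.x, p.y - cam.t.y, p.z - cam.t.z};
    return Point3{
        cam.R[0][0]*d.x + cam.R[1][0]*d.y + cam.R[2][0]*d.z,
        cam.R[0][1]*d.x + cam.R[1][1]*d.y + cam.R[2][1]*d.z,
        cam.R[0][2]*d.x + cam.R[1][2]*d.y + cam.R[2][2]*d.z};
}
```

Two cameras that both look straight ahead but sit a metre apart will report the same point with x values a metre apart; each output is per-camera, not a shared world frame, so the raw numbers cannot be compared directly.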
Re: Camera sdk tracking wrong, why?
My idea is that by putting a camera in every corner, I extend the field of view available for tracking. When one camera loses the track clip, another can continue tracking it. To make this work I have to account for the tilt of the cameras, so I thought I could compensate by changing the cameras' coordinate systems.
Any ideas how to solve my problem?
thanks!
Re: Camera sdk tracking wrong, why?
Since each vector output is in the camera's coordinate system, you'd need to figure out the relationship between them.
Any time the track clip is tracked by more than one camera, you should be able to compute a transformation matrix that, when applied to one result, puts it in the coordinate system of the other. You're essentially calculating the cameras' relative position & orientation in the form of a transformation matrix.
Not trivial, but not too hard. However, once this was all working there would be other concerns. What happens when more than one camera outputs what it thinks is a valid result, but it doesn't agree with the results from the other cameras? What is the best way to combine the results from each camera into a single best result?
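As a sketch of that relationship (assuming each camera's pose in a common world frame is known; `V3`/`M3` are hypothetical helper types, not SDK ones), a point reported by camera B can be re-expressed in camera A's coordinate system:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal types -- not Camera SDK types.
struct V3 { double v[3]; };
struct M3 { double m[3][3]; };

V3 mul(const M3& A, const V3& x) {
    V3 r{{0, 0, 0}};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r.v[i] += A.m[i][j] * x.v[j];
    return r;
}

M3 transpose(const M3& A) {
    M3 T;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            T.m[i][j] = A.m[j][i];
    return T;
}

// Map a point from camera B's frame into camera A's frame.
// Ra/Rb are camera-to-world rotations, ta/tb camera positions in the world.
V3 bFrameToAFrame(const M3& Ra, const V3& ta,
                  const M3& Rb, const V3& tb, const V3& pB) {
    V3 pW = mul(Rb, pB);                        // B's frame -> world frame
    for (int i = 0; i < 3; ++i) pW.v[i] += tb.v[i];
    for (int i = 0; i < 3; ++i) pW.v[i] -= ta.v[i];
    return mul(transpose(Ra), pW);              // world frame -> A's frame
}
```

The combined rotation Ra^T·Rb and offset Ra^T·(tb - ta) is exactly the transformation matrix described above; in practice it would have to be estimated, e.g. from simultaneous observations of the same track clip in both cameras.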
Re: Camera sdk tracking wrong, why?
OK, I used a rotation matrix to rotate each camera's coordinate system by 45 degrees. The problem is that rotating by that angle also changes the reported position of the track clip: even in my new coordinate axes, moving straight forward still shows up as sideways motion.
thanks for your help!
Re: Camera sdk tracking wrong, why?
A full transformation matrix is required, not just a rotation matrix.
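A sketch of the difference (the 4x4 layout here is a generic homogeneous transform, not an SDK type): a rotation-only matrix pivots points about the origin, while a full transform also carries the translation between the two coordinate origins.

```cpp
#include <cassert>
#include <cmath>

// Apply a 4x4 homogeneous transform (row-major) to a 3D point.
// The upper-left 3x3 block is the rotation; the last column is the translation.
void applyTransform(const double T[4][4], const double p[3], double out[3]) {
    for (int i = 0; i < 3; ++i)
        out[i] = T[i][0]*p[0] + T[i][1]*p[1] + T[i][2]*p[2] + T[i][3];
}
```

With a 90-degree rotation about y, the rotation-only transform maps (1,0,0) to (0,0,-1); the same rotation plus a translation of (0,0,5) maps it to (0,0,4). A camera that is rotated *and* displaced needs both parts to be reconciled with another camera's frame.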
Re: Camera sdk tracking wrong, why?
Could you explain that in more detail?
Thanks for all the help you're giving me; I was pretty lost.
Re: Camera sdk tracking wrong, why?
This is my code, in case it clarifies anything:
I want to mount a camera at a certain angle and calculate the position in my new coordinate system:
[img:center]http://mural.uv.es/maigon2/cameras2.JPG[/img]
Code: Select all
ConfigureCamera(){
    if(camera == 0)
    {
        printf("\n\n Please connect a camera ");
        return;
    }

    //== Determine camera resolution to size application window ==----
    cameraWidth  = camera->Width();
    cameraHeight = camera->Height();

    //== Create a texture to push the rasterized camera image ====----
    //== We're using textures because it's an easy & CPU-light
    //== way to utilize the 3D hardware to display camera
    //== imagery at high frame rates.
    texture = new Surface(cameraWidth, cameraHeight);
    framebuffer = new Bitmap(cameraWidth, cameraHeight, texture->PixelSpan()*4,
                             Bitmap::ThirtyTwoBit, texture->GetBuffer());

    //== Set video mode ==--
    camera->SetVideoType(PrecisionMode);

    //== Start camera output ==--
    camera->Start();

    //== Disable the text overlay and configure the imager ==--
    camera->SetTextOverlay(false);
    camera->SetIntensity(200);
    camera->SetExposure(3);

    vec = cModuleVector::Create();
    vecprocessor = new cModuleVectorProcessing();

    Core::DistortionModel lensDistortion;
    camera->GetDistortionModel(lensDistortion);

    //== Plug distortion into vector module ==--
    cVectorSettings vectorSettings;
    vectorSettings = *vec->Settings();
    vectorSettings.Arrangement = cVectorSettings::VectorClip;
    vectorSettings.Enabled = true;

    cVectorProcessingSettings vectorProcessorSettings;
    vectorProcessorSettings = *vecprocessor->Settings();
    vectorProcessorSettings.Arrangement = cVectorSettings::VectorClip;
    vectorProcessorSettings.ShowPivotPoint = false;
    vectorProcessorSettings.ShowProcessed = false;
    vectorProcessorSettings.ScaleRotationYaw   = 1.0;
    vectorProcessorSettings.ScaleRotationPitch = 1.0;
    vectorProcessorSettings.ScaleRotationRoll  = 1.0;
    vectorProcessorSettings.ScaleTranslationX  = 1.0;
    vectorProcessorSettings.ScaleTranslationY  = 1.0;
    vectorProcessorSettings.ScaleTranslationZ  = 1.0;
    vecprocessor->SetSettings(vectorProcessorSettings);

    //== Plug in the focal length (in mm) by converting it from pixels -> mm ==--
    vectorSettings.ImagerFocalLength = (lensDistortion.HorizontalFocalLength /
        ((float) camera->PhysicalPixelWidth())) * camera->ImagerWidth();
    vectorSettings.ImagerHeight = camera->ImagerHeight();
    vectorSettings.ImagerWidth  = camera->ImagerWidth();
    vectorSettings.PrincipalX   = camera->PhysicalPixelWidth()/2;
    vectorSettings.PrincipalY   = camera->PhysicalPixelHeight()/2;
    vectorSettings.PixelWidth   = camera->PhysicalPixelWidth();
    vectorSettings.PixelHeight  = camera->PhysicalPixelHeight();
    vec->SetSettings(vectorSettings);
    return;
}
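The `ImagerFocalLength` line above converts the calibrated focal length from pixel units to the imager's physical units. As a standalone sketch (the function name is mine, not an SDK call):

```cpp
#include <cassert>
#include <cmath>

// f_mm = (f_px / sensor_width_px) * sensor_width_mm
// i.e. scale the pixel-space focal length by the physical size of one pixel.
double focalPixelsToMM(double focalPx, double sensorWidthPx, double sensorWidthMM) {
    return (focalPx / sensorWidthPx) * sensorWidthMM;
}
```

For example, an 800-pixel focal length on an imager 640 pixels and 4.8 mm wide gives (800/640)*4.8 = 6.0 mm.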
Code: Select all
GetAndProcessFrame(){
    //== Fetch a new frame from the camera ===---
    objectDetected = false;
    frame = camera->GetFrame();

    Core::DistortionModel lensDistortion;
    camera->GetDistortionModel(lensDistortion);

    if(frame)
    {
        //== We've received a new frame; have the Camera Library
        //== rasterize the camera's image into our texture.
        frame->Rasterize(framebuffer);
        frameOK = true;

        vec->BeginFrame();
        printf("\n\n Detected %d objects \n\n ", frame->ObjectCount());

        //== Push at most three markers; iterating past ObjectCount()
        //== would access invalid objects.
        int markerCount = frame->ObjectCount() < 3 ? frame->ObjectCount() : 3;
        for(int i=0; i<markerCount; i++)
        {
            cObject *obj = frame->Object(i);
            float xp = obj->X();
            float yp = obj->Y();
            Core::Predistort2DPoint(lensDistortion, xp, yp);
            vec->PushMarkerData(xp, yp, obj->Area(), obj->Width(), obj->Height());
        }
        vec->Calculate();
        vecprocessor->PushData(vec);

        if(frame->ObjectCount() >= 3){
            objectDetected = true;
        }
    }else{
        frameOK = false;
    }
}
Code: Select all
DrawPosition(){
    //== Draw the coordinate axes ==--
    glEnable(GL_BLEND);
    glColor4f(1,1,1,0.3f);
    glBegin(GL_LINES);
        glVertex3f(10,0,0);  glVertex3f(-10,  0,  0);
        glVertex3f(0,10,0);  glVertex3f(  0,-10,  0);
        glVertex3f(0,0,10);  glVertex3f(  0,  0,-10);
    glEnd();

    if(vecprocessor->MarkerCount() > 0)
    {
        //== One color per camera ==--
        if(numberCamera == 0)
            glColor3f(0,1,1);
        else
            glColor3f(1,0,1);

        //== Draw a small glyph at the tracked position ==--
        glBegin(GL_LINES);
            glVertex3f(x/200, y/200,      z/200);
            glVertex3f(x/200, (y+50)/200, (z+50)/200);
            glVertex3f(x/200, y/200,      z/200);
            glVertex3f(x/200, (y+50)/200, (z-50)/200);
            glVertex3f(x/200, (y+50)/200, (z+50)/200);
            glVertex3f(x/200, y/200,      z/200);
            glVertex3f(x/200, (y+50)/200, (z+50)/200);
            glVertex3f(x/200, (y+50)/200, (z-50)/200);
            glVertex3f(x/200, (y+50)/200, (z-50)/200);
            glVertex3f(x/200, y/200,      z/200);
            glVertex3f(x/200, (y+50)/200, (z-50)/200);
            glVertex3f(x/200, (y+50)/200, (z+50)/200);
        glEnd();
    }
    return;
}
Code: Select all
SetPosition(){
    Vec3 position = Vec3();
    double angle;

    //== Mounting angle (degrees) for each camera; both zero for now ==--
    if(numberCamera == 0){
        angle = 0;
    }else{
        angle = 0;
    }
    angle = angle * 3.14159 / 180;   // degrees -> radians

    vecprocessor->GetPosition(x, y, z);
    Vec3 readPosition = Vec3(x, y, z);

    //== Rotate the reported position into the new coordinate system ==--
    Mat33 matrixProduct = Rzyx(0.0, angle, 0.0);
    position = matrixProduct * readPosition;
    x = position.v[0];
    y = position.v[1];
    z = position.v[2];

    printf("\n\n camera %d.- x %.3f , y %.3f , z %.3f ", numberCamera, x, y, z);
    return;
}
Code: Select all
Mat33 Rzyx(double tx, double ty, double tz)
{
    Mat33 mx, my, mz, mr, maux;
    mx = Rx(tx);
    my = Ry(ty);
    mz = Rz(tz);
    maux = my * mx;
    mr = mz * maux;   // mr = Rz * Ry * Rx
    return mr;
}
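For reference, a self-contained version of the same Rz·Ry·Rx composition (plain-C++ stand-ins for the `Mat33`/`Rx`/`Ry`/`Rz` helpers in the snippet above, which are the poster's own code):

```cpp
#include <cassert>
#include <cmath>

struct Mat33 { double m[3][3]; };

Mat33 matmul(const Mat33& A, const Mat33& B) {
    Mat33 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C.m[i][j] += A.m[i][k] * B.m[k][j];
    return C;
}

// Elementary rotations (angles in radians).
Mat33 Rx(double t) { double c = std::cos(t), s = std::sin(t);
    return Mat33{{{1,0,0},{0,c,-s},{0,s,c}}}; }
Mat33 Ry(double t) { double c = std::cos(t), s = std::sin(t);
    return Mat33{{{c,0,s},{0,1,0},{-s,0,c}}}; }
Mat33 Rz(double t) { double c = std::cos(t), s = std::sin(t);
    return Mat33{{{c,-s,0},{s,c,0},{0,0,1}}}; }

// Same composition order as the snippet above: Rz * (Ry * Rx).
Mat33 Rzyx(double tx, double ty, double tz) {
    return matmul(Rz(tz), matmul(Ry(ty), Rx(tx)));
}
```

Note that any such matrix rotates about the origin only; it cannot, on its own, account for two cameras sitting in different places, which is why a full transform with a translation part is needed.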
Re: Camera sdk tracking wrong, why?
Nothing? Any ideas?
thanks a lot!