Previously, I tried to cover the whole volume created by the intersection of each camera's viewing volume. This time, I focused only on my 3D grid space, i.e. the cube in which tracking will occur in my application. I don't care much about tracking outside this space.
Is this the way I should wave my wand: focusing only on the space where tracking will occur in the application, or should I try to cover the whole space created by the intersection of the cameras' viewing volumes? I am asking because I want to make sure I did not just get lucky this time.
Also, do you have any suggestions for improving the error further? What is the best accuracy (in terms of error) I should expect to achieve from this 3-camera setup?
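
For reference, this is roughly how I plan to compare the two calibration runs. It is only a sketch: the function name, the wand-length metric, the restriction to one camera pair, and the axis-aligned cube bounds are all assumptions for illustration, not my actual pipeline.

```python
# Sketch: RMS wand-length error, restricted to points inside the tracking cube.
# Assumptions: two cameras with known 3x4 projection matrices P1 and P2,
# matched 2D positions of the two wand markers per frame, a known wand length,
# and an axis-aligned tracking cube given by (cube_min, cube_max).
import numpy as np
import cv2

def wand_length_error(P1, P2, pts1_a, pts2_a, pts1_b, pts2_b,
                      wand_length, cube_min, cube_max):
    """Return (RMS wand-length error, number of frames used), keeping only
    frames where both reconstructed markers lie inside the tracking cube."""
    def triangulate(p1, p2):
        # cv2.triangulatePoints expects 2xN arrays and returns homogeneous 4xN
        X = cv2.triangulatePoints(P1, P2, p1.T.astype(float), p2.T.astype(float))
        return (X[:3] / X[3]).T                      # Nx3 Euclidean points

    A = triangulate(pts1_a, pts2_a)                  # wand marker A per frame
    B = triangulate(pts1_b, pts2_b)                  # wand marker B per frame

    inside = np.all((A >= cube_min) & (A <= cube_max) &
                    (B >= cube_min) & (B <= cube_max), axis=1)
    lengths = np.linalg.norm(A[inside] - B[inside], axis=1)
    rms = np.sqrt(np.mean((lengths - wand_length) ** 2))
    return rms, int(inside.sum())
```

The idea is that if the second calibration (waving only inside the cube) really is better and not just lucky, this number should stay low across several independent recordings of the wand inside the cube.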
