Discussion and Support for the OptiTrack, SmartNav and TrackIR brands by NaturalPoint

CameraSDK 2D data and UDP

by lnsundh » Fri Nov 20, 2015 7:51 am


I have a Prime 13 camera and want to track a single marker and send its data to another application. I would like to send the x and y positions of the marker over UDP to the other application, which is prepared for that. Can I make use of the UDP streaming functionality provided in the NatNet SDK in order for this to work?

Thanks in advance!
Posts: 4
Joined: Fri Nov 20, 2015 7:48 am

by steven.andrews » Mon Nov 23, 2015 12:19 pm

Hello lnsundh,

Thank you for reaching out to us regarding your question.

The NatNet SDK is used to connect to the real-time data stream that is broadcast by our Motive software. If you are working with a single camera, you are most likely working with the Camera SDK, which does not provide the same high-level functionality as Motive.

With the Camera SDK it is definitely possible for you to retrieve the X/Y coordinates of the marker. In order to broadcast this, however, you may need to generate your own XML and send it over UDP yourself.

If you require any further assistance with this, please feel free to open a ticket with us at

Best regards,
Steven Andrews
OptiTrack | Customer Support Engineer
NaturalPoint Employee
Posts: 411
Joined: Mon Jan 19, 2015 11:52 am

by lnsundh » Tue Dec 01, 2015 8:19 am

Thanks! I wrote my own program that sends the UDP data from one application to the other, and it works well. Do you know a way of identifying a marker? I noticed that other objects can show up that are not markers but just metal or light reflecting into the camera. I currently use a threshold on width, but I would like something more bulletproof.

Thanks for a great product!
Posts: 4
Joined: Fri Nov 20, 2015 7:48 am

by lnsundh » Wed Dec 09, 2015 3:47 am

I send the positions of a specific marker over UDP, but I noticed that it takes up to 7 ms to read each frame. Is there a way to speed it up? In my code I save the positions of a specific marker and send them over UDP. Note that I only have one camera, which is why I need to write my own code to send the data over UDP. Here is my code:

Code:
//== NaturalPoint 2010
//== Camera Library SDK Sample
//== This sample brings up a connected camera and displays its output frames.

#include <chrono>
#include <string>
#include <winsock2.h>
#pragma comment(lib,"ws2_32.lib") //Winsock Library

#define SERVER ""   //ip address of udp server
#define BUFLEN 512  //Max length of buffer
#define PORT 3000   //The port on which to listen for incoming data

#ifdef WIN32
#include "supportcode.h"       //== Boiler-plate code for application window init ===---
#include "SDL/SDL.h"
#include "lock.h"

#include "cameralibrary.h"     //== Camera Library header file ======================---
using namespace CameraLibrary;
using namespace std::chrono;

int main(int argc, char* argv[])
{
   struct sockaddr_in si_other;
   int s, slen = sizeof(si_other);
   char buf[BUFLEN];
   char message[BUFLEN];
   WSADATA wsa;
   bool showDisplay = true;
   bool sendData = true;

   //Initialise winsock
   printf("\nInitialising Winsock...");
   if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
      printf("Failed. Error Code : %d", WSAGetLastError());

   //create socket
   if ((s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP)) == SOCKET_ERROR)
      printf("socket() failed with error code : %d", WSAGetLastError());

   //setup address structure
   memset((char *)&si_other, 0, sizeof(si_other));
   si_other.sin_family = AF_INET;
   si_other.sin_port = htons(PORT);
   si_other.sin_addr.S_un.S_addr = inet_addr(SERVER);

   //== For OptiTrack Ethernet cameras, it's important to enable development mode if you
   //== want to stop execution for an extended time while debugging without disconnecting
   //== the Ethernet devices.  Let's do that now:

   CameraLibrary_EnableDevelopment();

   //== Initialize Camera SDK ==--

   CameraManager::X();

   //== At this point the Camera SDK is actively looking for all connected cameras and will
   //== initialize them on its own.  Block until at least one camera is initialized:

   CameraManager::X().WaitForInitialization();

   //== Get a connected camera ================----

   Camera *camera = CameraManager::X().GetCamera();

   //== If no device connected, pop a message box and exit ==--

   if (camera == 0)
   {
      MessageBox(0, "Please connect a camera", "No Device Connected", MB_OK);
      return 1;
   }

   //== Determine camera resolution to size application window ==----
   int cameraWidth  = camera->Width();
   int cameraHeight = camera->Height();

   //== Open the application window =============================----
   if (!CreateAppWindow("Camera Library SDK - Sample", cameraWidth, cameraHeight, 32, gFullscreen))
      return 0;

   //== Create a texture to push the rasterized camera image ====----

   //== We're using textures because it's an easy & cpu-light
   //== way to utilize the 3D hardware to display camera
   //== imagery at high frame rates

   Surface  Texture(cameraWidth, cameraHeight);
   Bitmap * framebuffer = new Bitmap(cameraWidth, cameraHeight, Texture.PixelSpan() * 4,
                              Bitmap::ThirtyTwoBit, Texture.GetBuffer());

   //== Set Video Mode ==--

   //== The Threshold setting determines the minimum brightness a pixel must have to be
   //== considered part of a 2D object; all pixels below it are ignored.  Raising it helps
   //== filter light interference from non-markers; lowering it lets less-visible markers
   //== (small, worn, or distant) be seen by the camera.
   camera->SetThreshold(100);

   //== The Exposure value controls how long the shutter stays open per frame.  Raising it
   //== lets in more light and helps small or dim markers, but too high an Exposure can
   //== merge or blur adjacent markers, which hurts tracking quality.
   camera->SetExposure(1);

   //== Start camera output ==--

   camera->Start();

   //== Turn on some overlay text so it's clear things are     ===---
   //== working even if there is nothing in the camera's view. ===---

   camera->SetTextOverlay(true);

   //== Ok, start main loop.  This loop fetches and displays   ===---
   //== camera frames.                                         ===---

   showDisplay = false;
   std::string data = "0:0";
   std::string prev_data = "0:0";

   while (1)
   {
      //== Fetch a new frame from the camera ===---

      Frame *frame = camera->GetFrame();

      if (frame)
      {
         int numOfObjects = frame->ObjectCount();

         //== Display Camera Image ============--
         if (showDisplay)
         {
            frame->Rasterize(framebuffer);

            if (!DrawGLScene(&Texture))
               break;

            //== Escape key to exit application ==--

            if (keys[VK_ESCAPE])
               break;
         }

         //== Timestamp the frame in milliseconds ==--
         milliseconds ms = duration_cast<milliseconds>(
            steady_clock::now().time_since_epoch());
         auto millis = ms.count();

         //== Default payload when no object passes the width filter ==--
         data = "0#0:0:0;" + std::to_string(millis);

         if (numOfObjects > 1)
         {
            for (int i = 0; i < frame->ObjectCount(); i++)
            {
               cObject *obj = frame->Object(i);
               int width = obj->Width();
               if (width > 20)
               {
                  float x = (obj->X() - 640) / 10;
                  float y = (obj->Y() - 510) / 10;
                  data = std::to_string(numOfObjects) + "#" + std::to_string(x)
                       + ":" + std::to_string(y) + ";" + std::to_string(millis);
               }
            }
         }
         else if (numOfObjects == 1)
         {
            cObject *obj1 = frame->Object(0);
            int width = obj1->Width();
            float x = (obj1->X() - 640) / 10;
            float y = (obj1->Y() - 510) / 10;
            if (width > 20)
            {
               data = std::to_string(numOfObjects) + "#" + std::to_string(x)
                    + ":" + std::to_string(y) + ";" + std::to_string(millis);
            }
         }

         //== Release frame =========--
         frame->Release();
      }

      if (!showDisplay)
      {
         strcpy_s(message, data.c_str());

         //send the message
         if (sendto(s, message, strlen(message), 0, (struct sockaddr *)&si_other, slen) == SOCKET_ERROR)
            printf("sendto() failed with error code : %d", WSAGetLastError());

         //printf("Data: %s\n", message);
         //clear the buffer by filling null, it might have previously received data
         //memset(buf, '\0', BUFLEN);
      }

      // Sleep(2);

      //== Service Windows Message System ==--

      if (!PumpMessages())   //== helper from supportcode.h
         break;
   }

   //== Close window ==--

   CloseWindow();

   //== Release camera ==--

   camera->Release();

   //== Shutdown Camera Library ==--

   CameraManager::X().Shutdown();

   //== Clean up Winsock ==--

   closesocket(s);
   WSACleanup();

   //== Exit the application.  Simple! ==--

   return 1;
}
#endif

Posts: 4
Joined: Fri Nov 20, 2015 7:48 am

by steven.andrews » Wed Dec 09, 2015 11:43 am

Hi lnsundh,

Where are you seeing the latency, in the reading of the frames or on the UDP receiving side?

It would be useful to know how you are determining this latency.

Steven Andrews
OptiTrack | Customer Support Engineer
NaturalPoint Employee
Posts: 411
Joined: Mon Jan 19, 2015 11:52 am
