fcvTrackLKOpticalFlowu8 features not stable when phone moves

MK00000010
Join Date: 22 Jul 13
Posts: 1
Posted: Mon, 2013-07-22 11:16

I have put together a KLT optical flow tracker using fcvTrackLKOpticalFlowu8. I use Harris corners to generate initial features in a masked section of the buffer. When the phone doesn't move, tracking is fine and the features are stable. As soon as I move the phone, the great majority of tracked features disperse across the preview screen. Is this behavior normal? If so, how can I improve the stability of the tracker?

Here's my code snippet:

// Extract initial corners (Harris, restricted to the mask)
fcvCornerHarrisInMasku8((uint8_t*) pCurImg, trackerState.imgWidth,
                        trackerState.imgHeight,
                        trackerState.imgWidth,      // stride
                        7,                          // border
                        trackerState.corners,
                        1000,                       // max corners
                        &trackerState.numCorners,
                        5,                          // threshold
                        trackerState.mask,
                        trackerState.maskWidth, trackerState.maskHeight);

// Build the image pyramid for the current frame
int pyramidSuccess = fcvPyramidCreateu8(pCurImg, trackerState.imgWidth,
                                        trackerState.imgHeight, 4,
                                        trackerState.curPyramid);

// Calculate the Sobel gradient pyramids
int sobelSuccess = fcvPyramidSobelGradientCreatei8(trackerState.curPyramid,
                                                   trackerState.dxPrevPyr,
                                                   trackerState.dyPrevPyr, 4);

// Reset the per-feature status flags
for (int i = 0; i < 1000; i++) {
    trackerState.featureStatus[i] = 0;
}

// Track features from the previous frame into the current one
fcvTrackLKOpticalFlowu8(trackerState.prevFrameAligned, pCurImg, w, h,
                        trackerState.prevPyramid, trackerState.curPyramid,
                        trackerState.dxPrevPyr, trackerState.dyPrevPyr,
                        trackerState.prevFeatureXY, trackerState.currentFeatureXY,
                        trackerState.featureStatus, trackerState.numCorners,
                        7, 7, 7, 4,   // window w/h, max iterations, pyramid levels
                        0, 0, 0, 0);  // tracking thresholds and lighting flag
                                      // (see the function header)

 

sourav3515
Join Date: 9 Jan 13
Posts: 6
Posted: Thu, 2014-08-21 22:52

Hi, 

I am facing the exact same problem. 

"When phone doesn't move, the tracking is fine and features are stable. As soon as I move the phone, a great majority of tracked features disperse across the preview screen. Is this behavior normal? "

Please help.

jeff4s Moderator
Join Date: 4 Nov 12
Posts: 106
Posted: Fri, 2014-08-22 19:09

Please follow the function header to set up the parameters for fcvTrackLKOpticalFlowu8_v2, or share your code snippet for us to look into.
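For anyone landing on this thread: the _v2 variant appears to take the two image pyramids directly (as fcvPyramidLevel_v2) and drop the separate Sobel gradient pyramids of the v1 call. The sketch below is an unverified reading of the fastcv.h header, with placeholder variable names; check the header in your SDK before copying, as the exact parameter list may differ.

// Sketch only; verify every parameter against your fastcv.h.
fcvTrackLKOpticalFlowu8_v2(prevFrame, curFrame,
                           w, h, stride,
                           prevPyr, curPyr,        // fcvPyramidLevel_v2 pyramids
                           featureXYIn, featureXYOut,
                           featureStatus, numFeatures,
                           7, 7,                   // window width x height
                           7,                      // max iterations
                           4);                     // pyramid levels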

Cheers,

-Jeff

sourav3515
Join Date: 9 Jan 13
Posts: 6
Posted: Sat, 2014-08-23 02:36

Hi,

First I detect the features in the first frame, which is a 1920x1080 grayscale image, using fcvGoodFeatureToTrack, and get the corners (x, y tuples) and the number of corners. I save this first frame as markerImg. The goal is to track only the centre 200x200 window of this first frame in the subsequent frames.

After this, I use fcvTrackLKOpticalFlowu8, passing in the saved first frame and each subsequent frame to track the corners found in the first frame. Below is the code snippet:

////////////////////////////////////////////////////////////////////////////////////////////

int windowWidth, windowHeight, maxIterations, m;
int x1, y1, x2, y2;
uint32_t nPyramidLevels;

float    featureXY_out[MAX_CORNERS_TO_DETECT*2];
int32_t  featureStatus[MAX_CORNERS_TO_DETECT];
uint32_t corners_out[MAX_CORNERS_TO_DETECT*2];

windowWidth    = 7;
windowHeight   = 7;
maxIterations  = 7;
nPyramidLevels = 4;

// memset counts bytes, so each length must be scaled by the element size
memset(featureXY_out, 0, MAX_CORNERS_TO_DETECT * 2 * sizeof(float));
memset(corners_out,   0, MAX_CORNERS_TO_DETECT * 2 * sizeof(uint32_t));
memset(featureStatus, 0, MAX_CORNERS_TO_DETECT * sizeof(int32_t));

fcvPyramidLevel* src1Pyr = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidLevel* src2Pyr = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidLevel* dx1Pyr  = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidLevel* dy1Pyr  = new fcvPyramidLevel[nPyramidLevels];

src1Pyr[0].width  = w;
src1Pyr[0].height = h;
src2Pyr[0].width  = w;
src2Pyr[0].height = h;

fcvPyramidAllocate(src1Pyr, src1Pyr[0].width, src1Pyr[0].height, 4, nPyramidLevels, 0);
fcvPyramidAllocate(src2Pyr, src2Pyr[0].width, src2Pyr[0].height, 4, nPyramidLevels, 0);
fcvPyramidAllocate(dx1Pyr,  src1Pyr[0].width, src1Pyr[0].height, 4, nPyramidLevels, 1);
fcvPyramidAllocate(dy1Pyr,  src1Pyr[0].width, src1Pyr[0].height, 4, nPyramidLevels, 1);

fcvPyramidCreateu8(markerImg, w, h, nPyramidLevels, src1Pyr);
fcvPyramidCreateu8(data,      w, h, nPyramidLevels, src2Pyr);
fcvPyramidSobelGradientCreatei8(src1Pyr, dx1Pyr, dy1Pyr, nPyramidLevels);

// Track the first-frame corners (featureXY_in) into the current frame (data)
fcvTrackLKOpticalFlowu8(markerImg, data, w, h, src1Pyr, src2Pyr, dx1Pyr, dy1Pyr,
                        featureXY_in, featureXY_out, featureStatus, featureLen,
                        windowWidth, windowHeight, maxIterations, nPyramidLevels,
                        0.5, 0.15, 0, 0);
 
/////////////////////////////////////////////////////////////////////////////////////////////////////
featureXY_in holds the corners detected in the first frame. We only pass the corners that lie in the centre 200x200 window; the remaining entries I set to the centre (w/2, h/2). I also modified the first frame so that every pixel value outside the centre 200x200 region is 0, because I want to track only the centre 200x200 pixels in the subsequent frames.
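For reference, zeroing everything outside the centre window can be done row by row. This is only a sketch, assuming a tightly packed w x h grayscale buffer (stride equal to width); the helper name is hypothetical.

#include <stdint.h>
#include <string.h>

// Zero all pixels outside the centre 200x200 window of a w x h frame.
void maskToCentreWindow(uint8_t* img, int w, int h)
{
    const int x0 = w/2 - 100, x1 = w/2 + 100;
    const int y0 = h/2 - 100, y1 = h/2 + 100;
    for (int y = 0; y < h; y++) {
        if (y < y0 || y >= y1) {
            memset(img + y*w, 0, w);            // row entirely outside the window
        } else {
            memset(img + y*w, 0, x0);           // left of the window
            memset(img + y*w + x1, 0, w - x1);  // right of the window
        }
    }
}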
 
Please let me know where I am going wrong.
 
Thanks 
Sourav
 
xzhong
Join Date: 7 Feb 13
Posts: 4
Posted: Mon, 2014-08-25 22:13

Hi MK00000010,

If you see tracked feature points dispersed across the preview frame, it is likely that LK OF failed to track them completely. LK optical flow generally has a limit on how far it can reliably track. In my experience, if the image is QVGA, with pyramid level = 4 and window size 7, LK OF tracks any motion of less than 30 pixels in each direction well. Intuitively, each pyramid level halves the apparent motion, so a 30-pixel shift shrinks to roughly 4 pixels at the coarsest of 4 levels, small enough for a 7x7 window. If your input image is 1080p, for example, you may want to resize the image to QVGA, or set pyramidLevel = 5 or 6.

My suspicion is that the camera motion is significant when you move the phone. Can you try moving the phone at a slower speed to see if the behavior persists? If the problem goes away at slower motion, I suggest you either increase the pyramid level or resize the image to a smaller size before trying faster phone motion; see the sketch below.
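For illustration, halving a 1080p luma plane twice with fcvScaleDownBy2u8 lands near QVGA scale. This is only a sketch: the helper name is hypothetical, and the fcvMemAlloc alignment value and tightly packed buffers are assumptions.

#include "fastcv.h"

// Hypothetical helper: 1920x1080 -> 960x540 -> 480x270 (near QVGA).
uint8_t* downscaleForLK(const uint8_t* src1080p,
                        unsigned int* outW, unsigned int* outH)
{
    const unsigned int w = 1920, h = 1080;
    uint8_t* half    = (uint8_t*) fcvMemAlloc((w/2) * (h/2), 16);
    uint8_t* quarter = (uint8_t*) fcvMemAlloc((w/4) * (h/4), 16);

    fcvScaleDownBy2u8(src1080p, w,   h,   half);    // 1920x1080 -> 960x540
    fcvScaleDownBy2u8(half,     w/2, h/2, quarter); // 960x540   -> 480x270

    fcvMemFree(half);
    *outW = w/4;
    *outH = h/4;
    return quarter;  // caller releases with fcvMemFree()
}

Detect and track on the downscaled buffer, then multiply the tracked coordinates by 4 to map them back onto the full-resolution preview.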

 

Thanks,

Xin

xzhong
Join Date: 7 Feb 13
Posts: 4
Posted: Mon, 2014-08-25 22:18

Hi Sourav,

I suggest you try the same thing: reduce your input frame size to around QVGA before you run LK optical flow. Also, it is more advisable to track the object from frame N-1 to frame N instead of from frame 0 to frame N; the appearance change and accumulated motion may otherwise be too large for optical flow to handle. A sketch of the frame-to-frame loop follows.
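This is only an outline of the N-1 to N pattern: the buffer and pyramid names are placeholders, the swap helpers are hypothetical, and the setup (allocation, feature detection, w, h, nPyramidLevels) is assumed to match the snippets earlier in the thread.

// Called once per camera frame: track from the previous frame into
// the current one, then promote the current state to "previous".
void onNewFrame(uint8_t* curFrame)
{
    // Pyramid for the newest frame; the previous frame's pyramid is
    // reused from the last iteration instead of being rebuilt.
    fcvPyramidCreateu8(curFrame, w, h, nPyramidLevels, curPyr);

    // Gradients are computed on the previous (source) pyramid.
    fcvPyramidSobelGradientCreatei8(prevPyr, dxPrevPyr, dyPrevPyr, nPyramidLevels);

    fcvTrackLKOpticalFlowu8(prevFrame, curFrame, w, h,
                            prevPyr, curPyr, dxPrevPyr, dyPrevPyr,
                            featuresPrev, featuresCur, featureStatus,
                            numFeatures, 7, 7, 7, nPyramidLevels,
                            0, 0, 0, 0);

    // Keep only survivors (featureStatus > 0) as next frame's input,
    // then swap so the current frame/pyramid becomes the previous one.
    swapBuffers(&prevFrame, &curFrame);    // hypothetical helpers
    swapPyramids(&prevPyr, &curPyr);
}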

I'm also curious: how did you choose to send only the corners in the centre 200x200 patch to LK OF?

Please let me know if these suggestions work.

Xin

sourav3515
Join Date: 9 Jan 13
Posts: 6
Posted: Fri, 2014-08-29 12:42

Hi xzhong,

It worked well after following your inputs, though keypoints are still lost when the device moves a bit faster.

To filter keypoints for a given window, I keep only the corners that lie in that window, copy their x, y values into a separate array sequentially, and update the corner count accordingly. So from then on, only the corners in that window are tracked.

sourav3515
Join Date: 9 Jan 13
Posts: 6
Posted: Wed, 2014-10-08 08:00

Hi Everyone,

I am trying to track the centre region (200x200) of the preview, which is 960x540, across subsequent frames.

First I detect the feature points, then filter out the ones that lie in the centre 200x200 region and update featureLen as well:

fcvGoodFeatureToTracku8(mBuffer,
                        mBufferWidth,
                        mBufferHeight,
                        0,                // stride (0 lets FastCV use the width)
                        5.0,              // minimum distance between corners
                        7,                // border
                        5.0,              // barrier (corner threshold)
                        newFeatures,
                        MAX_NUM_CORNERS,
                        &corners);

mBufferWidth and mBufferHeight are 960 and 540. I am using the above parameters, and MAX_NUM_CORNERS is defined as 1000.
 
int centreX = mBufferWidth >> 1;
int centreY = mBufferHeight >> 1;

uint32_t* refCorners = newFeatures;
int numMarkerCorners = (int) corners;
int z = 0;
mNumCornersDetected = 0;   // reset before refilling the filtered list
while (numMarkerCorners) {
    float x = (float) *refCorners++;
    float y = (float) *refCorners++;
    int tempX = (int) x;
    int tempY = (int) y;
    // Keep only the corners inside the centre 200x200 window
    if (tempX >= (centreX - 100) && tempX < (centreX + 100) &&
        tempY >= (centreY - 100) && tempY < (centreY + 100)) {
        mFeaturesIn[z++] = x;
        mFeaturesIn[z++] = y;
        mNumCornersDetected++;
    }
    numMarkerCorners--;
}
 
So the centre corners are filtered: the mFeaturesIn array contains the corners from the centre region, and mNumCornersDetected is the number of such corners. Then I use fcvTrackLKOpticalFlowu8 to track these corners:
 
int windowWidth, windowHeight, maxIterations;
int x1, y1, x2, y2;
uint32_t nPyramidLevels;
 
float featureXY_out[MAX_NUM_CORNERS*2];
memset(featureXY_out, 0, MAX_NUM_CORNERS*2*sizeof(float));
 
int32_t featureStatus[MAX_NUM_CORNERS];
memset(featureStatus, 0, MAX_NUM_CORNERS*sizeof(int32_t));
 
windowWidth = 7;
windowHeight = 7;
maxIterations = 7;
nPyramidLevels = 7;
 
fcvPyramidLevel * src1Pyr = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidLevel * src2Pyr = new fcvPyramidLevel[nPyramidLevels];
 
src1Pyr[0].width = mBufferWidth;
src1Pyr[0].height = mBufferHeight;
src2Pyr[0].width = mBufferWidth;
src2Pyr[0].height = mBufferHeight;
//Only Y data
fcvPyramidAllocate(src1Pyr, src1Pyr[0].width, src1Pyr[0].height, 1, nPyramidLevels, 1);
fcvPyramidAllocate(src2Pyr, src2Pyr[0].width, src2Pyr[0].height, 1, nPyramidLevels, 1);
fcvPyramidCreateu8(mPrevBuffer, mBufferWidth, mBufferHeight, nPyramidLevels, src1Pyr);
fcvPyramidCreateu8(mBuffer, mBufferWidth, mBufferHeight, nPyramidLevels, src2Pyr);
 
//mPrevBuffer is the buffer from previous frame & mBuffer is the current frame
fcvTrackLKOpticalFlowu8(mPrevBuffer,
mBuffer,
mBufferWidth,
mBufferHeight,
src1Pyr,
src2Pyr,
NULL/*dx1Pyr*/,
NULL/*dy1Pyr*/,
mFeaturesIn,
featureXY_out,
featureStatus,
mNumCornersDetected,
windowWidth,
windowHeight,
maxIterations,
nPyramidLevels,
0, 0, 0, 0);
 
// Keep only the successfully tracked points (featureStatus > 0) and
// compact them into mFeaturesIn as the input for the next frame.
int trackedCount = mNumCornersDetected;
int k = 0;
mNumCornersDetected = 0;
for (int i = 0; i < trackedCount; i++) {
    if (featureStatus[i] > 0) {
        mFeaturesIn[2*k]     = featureXY_out[2*i];
        mFeaturesIn[2*k + 1] = featureXY_out[2*i + 1];
        k++;
        mNumCornersDetected++;
    }
}

mNumCornersDetected and mFeaturesIn are then the updated input for the next frame.

So I should get the input for the next frame from the featureXY_out array wherever featureStatus > 0. But these values are not correct: even when I don't move the camera, the output points are far from their corresponding input points.
Please tell me where I am going wrong.
 
Regards,
Sourav
 
sourav3515
Join Date: 9 Jan 13
Posts: 6
Posted: Mon, 2014-10-13 22:04

Hello Everyone,

I am able to extract the feature points and track the object successfully using the optical flow method in our test app. The test app is built on top of the sample app provided by Qualcomm. The results are excellent.

However, I have my own rendering engine based on OpenGL ES 2.0 (native), and I am trying to integrate the test app's tracking code into my own code. There the results are very poor: the feature points are detected properly, but the optical flow tracking results are not. Even though the feature status is 1 for many of the tracked feature points, the corresponding output coordinates are not correct at all.

I tried optical flow tracking using OpenCV in my code, and the results are correct.
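For comparison, the OpenCV path mentioned above is standard cv::calcOpticalFlowPyrLK usage. A minimal sketch, with placeholder Mat names and parameters chosen to roughly mirror the FastCV settings in this thread:

#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

// prevGray and curGray are consecutive 8-bit grayscale frames.
std::vector<cv::Point2f> ptsPrev, ptsCur;
std::vector<uchar> status;
std::vector<float> err;

cv::goodFeaturesToTrack(prevGray, ptsPrev, 1000, 0.01, 5.0);
cv::calcOpticalFlowPyrLK(prevGray, curGray, ptsPrev, ptsCur, status, err,
                         cv::Size(7, 7),  // window, as in the FastCV calls
                         4);              // pyramid levels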

I am not sure where I am going wrong or what might be causing these incorrect results.

Any help would be appreciated !

 

Regards

Sourav

Mukesh Roop Solanki
Join Date: 22 Jul 14
Posts: 1
Posted: Wed, 2014-10-15 04:42

Oh, I can't believe it... I am facing exactly the same integration issue.

The tracking works fine in the sample code provided by Qualcomm, but it fails when I integrate the sample code into my own Android code.

I have not tried OpenCV yet.

Looking for some response from the Qualcomm developers on this integration issue.

 

Cheers,

Mukesh
