I have put together a KLT optical flow tracker using fcvTrackLKOpticalFlowu8. I use Harris corners to generate initial features in a masked section of the buffer. When the phone doesn't move, tracking is fine and the features are stable. As soon as I move the phone, a great majority of the tracked features disperse across the preview screen. Is this behavior normal? If so, how can I improve the stability of the tracker?
Here's my code snippet:
//Extract initial corners (7-pixel border, up to 1000 corners, threshold 5)
fcvCornerHarrisInMasku8((uint8_t*) pCurImg, trackerState.imgWidth, trackerState.imgHeight,
                        trackerState.imgWidth, 7, trackerState.corners, 1000,
                        &trackerState.numCorners, 5, trackerState.mask,
                        trackerState.maskWidth, trackerState.maskHeight);
//make pyramid
//track
fcvTrackLKOpticalFlowu8(trackerState.prevFrameAligned, pCurImg, w, h,
                        trackerState.prevPyramid, trackerState.curPyramid,
                        trackerState.dxPrevPyr, trackerState.dyPrevPyr,
                        trackerState.prevFeatureXY, trackerState.currentFeatureXY,
                        trackerState.featureStatus, trackerState.numCorners,
                        7, 7, 7, 4, 0, 0, 0, 0);
Hi,
I am facing the exact same problem.
"When phone doesn't move, the tracking is fine and features are stable. As soon as I move the phone, a great majority of tracked features disperse across the preview screen. Is this behavior normal? "
please help.
Please follow the function header to set up the parameters for fcvTrackLKOpticalFlowu8_v2, or share your code snippet so we can look into it.
Cheers,
-Jeff
Hi,
First, I am detecting features in the first frame, a 1920x1080 grayscale image, using fcvGoodFeatureToTrack, and getting the corners (x, y tuples) and the number of corners. I am saving this first frame as markerJImage. The goal is to track only the centre 200x200 window of this first frame in the subsequent frames.
After this, I am using fcvTrackLKOpticalFlowu8, passing the saved first frame and each subsequent frame to track the corners found in the first frame. Below is the code snippet:
////////////////////////////////////////////////////////////////////////////////////////////
int windowWidth, windowHeight, maxIterations, m;
hi MK00000010,
If you see tracked feature points dispersed in the preview frame, it's likely that LK OF failed to track them completely. LK Optical Flow generally has a limit on how far it can reliably track. In my experience, if the image is QVGA, with pyramid level = 4 and window size 7, LK OF works well for any motion that is < 30 pixels in each direction. If your input image is 1080p, for example, you may want to resize the image to QVGA, or set pyramidLevel = 5 or 6.
My suspicion is that the camera motion is significant when you move the phone. Can you try moving the phone at a slower speed to see if this behavior still exists? If the problem is resolved at slower motion, I suggest you either increase the pyramid level or resize the image to something smaller before you try faster phone motion.
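The resize suggestion above can be sketched in plain C (this is not a FastCV call; it's an illustrative 2x2 box downsample for a tightly packed 8-bit grayscale buffer, applied repeatedly to bring 1920x1080 closer to QVGA):

```c
#include <stdint.h>

/* Halve an 8-bit grayscale image with a 2x2 box filter (rounded average).
   Assumes stride == width; apply twice to go 1920x1080 -> 480x270. */
static void downscale_by_2(const uint8_t *src, int srcW, int srcH, uint8_t *dst)
{
    int dstW = srcW / 2, dstH = srcH / 2;
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            int sum = src[(2 * y)     * srcW + 2 * x] + src[(2 * y)     * srcW + 2 * x + 1]
                    + src[(2 * y + 1) * srcW + 2 * x] + src[(2 * y + 1) * srcW + 2 * x + 1];
            dst[y * dstW + x] = (uint8_t)((sum + 2) / 4);
        }
    }
}
```

Remember that if you detect corners on the downscaled image, the tracked coordinates come back in the downscaled coordinate system and must be multiplied back up before rendering.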
Thanks,
Xin
hi Sourav,
I suggest you try the same thing: reduce your input frame to a resolution around QVGA before you run LK optical flow. Also, it's more advisable to track the object from frame N-1 to frame N, instead of tracking from frame 0 to frame N. The appearance change and accumulated motion may be too large for optical flow to handle.
I'm curious how you chose to send only the corners in the centre 200x200 patch to LK OF?
Please let me know if these suggestions work.
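The frame N-1 to frame N suggestion amounts to promoting each frame's tracker output to be the next call's input. A minimal sketch of that bookkeeping (the struct and names here are hypothetical; the actual tracking call is the FastCV function, and image/pyramid buffers can simply be swapped by pointer each frame):

```c
#include <stdint.h>
#include <string.h>

#define MAX_FEATURES 1000

/* Per-frame state for frame-to-frame tracking (names are illustrative). */
typedef struct {
    float prevXY[2 * MAX_FEATURES]; /* feature positions in frame N-1 (tracker input) */
    float curXY[2 * MAX_FEATURES];  /* positions returned for frame N (tracker output) */
    int   numFeatures;
} TrackState;

/* After tracking frame N, promote the results so the next call tracks
   N -> N+1 instead of always tracking from frame 0. */
static void advance_frame(TrackState *s)
{
    memcpy(s->prevXY, s->curXY, sizeof(float) * 2 * (size_t)s->numFeatures);
    /* In a full tracker you would also swap the prev/cur image and pyramid
       pointers here, and drop features whose status flag indicates a lost track. */
}
```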
Xin
Hi xzhong,
It worked well after following your inputs. Still, when moving the device a bit faster, keypoints are lost.
For filtering keypoints to a given window, I keep only the corners that lie in that window: I copy those corners' x,y values into a separate array in serial order and update the corner count accordingly. So from then on, only the corners in that window get tracked.
Hi Everyone,
I am trying to track the centre region (200x200) of the preview, which is 960x540, in subsequent frames.
First I am detecting the feature points in the centre 200x200 region: I filter the detected feature points down to those that lie in the centre 200x200 region and update the featureLen accordingly.
Hello Everyone,
I am able to extract the feature points and track the object successfully using the optical flow method in our test App. The test App is built on top of the Sample App provided by Qualcomm. The results are excellent.
However, I have my own rendering engine based on OpenGL ES 2.0 (native), and I am trying to integrate the above test App's tracking code into my own code. But the results are very poor. The feature points are detected properly, but the optical flow tracking results are poor. Even though the feature status is 1 for many of the tracked feature points, the corresponding output feature coordinates are not correct at all.
I tried optical flow tracking using OpenCV in my code and the results are correct.
I am not sure where I am going wrong and what might be the cause of these incorrect results.
Any help would be appreciated !
Regards
Sourav
Oh... I can't believe it. I am facing exactly the same integration issue.
The tracking works fine in the sample code provided by Qualcomm, but it fails when I integrate the sample code into my own Android code.
I have not tried OpenCV yet.
Looking for some response from the Qualcomm developers on this integration issue.
Cheers..
Mukesh