Hi,
I'm having trouble getting optical flow to work. I detect features in the first image with the fcvCornerFast9u8() function, and that works really well. Then I convert the detected features from the uint32_t array to a float array, but fcvTrackLKOpticalFlowu8() seems to just copy the features from the input array to the output array. I have gone through the documentation several times but couldn't find the problem.
Here is my code:
#include <opencv2/opencv.hpp>
#include <fastcv.h>
using namespace cv;

Mat img1 = imread("s1.png", 0);
Mat img2 = imread("s2.png", 0);
Mat frame = imread("s2.png");   // color copy of the second image, for drawing

int size = img1.cols * img1.rows;
uint32_t * xy = new uint32_t[size*2];
uint32_t * scores = new uint32_t[size];
float * featureXY_in = new float[size*2];
float * featureXY_out = new float[size*2];
int32_t * featureStatus_in = new int32_t[size];
uint32_t corners = 0;

fcvCornerFast9u8( img1.data, img1.cols, img1.rows, 0, 20, 7, xy, size, &corners );

// fcvCornerFast9u8 writes interleaved (x, y) pairs as uint32_t;
// fcvTrackLKOpticalFlowu8 expects them as floats.
for( uint32_t i = 0; i < corners; i++ )
{
    featureXY_in[i*2]   = xy[i*2];
    featureXY_in[i*2+1] = xy[i*2+1];
    featureStatus_in[i] = 0;
}

uint32_t nPyramidLevels = 8;
fcvPyramidLevel * src1Pyr = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidLevel * src2Pyr = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidLevel * dx1Pyr = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidLevel * dy1Pyr = new fcvPyramidLevel[nPyramidLevels];
fcvPyramidAllocate( src1Pyr, img1.cols, img1.rows, 4, nPyramidLevels, 0 );
fcvPyramidAllocate( src2Pyr, img1.cols, img1.rows, 4, nPyramidLevels, 0 );
fcvPyramidAllocate( dx1Pyr, img1.cols, img1.rows, 4, nPyramidLevels, 1 );
fcvPyramidAllocate( dy1Pyr, img1.cols, img1.rows, 4, nPyramidLevels, 1 );
fcvPyramidCreateu8( img1.data, img1.cols, img1.rows, nPyramidLevels, src1Pyr );
fcvPyramidCreateu8( img2.data, img2.cols, img2.rows, nPyramidLevels, src2Pyr );
fcvPyramidSobelGradientCreatei8( src1Pyr, dx1Pyr, dy1Pyr, nPyramidLevels );

fcvTrackLKOpticalFlowu8( img1.data, img2.data, img1.cols, img1.rows,
    src1Pyr, src2Pyr, dx1Pyr, dy1Pyr,
    featureXY_in, featureXY_out,
    featureStatus_in,
    corners,
    32, 32,          // windowWidth, windowHeight
    7,               // maxIterations
    nPyramidLevels,
    0.5f,            // maxResidue
    0.15f,           // minDisplacement
    0, 0 );          // minEigenValue, lightingNormalized

for( uint32_t i = 0; i < corners; i++ )
{
    line( frame, Point(featureXY_in[i*2], featureXY_in[i*2+1]),
          Point(featureXY_out[i*2], featureXY_out[i*2+1]),
          CV_RGB(255,0,0), 1, 8, 0 );
}
imshow("OpticalFlow", frame);
waitKey(0);
Here is the output:
The code was supposed to draw lines in the direction of the optical flow. The image shows the second image (img2); the first image (img1) was the same except for the positions of those red points (corners).
Do you have any idea what might be wrong?
Thank you
The nPyramidLevels and windowWidth/windowHeight should be chosen according to the input image size.
In my app, I use VGA and WVGA input images. In both cases, a nPyramidLevels of 4 and windowWidth/windowHeight of 9 or 7 work quite well. maxIterations can be either 5 or 7; it doesn't make much difference in tracking.
Thank you!
The problem was in windowWidth/windowHeight. I used the value 6, as recommended in the (poor) documentation. I didn't know that just setting it to 7 could make such a huge difference (my guess is that the window needs an odd size so it has a center pixel).