I was looking for a block-matching optical-flow implementation for Android, and I came across the FastCV function fcvTrackBMOpticalFlow16x16u8.
The documentation describes the inputs and outputs of the function, but I have some more general questions about the implementation.
1. What is the block-matching criterion? That is, how does the function measure the similarity between a block of pixels in frame k and a block of pixels in frame k+1? Does it estimate the displacement of the block's center pixel as the offset that minimizes the mean squared error (MSE), or perhaps the sum of absolute differences (SAD)?
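To make the question concrete, here is a minimal sketch of the two criteria I mean. This is an illustrative re-implementation, not FastCV code, and the flattened 16x16 blocks are toy data:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for a, b in zip(block_a, block_b))

def mse(block_a, block_b):
    """Mean squared error between two equal-sized blocks."""
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b)) / len(block_a)

# Toy 16x16 blocks, flattened to 1-D lists of 256 pixel values.
ref  = [10] * 256   # block from frame k
cand = [12] * 256   # candidate block from frame k+1

print(sad(ref, cand))  # 512
print(mse(ref, cand))  # 4.0
```

A block matcher would evaluate one of these over every candidate offset in the search window and keep the offset with the lowest score; my question is which (if either) score the function uses.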
2. The function generates motion vectors for blocks where motion is detected. Does that mean the function estimates the displacement of EVERY block, but only returns the motion vectors (dx, dy) where (dx != 0) || (dy != 0)?
3. From a few tests I ran, the influence of ShiftSize (or, equivalently, the number of blocks) on the running time does not appear to be linear; I assume the implementation is not naive and uses memory optimizations. If so, is there a rule of thumb for tuning the ShiftSize parameter, given fixed SearchWidth and SearchHeight, to get the largest spatial coverage while still achieving good performance?
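For reference, the naive cost model I had in mind when expecting linear scaling is below. It assumes (my guess, not something stated in the FastCV docs) that block origins are placed every ShiftSize pixels and every candidate position in the search window is compared pixel-by-pixel:

```python
BLOCK = 16  # fcvTrackBMOpticalFlow16x16u8 uses 16x16 blocks

def naive_op_count(width, height, shift, search_w, search_h):
    """Rough pixel-comparison count for a naive block matcher (assumption)."""
    blocks_x = (width - BLOCK) // shift + 1
    blocks_y = (height - BLOCK) // shift + 1
    candidates = search_w * search_h      # positions tried per block
    per_match = BLOCK * BLOCK             # pixel comparisons per position
    return blocks_x * blocks_y * candidates * per_match

# Under this model, halving ShiftSize roughly quadruples the work:
for shift in (4, 8, 16):
    print(shift, naive_op_count(640, 480, shift, 16, 16))
```

My measured timings do not follow this curve, which is what makes me suspect the real implementation reuses partial sums or overlapping memory reads between neighboring blocks.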
Thanks!