I have been trying out the fastCV examples provided in the Hexagon SDK. Running the cornerApp example reports 60 detected corners in the test image. Digging into the code, I found that the image is stored as an array in the main .c file, and when I viewed it, it is just one large black square on a white background. Why would the fastCV corner-detection function report 60 corners for a single square?
In addition, I created a simple image with four black squares and used it to replace the original image from the example. Running the example again reports 240 (60 × 4) corners. The fact that the count scales by exactly 60 corners per square seems strange. Does anyone have an idea why this would be?
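To reproduce the behaviour outside the SDK, I sketched a minimal FAST-9-style segment test in Python (this is my own illustrative implementation, not fastCV's code, and I am only assuming the cornerApp example uses a FAST-family detector). Run without non-maximum suppression on a synthetic black square, it fires at several neighbouring pixels around each geometric corner, so one square already yields far more than 4 detections:

```python
import numpy as np

def fast9_like_corners(img, threshold=20):
    """Minimal FAST-9-style segment test, no non-max suppression.

    A pixel is flagged as a corner if at least 9 contiguous points on a
    16-pixel Bresenham circle of radius 3 are all brighter or all darker
    than the center by `threshold`. Illustrative sketch only; not the
    fastCV implementation.
    """
    # (dy, dx) offsets of the 16-pixel circle, in ring order
    circle = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2),
              (3, 1), (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3),
              (-2, -2), (-3, -1)]
    h, w = img.shape
    corners = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            c = int(img[y, x])
            ring = [int(img[y + dy, x + dx]) for dy, dx in circle]
            brighter = [p > c + threshold for p in ring]
            darker = [p < c - threshold for p in ring]
            for flags in (brighter, darker):
                # longest contiguous run on the wrapped ring
                doubled = flags + flags
                run = best = 0
                for f in doubled:
                    run = run + 1 if f else 0
                    best = max(best, run)
                if best >= 9:
                    corners.append((x, y))
                    break
    return corners

# One black square on a white background
img = np.full((40, 40), 255, dtype=np.uint8)
img[10:30, 10:30] = 0

pts = fast9_like_corners(img)
# Straight edges never reach 9 contiguous differing ring pixels, but each
# geometric corner triggers the test at several nearby pixels.
print(len(pts))
```

With this sketch the detection count is a fixed multiple per square (every corner of the square fires the same small cluster of pixels), which looks consistent with the 60-per-square scaling I am seeing, though I cannot confirm fastCV works the same way internally.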