Hello :)
I'm working on a color-detection algorithm for an Android smartphone (HTC Sensation XL). I implemented it with OpenCV and it works fine, but very slowly. So I thought about combining FastCV and OpenCV to get better performance. This is my native function:
JNIEXPORT void JNICALL Java_ch_hslu_pren_Sample4View_FindFeatures(JNIEnv* env, jobject thiz, jlong addrYuv, jlong addrRgba)
{
    cv::Mat* rgba = (cv::Mat*)addrRgba;
    cv::Mat procImg;
    // Copy the YUV image to the processing image
    ((cv::Mat*)addrYuv)->copyTo(procImg);
    // Transform into RGB
    cv::cvtColor(procImg, procImg, CV_YUV420sp2RGB);
    // Output is in RGBA
    cv::cvtColor(procImg, *rgba, CV_RGB2RGBA);
    // Smooth the image with a Gaussian blur
    cv::GaussianBlur(procImg, procImg, cv::Size(0,0), 3, 3);
    // For color detection we need HSV (should be more or less luminance invariant)
    cv::cvtColor(procImg, procImg, CV_RGB2HSV);
    // Get a binary threshold of the image with the given color range
    cv::inRange(procImg, cv::Scalar(0, 80, 140), cv::Scalar(30, 160, 255), procImg);
    // Find contours
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(procImg, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    for(size_t i = 0; i < contours.size(); i++)
    {
        cv::Mat contour(contours[i]);
        // TODO: Check approx. size
        /*
        if(contour.rows < 37 || contour.rows > 280)
            continue;
        */
        // Result output in RGBA image
        cv::Point p = contours[i][0];
        cv::circle(*rgba, cv::Point(p.x, p.y), 14, cv::Scalar(0,255,0), 2);
    }
}
Now I've set up a new project based on the fastcorner sample, but with the OpenCV reference added. In the update function I've tried something like this:
fcvColorYUV420toRGB565u8(pJimgData,
                         w, h,
                         (uint32_t*)renderBuffer);
cv::Mat buffer(h, w, CV_32SC3, renderBuffer);
cv::cvtColor(buffer, buffer, CV_RGB2HSV);
cv::inRange(buffer, cv::Scalar(0, 80, 140), cv::Scalar(30, 160, 255), buffer);
but after these lines the app crashes without any output in LogCat.
Could somebody show me how to use FastCV and OpenCV properly together?
Thank you very much!
Sincerely, Steve.
Hi,
I'm not sure if it will help, but there might be two problems:
1. RGB565 is 16 bpp, but your output buffer (uint32_t*)renderBuffer suggests 32 bpp.
2. There might be restrictions requiring the input and output pointers to be 128-bit aligned.
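A quick sanity check for point 1: RGB565 packs one pixel into a single 16-bit word, so a Mat wrapping the FastCV output should use 2 bytes per pixel (e.g. CV_8UC2), not CV_32SC3. A minimal sketch of the size arithmetic (the helper names here are mine, not from FastCV or OpenCV):

```cpp
#include <cstddef>
#include <cstdint>

// RGB565 stores R:5, G:6, B:5 bits in one 16-bit word -> 2 bytes per pixel.
size_t rgb565Bytes(int w, int h)   { return (size_t)w * h * 2; }
// A uint32_t-per-pixel buffer (e.g. RGBA8888) needs twice as much.
size_t rgba8888Bytes(int w, int h) { return (size_t)w * h * 4; }
```

For a 640x480 preview frame, rgb565Bytes(640, 480) gives 614400 bytes, half of the 1228800 a 32 bpp buffer would hold. If you keep the fcvColorYUV420toRGB565u8() path, wrapping the output as cv::Mat buffer(h, w, CV_8UC2, renderBuffer) and converting with CV_BGR5652RGB before the HSV step should at least make the element sizes consistent.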
Did you ever find a solution to this problem?
I want to do something similar.
Instead of using the FastCV function fcvColorYUV420toRGB565u8() I would like to do the conversion
with OpenCV, but render the result via renderBuffer. So I replaced the function as follows:
Mat myuv(h + h/2, w, CV_8UC1, (unsigned char *)jimgData);
Mat mbgra(h, w, CV_8UC4, (unsigned char *)jimgDataRGB);
cvtColor(myuv, mbgra, CV_YUV420sp2BGR, 4);
renderBuffer = (uint8_t*)mbgra.data;
But unfortunately the screen stays black, and I fear there are some alignment problems or similar.
My final goal is to render a grayscale image, since there is no equivalent FastCV function. Can someone
tell me how to do this?
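For what it's worth, a grayscale render may not need any color conversion at all: in the NV21 (YUV420sp) frames the Android camera delivers, the first w*h bytes are the luma plane, which is already an 8-bit grayscale image. A minimal sketch under that assumption (yuv420spToGray is my name, not a FastCV or OpenCV API):

```cpp
#include <cstdint>
#include <cstring>

// In YUV420sp/NV21 the buffer layout is: w*h bytes of Y (luma), followed by
// w*h/2 bytes of interleaved chroma. The Y plane alone is a valid 8-bit
// grayscale image, so a grayscale preview is just a copy of that plane.
void yuv420spToGray(const uint8_t* yuv, uint8_t* grayOut, int w, int h) {
    std::memcpy(grayOut, yuv, (size_t)w * h);
}
```

With OpenCV in scope, cv::Mat gray(h, w, CV_8UC1, (unsigned char*)jimgData); wraps the same plane without any copy at all.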
I don't know if you eventually found an answer to your question, but here is my solution, for the record:
First of all, if you are using the sample that ships with FastCV, you need to copy your data into renderBuffer. renderBuffer is only a pointer to the buffer, and your code merely reassigns that pointer; the actual buffer (defined elsewhere) is therefore left unchanged.
If your Mat is continuous (you can check using mbgra.isContinuous()), then you can do a memcpy:
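The snippet is cut off here; a sketch of what the continuous-case copy presumably looks like (copyToRenderBuffer is my wrapper name, not from the sample; the size assumes the CV_8UC4 mbgra from the post, i.e. 4 bytes per pixel):

```cpp
#include <cstdint>
#include <cstring>

// Continuous-Mat case: the whole image is one flat block of memory, so a
// single memcpy moves it into the render buffer. With OpenCV the byte count
// is mbgra.total() * mbgra.elemSize(); for a CV_8UC4 Mat that is w * h * 4.
void copyToRenderBuffer(uint8_t* renderBuffer, const uint8_t* matData,
                        int w, int h) {
    std::memcpy(renderBuffer, matData, (size_t)w * h * 4);
}
```

In the sample's update loop that would be something like memcpy(renderBuffer, mbgra.data, h * w * 4); instead of reassigning the pointer. If the Mat is not continuous, you would instead copy row by row, advancing the source by mbgra.step per row.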