Can't read pixels from GraphicBuffer on Adreno GPU with Karthik's method (hacky alternative to glReadPixels)
osehyum
Join Date: 20 Oct 13
Location: KOREA
Posts: 1
Posted: Sun, 2013-10-20 20:31

Since July, I have been developing an Android application that edits video files such as .avi and .flv. I use FFMPEG and OpenGL ES 2.0 to implement it.

Because a filter effect like "Blur" requires too many calculations to run on the CPU, I decided to use OpenGL ES 2.0 and apply the filter effect to each video frame on the GPU with a shader.

What I am trying to do is use a shader to apply a filter effect to a frame of video and then read back the resulting pixels from the framebuffer.

So I would have to use glReadPixels, the only OpenGL ES 2.0 function that can read pixels back from a framebuffer. But many GPU development guides recommend against glReadPixels and warn about its potential cost, and its performance also varies by GPU vendor and version. I could not commit to glReadPixels, so I tried to find another way to read back the pixels produced by the GPU.
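For reference, the straightforward path I wanted to avoid looks roughly like this (a minimal sketch, assuming an RGBA8888 FBO of mTexWidth x mTexHeight is currently bound):

// The plain glReadPixels path, for comparison (minimal sketch).
GLubyte* pixels = new GLubyte[mTexWidth * mTexHeight * 4];
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, mTexWidth, mTexHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// glReadPixels blocks until the GPU finishes rendering, which is the
// performance risk the guides warn about.
delete[] pixels;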

After a few days, I found a hacky method that reads the pixel data through an Android GraphicBuffer.

Here is the link.

From this link, I applied Karthik's method to my code.
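In short, the idea is to back the FBO's color texture with a GraphicBuffer so that the CPU can later lock the buffer and read the rendered pixels directly. A rough sketch of the setup as I understand it (fboTexture is a placeholder name; the exact code is in the linked answer):

// Allocate a GraphicBuffer that the GPU can render into and the CPU can read.
GraphicBuffer* buffer = new GraphicBuffer(mTexWidth, mTexHeight,
        PIXEL_FORMAT_RGBA_8888,
        GraphicBuffer::USAGE_HW_TEXTURE | GraphicBuffer::USAGE_SW_READ_OFTEN);

// Wrap the buffer in an EGLImage and bind the image to the texture that is
// attached to the FBO, so rendering into the FBO writes into the buffer.
EGLImageKHR image = eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT,
        EGL_NATIVE_BUFFER_ANDROID,
        (EGLClientBuffer) buffer->getNativeBuffer(), NULL);
glBindTexture(GL_TEXTURE_2D, fboTexture);   // fboTexture: FBO color attachment
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);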

The only difference from his code is:

//render method I made.
void renderFrame(){
    /* some codes to init */

    /*Bind the frame buffer*/
    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);

    /* Set the viewport according to the FBO's texture. */
    glViewport(0, 0, mTexWidth , mTexHeight);

    /* Clear screen on FBO. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // This part differs from Karthik's code.
    contents->setTexture();
    contents->draw(mPositionVarIndex, mTextrueCoIndex);
    contents->releaseData();

    /* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
    glBindFramebuffer(GL_FRAMEBUFFER,0);

    LOGI("Read Graphic Buffer");
    // Just in case the buffer was not created yet

    void* vaddr;
    // Lock the buffer and get a CPU pointer to its contents
    buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);

    if (vaddr == NULL) {
        LOGE("lock error");
        buffer->unlock();
        return;
    }
    /* some code that uses the pixels from the GraphicBuffer... */
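    // (Sketch of the elided part, with hypothetical names: mPixelData is a
    // CPU-side RGBA output buffer allocated elsewhere. The gralloc stride,
    // in pixels, can be larger than the width, so copy row by row.)
    uint8_t* src = static_cast<uint8_t*>(vaddr);
    uint32_t strideInPixels = buffer->getStride();
    for (int y = 0; y < mTexHeight; ++y) {
        memcpy(mPixelData + y * mTexWidth * 4,
               src + y * strideInPixels * 4,
               mTexWidth * 4);
    }
    buffer->unlock();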
}


void setTexture(){
    // Upload the current frame (mData) into a newly created RGBA texture.
    glGenTextures(1, mTexture);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}


void releaseData(){
    // Delete the per-frame texture and VBO created in setTexture() and draw().
    glDeleteTextures(1, mTexture);
    glDeleteBuffers(1, mVbo);
}


void draw(int positionIndex, int textureIndex){
    mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
    glEnableVertexAttribArray(positionIndex);
    //    checkGlError("glEnableVertexAttribArray");

    glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    //    checkGlError("glVertexAttribPointer");
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
    glEnableVertexAttribArray(textureIndex);
    glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
    checkGlError("glDrawArrays");
}
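A note on draw(): the VBO is allocated at twice the size of the vertex array, with the position data stored in its first half and the texture coordinates in its second half, which is why the second glVertexAttribPointer call uses BUFFER_OFFSET(lengthOfArray*sizeOfFloat). The texture and VBO are recreated every frame and freed in releaseData().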

I use a texture and render the frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is a Mali-400MP, and an LG Optimus G Pro, whose renderer is an Adreno (TM) 320. The Galaxy S2 works well with the code above and Karthik's method, but on the LG phone I get the following errors:

E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)

According to this link:

"On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism, named Genlock, is used."

Since the only errors I could see were related to genlock, my careful guess is that there is some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched for and read the code of gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp, and the corresponding *.h files to find the cause of these errors, but without success.

My questions are:

1. Is this the right approach to apply a filter effect using GPU calculation? If not, how should I implement a filter effect like "Blur", which requires so many calculations? (A sketch of the kind of shader I mean is below, after question 2.)

2. Does Karthik's method not work on Qualcomm GPUs? I want to know why these errors occur only on the Qualcomm Adreno GPU.
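For clarity, here is the kind of per-pixel work I want to move to the GPU, as a minimal 3x3 box-blur fragment shader (GLSL ES 2.0) in the C-string form I would pass to glShaderSource. u_Texture, u_TexelSize, and v_TexCoord are placeholder names:

// Minimal 3x3 box-blur fragment shader. u_TexelSize is assumed to be
// (1.0/width, 1.0/height) of the source texture.
static const char* kBlurFragmentShader =
    "precision mediump float;\n"
    "uniform sampler2D u_Texture;\n"
    "uniform vec2 u_TexelSize;\n"
    "varying vec2 v_TexCoord;\n"
    "void main() {\n"
    "    vec4 sum = vec4(0.0);\n"
    "    for (int x = -1; x <= 1; ++x) {\n"
    "        for (int y = -1; y <= 1; ++y) {\n"
    "            sum += texture2D(u_Texture,\n"
    "                             v_TexCoord + vec2(float(x), float(y)) * u_TexelSize);\n"
    "        }\n"
    "    }\n"
    "    gl_FragColor = sum / 9.0;\n"
    "}\n";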

 

