Hi,
I have a small application that uses framebuffer objects in a ping-pong manner: render to A, then bind B and use A as a texture, and so on. I have noticed that my CPU time increases significantly with the texture resolution, even though the number of draw calls stays exactly the same. The measured cost is inside glDrawArrays itself. It is as if there were an implicit glFinish().
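For reference, here is a minimal sketch of the loop I mean, with the CPU timing around glDrawArrays. The GL context creation and FBO/texture setup are omitted, and all names (fbo, tex, ping_pong_passes, etc.) are purely illustrative:

```c
/* Minimal sketch of the ping-pong loop. Assumes two FBOs (fbo[0], fbo[1])
 * with color textures (tex[0], tex[1]) already created and attached,
 * plus a VAO for a full-screen quad; names are illustrative only. */
#include <GL/glew.h>
#include <stdio.h>
#include <time.h>

static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

void ping_pong_passes(GLuint fbo[2], GLuint tex[2], GLuint vao, int passes)
{
    int src = 0, dst = 1;
    glBindVertexArray(vao);
    for (int i = 0; i < passes; ++i) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); /* render into B */
        glBindTexture(GL_TEXTURE_2D, tex[src]);      /* sample from A  */

        double t0 = now_ms();
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        double t1 = now_ms(); /* this CPU-side cost grows with texture size */
        printf("pass %d: glDrawArrays took %.3f ms on the CPU\n", i, t1 - t0);

        /* swap roles: the last destination becomes the next source */
        int tmp = src; src = dst; dst = tmp;
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```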
Can someone explain this behavior? I fully understand the GPU implications (bandwidth pressure, bubbles in the pipeline), but not this CPU-side synchronization.
Thanks.