Hi,
I'm trying to profile the different rendering steps in my application using OpenGL query objects with the GL_TIME_ELAPSED target specified in the GL_EXT_disjoint_timer_query extension.
However, the results seem rather strange. While the reported times do increase with geometric complexity, the values returned by the GL are much lower than expected: with geometry complex enough to push the framerate down to about 5 fps, the total frame time according to the queries is still only about 5 ms.
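To quantify the mismatch, here is a rough sanity check based on the numbers above (5 fps and the ~5 ms query result); it just compares the expected whole-frame time in nanoseconds against what the queries report:

```python
# Rough sanity check: at ~5 fps a full frame takes ~200 ms, so a
# whole-frame GL_TIME_ELAPSED result (in nanoseconds, per the spec)
# should be on the order of 2e8, not the ~5e6 the queries return.
fps = 5
frame_time_ns = 1e9 / fps      # about 200,000,000 ns per frame
reported_ns = 5e6              # ~5 ms reported by the queries
print(frame_time_ns / reported_ns)  # → 40.0
```

So the reported values are roughly a factor of 40 too small, not a clean power-of-1000 off.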
The same queries on a Tegra K1 (Nexus 9) produce consistent results. Could this be a driver bug or an unfinished implementation? Or are the results perhaps not in nanoseconds as specified, but in microseconds?
Thank you,
Jan