OSX GLES 3.0 VAO
Posted: Wed, 2014-10-22 22:13
Is there a way to initialize the emulator on OSX without being forced to use a VAO? If I don't use a VAO, I get the following error from glDrawArrays and glDrawElements:
Assertion failed: (0), function GetBytesPerElement, file ../../OpenGLES/oes3/src/entryHelp/NonNativeVtxProcessor.cpp, line 480
However, the same drawing code works fine on other targets (Simulator/Device, GLES2 and GLES3) and when the emulator is initialized in GLES 2 mode using:
glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 1 );
glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 0 );
glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_ANY_PROFILE );
But as soon as the emulator is initialized in GLES3 mode using:
glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 3 );
glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 3 );
glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE );
glfwWindowHint( GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE );
or
glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 4 );
glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 1 );
glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE );
glfwWindowHint( GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE );
Basically, the assertion occurs there at 0x145d125:
libGLESv2.dylib`mglDrawElements(unsigned int, int, unsigned int, void const*):
0x145d0d0: pushl %ebp
0x145d0d1: movl %esp, %ebp
0x145d0d3: pushl %esi
0x145d0d4: subl $0x24, %esp
0x145d0d7: calll 0x145d0dc ; mglDrawElements(unsigned int, int, unsigned int, void const*) + 12
0x145d0dc: popl %esi
0x145d0dd: movl 0x14(%ebp), %eax
0x145d0e0: movl %eax, 0xc(%esp)
0x145d0e4: movl 0x10(%ebp), %eax
0x145d0e7: movl %eax, 0x8(%esp)
0x145d0eb: movl 0xc(%ebp), %eax
0x145d0ee: movl %eax, 0x4(%esp)
0x145d0f2: movl 0x8(%ebp), %eax
0x145d0f5: movl %eax, (%esp)
0x145d0f8: movl $0x0, 0x20(%esp)
0x145d100: movl $0x0, 0x1c(%esp)
0x145d108: movl $0x0, 0x18(%esp)
0x145d110: movl $0x0, 0x14(%esp)
0x145d118: movl $0x1, 0x10(%esp)
0x145d120: calll 0x149605c ; symbol stub for: mglDrawElements_2D(unsigned int, int, unsigned int, void const*, int, bool, bool, unsigned int, unsigned int)
0x145d125: leal 0x3b88b(%esi), %eax
0x145d12b: movl %eax, (%esp)
0x145d12e: calll 0x1496278 ; symbol stub for: LogError(char*)
0x145d133: addl $0x24, %esp
0x145d136: popl %esi
0x145d137: popl %ebp
0x145d138: ret
0x145d139: nopl (%eax)
VAO seems to have become mandatory... Is there any way to avoid this behavior? When using a VAO, glBufferSubData on a buffer bound to GL_ELEMENT_ARRAY_BUFFER does NOT work properly (even with GL_DYNAMIC_DRAW or GL_STREAM_DRAW usage); it fails and generates an error... and a plain, straightforward glDrawArrays is obviously faster...
VAOs, as far as I know, are part of the standard. You're always using one (the default), no matter what you do. According to Chapter 2.10 of the GLES 3.0 standard: