I have done some stuff with OpenGL and OpenGL ES in the past. I generally tried to stick with the fixed-function pipeline; I never had a good reason to dig deeply into features provided post-GL1.4 (most of the time, GL1.1) and OpenGL ES 1.x. The client for an MMORPG I worked on (both the 2D and the 3D one) did not need anything more. Nor did the simplistic, unfinished spacesim. Neither did the zombie squishing game for iPhone.
For a few months now, I’ve had access to one of the “major league” game engines (which shall remain unnamed for now!), so I had to expand my knowledge a bit. I didn’t succeed as much as I should have, but here are a few interesting things I have learned.
They may be obvious, but I’ve only learned about them very recently. And it wasn’t trivial to figure them out exactly, at least based on the queries I typed into Google, along the lines of “what does this OpenGL feature do, and how do I port it to OpenGL ES?”
Internal texture formats
We all know the incantation:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
So I knew there was talk about the first GL_RGBA involving an “internal format”, and that the second GL_RGBA and GL_UNSIGNED_BYTE specified how the textureData argument was, well, formatted. If we replaced them with GL_RGB, we’d drop the alpha from the image. But what did all that mean?
Yesterday I ran into GL_RGB8. And GL_RGBA16. And GL_RGBA4. And GL_RGB565, and more. These can be passed as the first argument, and they determine how the desktop OpenGL implementation will store the texture. If you pass GL_RGB, you’re letting the implementation determine the optimal pixel component size in bits. If you get more detailed, you get more strict about the internal format you want the implementation to use for the pixels you passed.
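To make that concrete, here’s a quick sketch of the difference on desktop GL. The 64×64 size and the buffer are placeholders, and the comments reflect my current understanding:

GLubyte textureData[64 * 64 * 4]; /* source pixels, 8 bits per RGBA component */

/* Let the implementation pick the component sizes: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, textureData);

/* Explicitly ask for 8 bits per component: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, textureData);

/* Ask for 4 bits per component; the driver converts the 8-bit
   source pixels into the smaller internal format on upload: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, textureData);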
Today I dug into a document I ran into yesterday, from 2007, which discussed how with OpenGL ES the implementation gets to decide, and tries to honor the internalFormat argument based on the format and type arguments.
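As a sketch of what that means in practice: on OpenGL ES 2.0 the internalFormat argument has to match the format argument, so the type argument is the only precision hint you get to give. The buffer names here are mine:

GLushort textureData565[64 * 64];     /* 16 bits per pixel, packed 5-6-5 */
GLubyte  textureData888[64 * 64 * 3]; /* 24 bits per pixel */

/* Hint that 16-bit storage is fine: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
             GL_RGB, GL_UNSIGNED_SHORT_5_6_5, textureData565);

/* Ask for full 8 bits per component via the type argument: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 64, 64, 0,
             GL_RGB, GL_UNSIGNED_BYTE, textureData888);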
GL_BGRA is an extension
On desktop, it got promoted to core OpenGL. On OpenGL ES, it’s called GL_BGRA_EXT.
Oh, and GL_HALF_FLOAT_OES does exist on OpenGL ES.
Protip: you do want to look into the headers, especially the extension headers (glext.h, gl2ext.h or such), whenever you can’t find out whether something exists and you’re sure it should.
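And since it is an extension, checking for it at runtime before relying on it seems wise. A minimal sketch, assuming the GL_EXT_texture_format_BGRA8888 extension and a placeholder bgraData buffer (note that strstr is a sloppy way to match extension names, since one name can be a prefix of another):

#include <string.h>

GLubyte bgraData[64 * 64 * 4];
const char *extensions = (const char *) glGetString(GL_EXTENSIONS);
if (extensions != NULL &&
    strstr(extensions, "GL_EXT_texture_format_BGRA8888") != NULL) {
    /* On ES 2.0, the extension wants GL_BGRA_EXT as both the
       internal format and the format: */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT, 64, 64, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, bgraData);
}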
Reverse formats
What the hell is GL_UNSIGNED_INT_8_8_8_8_REV? How does it relate to GL_UNSIGNED_BYTE and GL_UNSIGNED_INT_8_8_8_8?
This is strictly as far as I understand it! My understanding may be wrong and I would love it if someone clarified this in the comments! (For reference, the APPLE_packed_pixels extension talks about these.)
Once again, I may be wrong. Read the above links.
Alright, so let’s imagine you passed the source format of GL_BGRA in combination with the above. You’re specifying that the data you have passed has actually not been stored in the BGRA order, but in the reverse. That’s what REV means. Reverse. So you’re telling OpenGL you’re actually passing a texture stored in the ARGB order.
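Here’s how I picture a single pixel under that combination on desktop GL. The packing and the names are mine, so treat this as a sketch of my understanding rather than gospel:

#include <stdint.h>

uint8_t a = 0xFF, r = 0x12, g = 0x34, b = 0x56; /* example values */
uint32_t argbPixels[64 * 64];                   /* placeholder buffer */

/* With GL_UNSIGNED_INT_8_8_8_8_REV, the first component of GL_BGRA
   (blue) sits in the lowest byte, so reading the packed integer from
   the top byte down gives A, R, G, B: the ARGB order described above. */
argbPixels[0] = ((uint32_t)a << 24) | ((uint32_t)r << 16) |
                ((uint32_t)g << 8)  |  (uint32_t)b;

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, argbPixels);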
Notably, that doesn’t exist in OpenGL ES 2.0, so one may end up using GL_UNSIGNED_SHORT_4_4_4_4_REV. Without actually playing with that, I suspect it would mean also having to pass data with 16 bits per pixel, which kind of defeats the purpose of not having to convert ARGB textures. You could bake the data at build time, sure, but still…
Just red, just red and green
There are GL_RG* and GL_R* formats. For example, GL_RG8, GL_RG16 and GL_R8. Yip yip.
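These seem handy for grayscale or two-channel data without wasting memory on channels you don’t need. A sketch, assuming desktop GL 3.0+ (or ARB_texture_rg) and placeholder buffers:

GLubyte grayscaleData[64 * 64];      /* one byte per pixel */
GLubyte twoChannelData[64 * 64 * 2]; /* two bytes per pixel */

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* rows are tightly packed bytes */

/* Single channel; shows up as the .r component when sampled: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 64, 64, 0,
             GL_RED, GL_UNSIGNED_BYTE, grayscaleData);

/* Two channels, e.g. the x and y of a normal map: */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG8, 64, 64, 0,
             GL_RG, GL_UNSIGNED_BYTE, twoChannelData);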
More?
Of course, I learned even more things. And I didn’t dig into everything I want to know (what are GL_LUMINANCE and GL_LUMINANCE_ALPHA as texture formats, and why would I want to use them?).
Less than a week before I start the new job. Looking forward to spending some of the remaining time after my Wednesday trip on studying some of the interesting things such as these OpenGL features…
–
via blog.vucica.net