Some things I learned about OpenGL and OpenGL ES

I have done some work with OpenGL and OpenGL ES in the past. I generally tried to stick with the fixed-function pipeline; I never had a good reason to dig deeply into features added after GL 1.4 (most of the time, GL 1.1) and OpenGL ES 1.x. The client for an MMORPG I worked on (both the 2D and the 3D one) did not need anything more. Nor did the simplistic, unfinished spacesim. Neither did the zombie-squishing game for iPhone.

For a few months now, I’ve had access to one of the “major league” game engines (which shall remain unnamed for now!), so I had to expand my knowledge a bit. I didn’t succeed as much as I should have, but here are a few interesting things I have learned.

They may be obvious, but I’ve only learned about them very recently, and it wasn’t trivial to figure them out exactly, at least based on the queries I typed into Google, along the lines of “what does this OpenGL feature do, and how do I get it into OpenGL ES?”

Internal texture formats

We all know the incantation:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

So I knew that the first GL_RGBA specified an “internal format”, and that the second GL_RGBA together with GL_UNSIGNED_BYTE specified how the textureData argument was, well, formatted. If we replaced them with GL_RGB, we’d drop the alpha from the image. But what did all that mean?

Yesterday I ran into GL_RGB8. And GL_RGBA16. And GL_RGBA4. And GL_RGB565, and more. These can be passed as the internalFormat argument (the first GL_RGBA in the call above), and they determine how the desktop OpenGL implementation will store the texture. If you pass plain GL_RGB or GL_RGBA, you’re letting the implementation pick the optimal size in bits for each pixel component. The more detailed you get, the stricter you are about the internal format you want the implementation to use for the pixels you passed.
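
To illustrate, here is a sketch of the same upload using sized internal formats. It’s untested, the 64×64 textureData buffer is made up, and only the internalFormat argument changes:

/* Somewhere in texture-loading code, with a texture bound. */
GLubyte textureData[64 * 64 * 4]; /* RGBA8888 source pixels, assumed to be filled in */

/* "Please store this with 8 bits per component." */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

/* "4 bits per component are fine; feel free to throw precision away." */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA4, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);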

Today I dug into a document from 2007 that I had run into yesterday, which discussed how this works with OpenGL ES: the implementation gets to decide on the actual storage, internalFormat has to match format, and the implementation tries to honor it based on the format and type arguments.
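
As a contrast, here’s a minimal OpenGL ES 1.x/2.0 sketch of the same call; it’s untested and the textureData4444 buffer of 16-bit pixels is made up. The type is what hints at the storage:

/* OpenGL ES: internalFormat must match format, so the storage hint comes from type.
   This asks for a 16-bit 4444 texture. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
             GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, textureData4444);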

GL_BGRA is an extension

On desktop, it got promoted to core OpenGL. On OpenGL ES, it’s called GL_BGRA_EXT.

Oh, and GL_HALF_FLOAT_OES does exist on OpenGL ES.

Protip: you do want to look into the headers, especially the extension headers (glext.h, gl2ext.h or such), whenever you can’t find out whether something exists and you’re sure it should.
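
For example, here’s a rough sketch of how I’d check for BGRA support on ES before using it. GL_EXT_texture_format_BGRA8888 is the extension that defines GL_BGRA_EXT for texture uploads, and the naive strstr() check is only good enough for illustration:

#include <string.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h> /* GL_BGRA_EXT, GL_HALF_FLOAT_OES and friends live here */

/* Naive substring check of the extension string; fine for a sketch. */
static int has_extension(const char *name)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

/* Upload a 64x64 BGRA8888 buffer if the BGRA texture extension is present. */
static void upload_bgra(const void *pixels)
{
    if (has_extension("GL_EXT_texture_format_BGRA8888")) {
        /* With this extension, both internalFormat and format are GL_BGRA_EXT. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_BGRA_EXT, 64, 64, 0,
                     GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);
    }
}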

Reverse formats

What the hell is GL_UNSIGNED_INT_8_8_8_8_REV? How does it relate to GL_UNSIGNED_BYTE and GL_UNSIGNED_INT_8_8_8_8?

This is strictly as far as I understand it! My understanding may be wrong and I would love it if someone clarified this in the comments! (For reference, the APPLE_packed_pixels extension talks about these.)

Once again, I may be wrong. Read the above links.

Alright, so let’s imagine you passed the source format GL_BGRA in combination with the above. You’re specifying that the data you have passed is actually not stored in the BGRA order, but in the reverse; that’s what REV means. Reverse. So you’re telling OpenGL you’re actually passing a texture stored in the ARGB order.
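
Here’s a sketch of how I understand that would look on desktop GL; it’s untested, the 64×64 argbPixels buffer is made up, and if I got it wrong, please say so in the comments:

/* Each pixel is one packed 32-bit value which, read left to right in hex,
   is A R G B: 0xAARRGGBB. */
GLuint argbPixels[64 * 64];
argbPixels[0] = 0xFFFF0000u; /* opaque red: A=FF, R=FF, G=00, B=00 */

/* GL_BGRA plus GL_UNSIGNED_INT_8_8_8_8_REV tells GL to pull the components
   out of each packed value in reverse order, so ARGB-packed data uploads as-is. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, argbPixels);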

Notably, that doesn’t exist in OpenGL ES 2.0, so one may end up using GL_UNSIGNED_SHORT_4_4_4_4_REV. Without actually playing with that, I suspect that would mean also having to pass data with 16 bits per pixel, which kind of defeats the purpose of not having to convert ARGB textures. You could bake the data at build time, sure, but still…

Just red, just red and green

There are also GL_R* and GL_RG* formats for single-channel and two-channel textures; for example, GL_RG8, GL_RG16 and GL_R8. Yip yip.
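
A tiny sketch of a single-channel upload on desktop GL 3.x / ARB_texture_rg (on ES this lives in EXT_texture_rg); the 256×256 height map is made up and the code is untested:

/* One 8-bit channel per texel, e.g. a height map or a mask. */
GLubyte heightMap[256 * 256]; /* assumed to be filled in */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, 256, 256, 0, GL_RED, GL_UNSIGNED_BYTE, heightMap);
/* For two channels, the pairing would be GL_RG8 with GL_RG. */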

More?

Of course, I learned even more things, and I didn’t dig into everything I want to know (what are GL_LUMINANCE and GL_LUMINANCE_ALPHA as texture formats, and why would I want to use them?).

Less than a week before I start the new job. I’m looking forward to spending some of the remaining time after my Wednesday trip studying some of the interesting things such as these OpenGL features…



3 thoughts on “Some things I learned about OpenGL and OpenGL ES”

  1. Boran Car

    Shouldn't GL_UNSIGNED_INT_4_4_4_4_REV be GL_UNSIGNED_SHORT_4_4_4_4_REV btw?

    Historical part and reasoning behind BGRA:
    Let's start with image formats. So, BMP is actually BGR- not RGB- or -RGB or -BGR (thank you Intel + Microsoft) as big endian -RGB is actually BGR- in little endian (I'm avoiding A on purpose here as depending on what you take as your primary, you can end up with ABGR, RGBA, ARGB, BGRA).
    Now, when you read a PNG or a JPEG with some xx library, usually you end up with BGRA (I'm thinking of FreeImage here on an Intel processor). That's why, for starters, your format should be BGRA to avoid conversions and speed up the entire process (as it is completely driver dependent if it will honor your internalFormat and if it will do the conversion in software or in hardware). If you browse the net, everybody says that most of the time BGRA is faster and this is precisely because of this. (Transfers system memory -> GPU are done via DMA and I don't know if there is any standard way of doing endian conversions with the DMA.) Same principle applies when reading from a GPU. Ofc, this is all valid only on a little-endian machine (i.e. Intel) and big-endian machines can have different results. Bi-endian machines (i.e. ARM, SPARC) are a different story.

    Now, here's the part I understand but can't give a definite answer so I'm hoping you will check and post back :):
    My understanding is that GL_UNSIGNED_INT_8_8_8_8 stores big-endian wise. So, GL_BGRA would be BGRA in memory on big-endian machines, ARGB on little-endian machines. Similarly GL_RGBA would be RGBA on big-endian machines and ABGR on little-endian machines.
    GL_UNSIGNED_INT_8_8_8_8_REV likewise: GL_BGRA would be ARGB on big-endian machines and BGRA on little-endian machines, so this should be your choice for Intel. GL_UNSIGNED_BYTE would just give each component a single byte. Look out for word alignment here as well as endianness.
    These things are quite easy to check with a 16 color pattern and see if the colors match or not. Do this on an Intel and ARM and report results in another post :).

    Now, for the LUMINANCE and other exotic formats:
    You've probably heard of YUV etc… Basically, Y approximates LUMINANCE and the two terms are used interchangeably even though they are strictly not (and YUV should usually be YCbCr, but that's another story). What it boils down to is video playback and YUV to RGB conversions, or avoiding them altogether.

    Again, OpenGL is quite high-level and some of these things might actually be implemented and supported in hardware, while some things may end up being implemented on the software side and slowing everything down with memory transfers (which might have to be synchronous). Or, in the case of an APU, everything might be quite efficient as it uses the same bus and the same memory.

    Hope I could help with some facts and some hunches.
    BTW. Good luck with your new job. 🙂

    1. Ivan Vučica (post author)

      The _INT_4_... should certainly be _SHORT_4_...; thanks, I'll update that!

      I've never used libpng or libjpeg directly, so I can't confirm what you say about the uncompressed output actually being provided as BGRA. I always ended up with RGBA-ordered pixels, both in SDL and in Core Graphics. The only insane format that I know of, OTOH, that does BGR is BMP.

      Your explanation of these formats being endianness-specific makes sense, but I'd have to read up more … sadly it's 1:15 GMT and I have to get up rather early, and I primarily wanted to stop delaying approval of your comment and say: happy birthday! (And sorry for being 1h15 late!)

      Thanks for chiming in, we should set up a Hangout some time.

      1. Boran Car

        Exactly, you ended up with RGB because the BGR organization made it possible. This is the usual flow. You have library l reading JPEG, PNG etc. data and giving the decompressed data in buffer b (technically, an (unsigned) char/byte array). If you read that array as byte values, you'd end up with BGRs. If you read the array as quad values, you'd end up with RGB (talk about confusion, ha?).
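
        A tiny standalone illustration of what I mean, assuming a little-endian (Intel) machine:

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            uint32_t pixel = 0x80FF2010u;       /* one quad: A=80, R=FF, G=20, B=10 */
            uint8_t *bytes = (uint8_t *)&pixel;

            /* On little-endian this prints "10 20 FF 80", i.e. B G R A byte order,
               even though the very same quad reads as A R G B. */
            printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
            return 0;
        }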

        I've looked up some info regarding DMA engines or GPUs doing endian swaps, and here is something I could find:

        > Unfortunately, our Southern Islands GPUs no longer have facilities for
        > byte-swapping vertex / texture data on the fly.
        The DMA engine still supports endian swaps so if we used that for
        uploads like r600g now does, we could use the facilities there.

        Now I went further and checked those DMA engines to see to what extent that support goes:

        > At V3 Semiconductor we have a line of bridge chips that perform on-the-fly
        > endian conversion for data moving between the PCI bus and a local processor
        > bus, or vice versa (sorry for the plug; I just wanted to have substance for my
        > example).

        OK. I'm sure that you do. Great. Etc.

        Now, unless your bridge chips are transferring ONLY monotonic data, also
        useless.

        Just HOW does your bridge chip KNOW which fields in the data stream are
        little-endian 16 and should NOT be swapped, big-endian 16 and DO need
        to be swapped, and 8-bit ASCII text, should not be touched?

        Oh, and how about 32-bit big-endian? That's NOT a case of just doing
        adjacent 16-bit swaps. And 64-bit big endian integer?

        Does your bridge handle these cases? I doubt it. If it does, PLEASE let
        me know — it would be a MARVEL!

        No hardware can ever handle endian issues, UNLESS IT KNOWS the context of
        the data — what is big-endian, what is little endian, what is text data,
        what is 16/32/64 bits, and so on.

        Apparently, there was a rush to support both, but the parties involved realized quite soon that it usually ended up being quite a disaster.

        Thanks for the birthday wishes. Even belated, it's always nice to get good wishes 🙂

        Yeah, definitely, or have one of those Google+ discussions again 😉

