Category Archives: programming

Changing the default file manager away from Nautilus

Nautilus insists on drawing a desktop window, which makes for a miserable experience under i3 when all you’re trying to do is view the files in a directory.

XFCE’s Thunar is lightweight, and it doesn’t try to create a desktop window.

To get apps such as Chrome to handle “open directory” commands using Thunar, teach xdg-mime about it:

$ sudo apt-get install thunar
$ xdg-mime query default inode/directory
$ xdg-mime default Thunar.desktop inode/directory  # yep, Thunar's desktop file is titlecased on Debian-derived systems
$ xdg-mime query default inode/directory
$ xdg-open .
...thunar opens...

This is sort of a follow-up to my previous post on this topic. See also Debian’s article on the usage of MIME types.

Ubuntu, touchscreen and dualscreen: HD Touchscreen on the left, laptop on the right

If you read about calibrating a touchscreen under X11 and are capable of simple addition and division, you won’t have a problem building the transformation matrix and applying it. I won’t repeat the details of how to set the matrix using xinput set-prop; I just want to write down the resulting matrix and, perhaps, save some time for people with a setup similar to mine.

To get the name of the device, open a terminal and punch in:

xinput list

In the output I found that my “Acer T232HL”’s input component is actually called “Advanced Silicon S.A CoolTouch(TM) System”. I can see all of its properties with:

xinput list-props "Advanced Silicon S.A CoolTouch(TM) System"

Since I’ve set up the screen to be on the left side, the top-left corner of my touch area is offset by 0,0 and the bottom-right is at 1920,1080. Considering that the total width of the usable X area is 1920+1280=3200, I get 1920/3200=0.6 as the c0 constant from the linked article.

My usable area’s height is max(1080, 800) = 1080 (the respective y components of the two displays’ resolutions). Since the touch area’s height is also 1080, the c2 component needs no scaling and is exactly 1.

I also have absolutely no offsets, so c1 and c3 stay 0.


xinput set-prop "Advanced Silicon S.A CoolTouch(TM) System" --type=float "Coordinate Transformation Matrix" 0.6 0 0 0 1 0 0 0 1
# And if needed, to make the display configured as described above:
xrandr --output DP-0 --auto --left-of LVDS-0

From the times when I was trying to get Linux to run on iPaq H3800 devices using the kernel and distributions published at a now-defunct site, I remember “tslib”, which included a touchscreen calibration tool. I wonder whether it can and should be used to calibrate modern multitouch displays.

Update (April 5th 2014): I just dug out the ‘xinput_calibrator’ package through random googling, but have not tried it out yet.

On my final project for university

I’m completing my studies at the Polytechnic of Zagreb this semester, and my final project will be a compositing window manager (as a follow-up to my text on the same subject written for the ‘Report’ class).

In discussion with my mentor, I wanted to double-check the well-known fact that the university will own the resulting work. Yes: the university will claim ownership of the resulting work.

The follow-up question was whether the resulting project could be GPLed; that way, I would be able to continue working on the project later on. The answer is no, I can’t GPL it. I might be able to wrestle with the bureaucracy and get a special exemption, but I’ve decided not to.

To the best of my knowledge, the same policies exist at the Faculty of Electrical Engineering and Computing at the University of Zagreb, widely considered one of the best university-level schools in Croatia, if not the best.

I’m highly disappointed by the Croatian universities’ policy of appropriating work I am forced to do in order to acquire a degree.

I have nothing against them appropriating the accompanying paper, which will not evolve further once written. What frustrates me is the application of the same standard to potentially useful, potentially fast-changing program code.

As a result, and as a form of protest, the software part of my project will be experimental, proof-of-concept research-quality code, and I will not try too hard to make it maintainable long term. The paper and the project will not be intentionally worse, but they will also not be intentionally better than they could be. This is because I want a clear road without obstacles whenever I decide to create a well-structured compositing window manager; I don’t want any obstacles to being able to modify my own code.

I am certain that numerous student developers in countries with similar practices do the same. This results in an enormous waste of time that, in the academic spirit of information sharing, would be better spent contributing to the free software ecosystem. Instead, who knows how much code is either useless in the real world, or, worse, actually useful in the real world but claimed by the universities as their product?

I would suggest that the responsible individuals in Croatia review the history of Google, including the part where BackRub and PageRank (the research that grew into Google) were government-funded work by two PhD students, who nevertheless managed to take it out of the university and start a big company around it.

I would also suggest a review of Ayn Rand’s Atlas Shrugged which, while flawed and too narrowly focused, does offer some food for thought on government appropriation of works. And in the case of academic software projects that would be GPLed anyway, the situation is even worse: we’re seeing the appropriation of a work that I would gladly share with the world, and had previously intended to.

Some things I learned about OpenGL and OpenGL ES

I have done some stuff with OpenGL and OpenGL ES in the past. I generally tried to stick with the fixed-function pipeline; I never had a good reason to dig deeply into features provided after GL 1.4 (most of the time, GL 1.1) and OpenGL ES 1.x. The client for an MMORPG I worked on (both the 2D and the 3D one) did not need anything more. Nor did the simplistic, unfinished spacesim. Neither did the zombie-squishing game for iPhone.

For a few months now, I’ve had access to one of the “major league” game engines (which shall remain unnamed for now!), so I had to expand my knowledge a bit. I didn’t succeed as much as I should have, but here are a few interesting things I have learned.

They may be obvious, but I’ve only learned about them very recently, and it wasn’t trivial to figure them out from the queries I typed into Google: “what does this OpenGL feature do, and how do I do the same in OpenGL ES?”

Internal texture formats

We all know the incantation:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);

So I knew there was talk about the first GL_RGBA involving an “internal format”, and the second GL_RGBA and GL_UNSIGNED_BYTE specified how the textureData argument was, well, formatted. If we replaced them with GL_RGB, we’d drop the alpha from the image. But what did all that mean?

Yesterday I ran into GL_RGB8. And GL_RGBA16. And GL_RGBA4. And GL_RGB565, and more. These can be passed as the first argument and determine how a desktop OpenGL implementation will store the texture. If you pass plain GL_RGB, you’re letting the implementation pick the optimal per-component size in bits; the more specific a sized format you pass, the stricter you are about the internal format the implementation should use for the pixels you passed.

Today I dug into a document from 2007 that I ran into yesterday, which discussed how, on OpenGL ES, the implementation gets to decide: it tries to honor the internalFormat argument based on the format and type arguments.

GL_BGRA is an extension

On desktop, it got promoted to core OpenGL. On OpenGL ES, it’s called GL_BGRA_EXT.

Oh, and GL_HALF_FLOAT_OES does exist on OpenGL ES.

Protip: you do want to look into the headers, especially the extension header (glext.h, gl2ext.h or such), whenever you can’t find whether something exists and you’re sure it should.

Reverse formats

What the hell is GL_UNSIGNED_INT_8_8_8_8_REV? How does it relate to GL_UNSIGNED_BYTE and GL_UNSIGNED_INT_8_8_8_8?

This is strictly as far as I understand it! My understanding may be wrong and I would love it if someone clarified this in the comments! (For reference, the APPLE_packed_pixels extension talks about these.)

Once again, I may be wrong; read the reference above.

Alright, so let’s imagine you passed the source format of GL_BGRA in combination with the above. You’re specifying that the data you have passed has actually not been stored in the BGRA order — but in the reverse. That’s what REV means. Reverse. So you’re telling OpenGL you’re actually passing a texture stored in the ARGB order.

Notably, that doesn’t exist in OpenGL ES 2.0, so one may end up using GL_UNSIGNED_SHORT_4_4_4_4_REV. Without actually having played with it, I suspect that would also mean passing data at 16 bits per pixel, which kind of defeats the purpose of not having to convert ARGB textures. You could bake the data at build time, sure, but still…

Just red, just red and green

There are GL_RG* and GL_R* formats. For example, GL_RG8, GL_RG16 and GL_R8. Yip yip.


Of course, I learned even more things. And I didn’t dig into everything I want to know (what are GL_LUMINANCE and GL_LUMINANCE_ALPHA as texture formats, and why would I want to use them?).

Less than a week before I start the new job. Looking forward to spending some of the remaining time after my Wednesday trip on studying some of the interesting things such as these OpenGL features…

GSoC 2013: Final post

So, this year’s GSoC has reached the hard deadline at 19:00 UTC (21:00 CEST).

And, I’ve managed to fix some critical bugs in the Opal-based backend of GNUstep. Huzzah!

What we now have is doublebuffering support, image support, font support, and blitting from one ‘gstate’ to another (actually between their backing surfaces, taking their respective transformation matrices into account). We also have an assortment of ‘DPS’ methods.

Everything is made possible thanks to the great work of Opal’s authors, especially Eric Wasylishen, who helped by fixing some outstanding bugs in Opal as well as some text rendering code in the Opal backend. He was also of immense assistance by keeping me great company over XMPP during late hours full of frustration 🙂

Thanks also go to everyone who ever worked on or contributed to GNUstep — particularly to gnustep-gui. Especially Fred Kiefer, my last year’s mentor, whose advice always proves invaluable; perhaps initially cryptic, but always invaluable.

Thanks also go to David Chisnall, this year’s GSoC mentor, who kindly welcomed everyone to Cambridge for this year’s GNUstep meeting and tolerated my rare status reports. 🙂

Programs running under GNUstep theme under Opal backend: Ink, TextEdit, SystemPreferences.

Aside from the various delays caused by job interviews, summer camps and sickness (as well as my overall suckiness 🙂), the most confusing thing to me was the concept of a ‘gstate’. As it turns out, GNUstep loves to keep a lot of graphics state in a class derived from GSGState. It loves being able to switch between gstates at will. It likes resetting the matrix to identity whenever it wants. It likes setting custom matrices. It likes resetting the clipping path. In short, it loves to do many things that Core Graphics doesn’t want to let you do.

Which is why I got so confused when Fred Kiefer suggested that getting rid of GState may not be so easy. Core Graphics looked so logical to me and methods that gnustep-back implemented looked like they matched them so closely. Why are things in gnustep-back so complicated?

Turns out there is a good reason — switching and copying of gstates, which DPS permits and GNUstep uses in large amounts.

So in the end, I dumped the branch that tried to cut OpalGState out of my backend. Classes in the gsc/ folder implement enough logic to be worth keeping as base classes for the context, gstate, etc. And Eric was kind enough to implement a way to get the graphics state out of Opal, so I can save it in the OpalGState class in the backend and apply it to a context when appropriate. Whew.

Programs running under Gtk theme under Opal backend: Ink, TextEdit. In background: GNUstep theme SystemPreferences.

The next thing that caused a lot of problems for me was doublebuffering support. First, I got stuck on a misunderstanding of the CGContextDrawImage() API. I thought its rect specified the source and that it painted at 0,0; it turned out the rect specifies the destination, and the source is specified by creating a sub-image. (Well, not really a duplicate; the same image data is still used, just with a different cropping argument set.)

So I used that, and failed horribly. As much as I tried to get it to work (many hours wasted shuffling the code around) — nothing. Turns out Opal, unfortunately, did NOT make use of the cropping specifier in the ‘subimage’. When that was fixed, getting doublebuffering to work was trivial.

Images were fun. During the initial sprint I got them to work, but only for 32-bit RGBA images, and only after a lot of tries. And when the context’s -isDrawingOnScreen returns YES, images break. It turns out they are drawn into an offscreen context and then blitted onscreen using -compositeGState:....

As I didn’t figure out how exactly an offscreen context is created, right now -isDrawingOnScreen returns NO. It’s a shameless lie, but until I get some time to debug what’s going wrong, it’ll stay this way. At least we get icons in System Preferences this way!

Programs running under Gtk theme under Opal backend: Ink, TextEdit, SystemPreferences.

The final issue, which I struggled with for a week or two, was -compositeGState:... (and its buddy -drawGState:...). Figuring out how it’s called, from where, and why was messy, and the fact that it works at all by the deadline is a small miracle. How does one debug why an image is not being correctly painted into an offscreen context — for example, one for images? How does one figure out what offsets -gui imagines I’m applying? Did I forget to apply the device offset, or did I forget to update the CTM? Should I flip the passed .y coordinate? Is -compositeGState:... incorrectly sourcing the image from NSImage‘s backing OpalGState, or is -DPSimage:... incorrectly painting the image into that OpalGState‘s backing surface’s backing CGContext? Does the surface exist at all when -DPSimage... is painting into it? Does the surface have its backing CGContext set up? Did I correctly apply all properties to the -copyWithZone:‘d OpalGState? Maybe the code is trying to call -drawGState:...? And the most important question of them all: why the heck does it not draw?!

So right now we have a mostly functional -compositeGState:..., and it’s committed. Why only mostly? Well, some images don’t seem to be drawn. But as long as you scroll an NSScrollView by grabbing the scrollbar, you’ll get a correctly scrolled scrollview. (You can also try clicking on the last visible item and pressing the down arrow: it’ll work.)

Then try clicking on an arrow button of the scrollbar. Heh — it turns out -gui scrolls it in a different way. What way? I haven’t tried figuring that out yet (as I hit the deadline).

What else is verified to be missing or broken? The blinking cursor in textviews (e.g. in Ink and TextEdit). Round buttons. I have a tableview that I can’t double-click on without crashing the app. (Curiously, the outlineview version of the same test app does not crash.) Sometimes the pen is not lifted in appropriate places — to see this, launch SystemPreferences and open the Themes panel. Popup buttons whose selection happens to have an ‘arrow’ on the right side — in SystemPreferences, open the Font panel and look on the left — will not be correctly painted when clicked.

Aside from being generally unhappy with performance, I’m also especially unhappy with font performance; it does seem to be a sad result of doublebuffering (everything is an image, so X11 is unhappy with that).

I’m not looking forward to fixing what remains broken, but I’ll try. I’ll try. After all — we can’t have Core Animation integration without the Opal backend 🙂

All in all, this is certainly not ‘release’ quality — but it’s close to being actually usable. I’m somewhat pleased that the backend actually works, considering how many issues I had along the way, the largest of which was myself and (again) not being fanatically devoted to GSoC, which is what Google intended the students to be 🙂