Tag Archives: gnu/linux

Ubuntu 13.10 live CD: Blank screen with EFI

Ubuntu 13.10 fails to start X11 on a MacBook Pro with Retina display, and it also fails to start X11 in VirtualBox when EFI mode is turned on. Even the failover mode fails. This was tried with the 64-bit version of Ubuntu 13.10. Machines: a 2013 MacBook Pro, and a VirtualBox 4.3.6 virtual machine configured for the Ubuntu (64-bit) OS type with EFI turned on.

Ubuntu’s failover configuration tries to use the vesa module, which is not available when running in native EFI or UEFI mode.

Let’s fix this by using the fbdev module instead.

  1. Hit ctrl+alt+f1 to switch to the console.
  2. Type sudo -i to become root.
  3. Now let’s fix the relevant files:
    cd /etc/X11/
    sed 's/Driver.*"vesa"/Driver "fbdev"/' xorg.conf.failsafe > xorg.conf
  4. Restart X11 et al.: service lightdm restart
  5. If necessary, switch to the VT dedicated to X11: hit ctrl+alt+f7
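For the curious, this is what the sed substitution in step 3 does to a Driver line (the sample input below is made up; the real input is /etc/X11/xorg.conf.failsafe):

```shell
# Run the same substitution over a hypothetical xorg.conf.failsafe line:
echo 'Driver "vesa"' | sed 's/Driver.*"vesa"/Driver "fbdev"/'
# prints: Driver "fbdev"
```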

Note that the screen will stay blank for a while longer; give the system some time to proceed.

Good luck!

Running a physically installed Ubuntu in VirtualBox on Mac

What follows includes my thoughts and notes more than definitive knowledge. I did manage to run Ubuntu in VirtualBox, but the process by which I got there may include incorrect conclusions. Here goes.

  • VirtualBox 4.2.16 under OS X
  • OS X 10.8.4
  • Ubuntu 12.04 32-bit
  • Windows 8 32-bit
  • MacBook Unibody late 2009

(Yes, I presume you have the above setup; if you have something that’s 64-bit, unfortunately, I can’t help you there.)

DISCLAIMER

Everything in this post is dangerous to your data, even if you are an expert. Think thrice before doing anything, back up everything, triple-check everything I suggest you do for hidden assumptions that may be related to my machine.

Some steps may have inadvertently been omitted. I did not note down everything I did, and I don’t feel like reproducing each and every one of my steps.

I know this disclaimer will probably not dissuade you from trying this, but this includes particularly dangerous stuff and I don’t want anyone misunderstanding: I am NOT advising you to do this. I am NOT advising you to follow my advice. I am only documenting this process for my own private future use.

Mac specifics

Macs run EFI instead of BIOS. This is quite similar to today’s UEFI machines, but it does present its own problems.

The partition table format used is GPT. Again, this is something normal and expected on today’s new UEFI machines, but something people may be unfamiliar with.

Somewhat specific to Macs: if one installs Windows, the resulting partition table includes both MBR and GPT. This is a result of the fact that 32-bit Windows doesn’t boot from GPT and doesn’t like EFI, and, even more so, of the fact that Windows support on Macs was produced before Vista (née Longhorn) was released. The lowest common denominator is, in this case, 32-bit Windows XP; so “BootCamp” produces a GPT+MBR hybrid partition table. (Experts seem to dislike hybrid MBRs.)

Under (U)EFI, the equivalent of BIOS does very little. It has support for FAT (usually FAT32), for loading additional drivers from the FAT partition, and for running .EFI binaries (mostly boot loaders, although in some cases these may be utility programs or shells).

On PCs, we will mostly be interested in 32-bit and 64-bit Intel .EFI binaries (i386 and x86_64 binaries). There are even universal binaries. Running an appropriate .EFI binary is how we boot an operating system. Since 3.3, Linux kernels can include an EFI stub; this means you can take a “vmlinuz” binary and feed it directly to the EFI subsystem of an (U)EFI machine, without the use of ELILO, GRUB2, or some other boot loader.
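For illustration, booting such an EFI-stub kernel directly from the EFI shell might look roughly like this (the partition number, file names, and root device are assumptions for a hypothetical setup):

```
fs0:
\vmlinuz root=/dev/sda4 ro initrd=\initrd.img
```

The first line switches to the EFI System Partition’s filesystem; the second runs the kernel binary itself, passing it an ordinary kernel command line.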

To expose the binary to the EFI system of the machine, you need to mount the FAT-formatted partition of type “EFI System Partition”, change to the \EFI directory on this partition, create a “vendor directory” (named after your OS or your OS vendor), and put a “boot.efi” binary in that directory. Some EFI systems may expose other binaries as well, but this is what one is supposed to do.
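A sketch of that layout, using a temporary directory to stand in for the mounted ESP (the vendor name “MyLinux” is a made-up example):

```shell
ESP=$(mktemp -d)                   # in reality: mount the EFI System Partition here
mkdir -p "$ESP/EFI/MyLinux"        # the "vendor directory"
touch "$ESP/EFI/MyLinux/boot.efi"  # in reality: copy your boot loader binary here
find "$ESP/EFI" -name 'boot.efi'   # prints .../EFI/MyLinux/boot.efi
```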

Alternatively, systems may include a “Compatibility Support Module”. Macs do. This is a BIOS emulation layer and allows booting “legacy” operating systems. Microsoft says this must be turned off by default. Note that it is not a requirement; there is no need to include CSM. (VirtualBox does not.)

Please read around to familiarize yourself with EFI and GPT, and how they differ from BIOS and MBR. Trying to describe them here would either be a lot of work or would make this article even more incorrect than it’s already bound to be.

Easy parts and troublesome parts

VirtualBox supports physical partitions in the VMDK format. You need to use the command line to create the VMDK disk image, but it’s easily doable. It’s essentially a one-liner under normal, tested conditions.

Even when the conditions are not as normal, I’ve successfully run a BootCamp’d Windows XP previously. (Search my blog if you’re interested in the hacks I had to do.)

That was easy: Windows XP deals with MBR, and MBR is trivial.

The problematic part of supporting Ubuntu is that its boot loader, GRUB2, appears to be too smart for its own good. Having installed the 32-bit version of Ubuntu 12.04, it has (by design) not deployed an EFI boot loader: instead of grub-efi, it deployed grub-pc. This is great; the rEFIt boot loader that I use when booting the physical machine picked up on this and allowed me to pick the Linux system. It does, unfortunately, depend on turning off the “bootable” flag in the MBR with fdisk. This appears to be a limitation (or a feature?) of the Apple-provided CSM.

That’s all great. But when I tried to load this system with VirtualBox using default options, I got nothing. Zip. Nada. A black screen and that’s it.

Why? I have no idea. But I presume that GRUB2 picked up on GPT partition table and then got very confused by this on a non-EFI system. Why? Again, no idea. Switching to EFI got the setup to work — but only after installing grub-efi and deploying it to the appropriate place.

Creating virtual disk

You want to have a virtual disk that includes all relevant GPT information.

As I was originally playing with my previous MBR-based method, and the EFI+GPT method evolved from it, my script still includes some dd-ing of GPT data: the first 40 sectors of the disk, and the last 33 sectors of the disk.

IMPORTANT: Always check device names (GPT partition IDs) with sudo diskutil list /dev/disk0. Always check MBR partition IDs with sudo fdisk /dev/disk0. My setup includes Windows 8 and is very weird.

IMPORTANT: This script is NOT intended to be run as-is! Read it to learn what’s going on, triple check every single number, ensure you understand every single line, customize it for your machine and only then think twice before running anything below. Playing with partitions, with disk devices etc is dangerous. Ensure you have backups of everything that’s even remotely significant.

I personally have a Time Machine backup of my important data under OS X, every important project is stored on online code hosting, and everything else under physically installed Ubuntu and Windows 8 is not important. What’s your situation? Can you afford to lose data?

NOTE NOTE NOTE: As of OS X 10.9 Mavericks, detection of the disk size in blocks is broken in the script below. I haven’t updated it; when I needed to run it, I read the size from a manually-run fdisk and put it in the appropriate place. (n.b. this could possibly be doable from native code: ioctl(fd, DKIOCGETPHYSICALBLOCKSIZE, &block_size); see http://stackoverflow.com/a/15947809/39974)

# ALWAYS check devices with:
# sudo diskutil list /dev/disk0

# ALWAYS check MBR partition IDs with:
# sudo fdisk /dev/disk0

EFISYSTEMPARTITIONOSXDEVICEID=1
RECOVERYPARTITIONOSXDEVICEID=3
LINUXPARTITIONOSXDEVICEID=4
LINUXPARTITIONMBRDEVICEID=3

sudo chmod 777 /dev/disk0s$EFISYSTEMPARTITIONOSXDEVICEID
sudo chmod 777 /dev/disk0s$RECOVERYPARTITIONOSXDEVICEID
sudo chmod 777 /dev/disk0s$LINUXPARTITIONOSXDEVICEID

sudo VBoxManage internalcommands createrawvmdk -filename PhysicalDisk.vmdk -rawdisk /dev/disk0 -partitions $EFISYSTEMPARTITIONOSXDEVICEID,$RECOVERYPARTITIONOSXDEVICEID,$LINUXPARTITIONOSXDEVICEID


# (primary mbr=1, primary gpt header=1, primary gpt table = 32, extra = 6) * 512
PRIMARY=40
sudo dd bs=512 count=$PRIMARY if=/dev/disk0 of=PhysicalDisk-pt.vmdk

# secondary gpt table = 32 + secondary gpt header = 1
# see http://7bits.nl/blog/2012/01/02/mac-gpt-partition-table-recovery
SECONDARY=33
DISKSIZE=`diskutil info disk0 | grep "Total Size: .*exactly .* 512-Byte-Blocks"|sed 's/.*Total Size:.*exactly \(.*\) 512-Byte-Blocks)/\1/'`
OFFSET=`calc $DISKSIZE-$SECONDARY`
OFFSET=`echo $OFFSET|sed 's/^ *//'` # ltrim

sudo dd bs=512 count=$SECONDARY iseek=$OFFSET oseek=$PRIMARY if=/dev/disk0 of=PhysicalDisk-pt.vmdk conv=notrunc

First, note the chmods. These will be required after each reboot. OS X only allows root to access the disks (for very important security reasons); VirtualBox does not run with root privileges. I don’t do this lightly; be VERY mindful that this is actually creating a local security hole, allowing user processes to read and even write to the disk.

Next, if you take a look at PhysicalDisk.vmdk, it’s a text file. You can see how various virtual disk sectors are mapped to various physical disk sectors, to “zero”, or to PhysicalDisk-pt.vmdk. (Please do check that you can find a section that matches this; if not, something went wrong in VBoxManage, and you should delete both .vmdk files.)
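For reference, the extent-description section of such a .vmdk looks roughly like this (the sector counts and devices here are purely illustrative; yours will differ):

```
# Extent description: RW <sectors> <type> ["file"] [offset]
RW 40 FLAT "PhysicalDisk-pt.vmdk" 0       # primary GPT area
RW 409600 FLAT "/dev/disk0s1" 0           # a partition mapped directly to the device
RW 488103936 ZERO                         # partitions that were not selected
RW 33 FLAT "PhysicalDisk-pt.vmdk" 40      # backup GPT area
```

The FLAT extents map virtual sectors either to a partition device or to the -pt.vmdk file; ZERO extents cover the partitions you chose not to expose.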

The dd might not be necessary, but it ensures that whatever’s in the GPT is not “accidentally” changed by VirtualBox’s VBoxManage.

Could one map even these 40 initial and 33 trailing sectors to the actual physical disk? Sure. But, why risk anything?
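Incidentally, the OFFSET computation in the script is plain subtraction, so POSIX shell arithmetic works too if calc is unavailable (the disk size below is a made-up example):

```shell
DISKSIZE=977105060               # total number of 512-byte sectors (example value)
SECONDARY=33                     # sectors of the backup GPT at the end of the disk
OFFSET=$((DISKSIZE - SECONDARY))
echo $OFFSET                     # prints 977105027
```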

Installing rEFIt

Download rEFIt from its homepage. The install instructions say all you need to do with the latest version, 0.14, is open the installer package and hit “Next” repeatedly.

Installing grub-efi

I decided to reuse the EFI System Partition. I could have just as easily used the system partition; Apple ships an HFS+ driver, so the EFI subsystem can boot directly from the system partition.

The thing is, Ubuntu can’t write to the HFS+ partition, so it’s slightly easier to reuse the EFI System Partition.

What am I risking? Well, Apple might wipe this partition clean in an OS update. I hope they won’t.

IMPORTANT: The following can mess up GRUB. I can still boot using the “BIOS” GRUB2, but your mileage may vary.

What follows is inspired by Rod Smith’s EFI-Booting Ubuntu on a Mac.

  1. Boot physical Ubuntu.
  2. sudo apt-get install grub-efi – This removed grub-pc on my machine, although I still seem to have the ability to boot using BIOS. (Anything else would be… troublesome.)
  3. sudo mkdir /boot/efi – This is the place where we’ll mount the EFI System Partition.
  4. sudo mount /dev/sda1 /boot/efi
  5. sudo mkdir -p /boot/efi/EFI/Ubuntu – Apple doesn’t ship an \EFI folder. We’ll create it, along with the “vendor” directory for Ubuntu.
  6. sudo grub-install /dev/sda1 – This should install grub-efi to \EFI\Ubuntu.
  7. ls -al /boot/efi/EFI/Ubuntu – You should see two files from Ubuntu here.

It’s important to understand: 32-bit Ubuntu installs 32-bit GRUB2. This will not be bootable on a 64-bit capable Mac. This is solely useful for VirtualBox.

So, ensure that you can still use the BIOS GRUB2, or have an alternative boot method, or else you’re now converting your physical installation into a VirtualBox-only installation!

Creating virtual machine

I don’t have a script for this one. Go back to OS X, go to VirtualBox GUI and create an Ubuntu-type virtual machine. Don’t pick the 64-bit version; this changes the type of EFI that the virtual machine will use!

Pick the previously created PhysicalDisk.vmdk while creating the machine.

Now edit the settings. Right click on machine name, pick “Settings”, and change the machine to be an EFI machine on the System tab. So: right click [machine name]->Settings->System->Motherboard->Enable EFI (special OSes only).

Don’t boot yet! Did you chmod the disk devices? Remember, you rebooted. Please sudo chmod 777 all partition devices in /dev (and be mindful that this is a security hole you’re creating, which you might somehow avoid with UNIX user groups, but meh).

After this point, do not recreate the PhysicalDisk.vmdk without keeping in mind that this file includes disk image IDs in several places. VirtualBox keeps track of the disk images, and will NOT be happy if the ID changes.

So, done now? Great. Boot.

You’ll be shown the EFI shell. Hoorah!

Now, let’s change to the EFI System Partition’s filesystem and boot GRUB2.

fs0:
cd EFI\Ubuntu
boot.efi

This should show you your physical machine’s GRUB menu and the booting should move on. Observe the disk light on the bottom of VirtualBox’s window; if it stops flickering for longer than 15 seconds, and Ubuntu does not boot, you can presume you have some sort of an issue.

Note that the virtual machine has different hardware than your physical machine; for example, the NVIDIA graphics driver does not work for me. I get the console, but not X11. It would be trivial to fix (replace the selected driver with vesa or something similar in xorg.conf), but I don’t care: I’ll just SSH into the machine and tunnel X11 to XQuartz on OS X. I don’t need Unity: I need the ability to work on my code and display the X windows.

So, this works for me. Huzzah!


Small updates

fstab

Add this to /etc/fstab (based on Rod Smith’s post, too):

/dev/sda1       /boot/efi       vfat    ro,fmask=133    0       0

Alternatively, change that ro to rw to get the partition to mount read-write; this may be important for grub updates.
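That is, the read-write variant of the fstab line would be:

```
/dev/sda1       /boot/efi       vfat    rw,fmask=133    0       0
```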

grub-efi-amd64

Ubuntu 12.04 also ships with grub-efi-amd64.

sudo mount /boot/efi -o remount,rw # if not already mounted read-write
sudo apt-get install grub-efi-amd64

Don’t forget to change machine type to “Ubuntu (64-bit)” to update the EFI type.

Note, grub-efi-amd64 conflicts with grub-efi-ia32 and grub-efi, so you’ll end up losing the 32-bit version of the boot loader. This may or may not conflict with the ability to boot from BIOS/CSM; I haven’t tested this yet.

Routing IPv6 traffic through Debian pptpd into Hurricane Electric’s IPv6 tunnel

This is a repost of an answer I made to my own question on SuperUser (the “non-programmer” Stack Overflow) regarding setting up pptpd under Debian to route IPv6.

In the post, I’m also looking into using this under Mac OS X 10.8 Mountain Lion. I fully understand that PPTP is an insecure protocol and have separately also set up OpenVPN. However, I’m looking at this because PPTP is much more ubiquitous than OpenVPN and it’s easier to set up at both server and client side; no playing with certificate authorities, no playing with distributing configuration files to clients, etc. (Yes, I’m highly annoyed at the OpenVPN client for iOS not supporting the static key setup. Yes, I understand static key is less secure. No, I’m not dealing with stuff that require total and complete anonymity or encryption; I just want a VPN to work.)

This post does not deal with routing the segment through OS X once you got it to OS X.

This post only minimally deals with Windows as a client, because it Just Works™, and does not deal with GNU/Linux as a client, because it didn’t “magically” work under Ubuntu when I tried it, and I am not interested enough to figure out why.

The main goal here is documenting what an OS X user with access to a Debian server with a public IP needs to do in order to get his OS X machine onto the public IPv6 Internet without exposing it to the public IPv4 Internet.

Client OS

Mac OS X does not particularly like IPv6 over PPP. Use the following after the connection has been set up:

sudo ipconfig set ppp0 AUTOMATIC-V6
sudo route add -inet6 default -interface ppp0

The former seems to make OS X adhere to router advertisements; the latter adds a default route for IPv6. (Now, if only the certain-fruity-mobile-operating-system version of route provided -inet6, I’d be a happy wooden boy.)

Also take note that OS X will ignore whatever address was supposed to be negotiated over IPv6 and set up only a local address. This may interfere with routing towards OS X.

On the other hand, Windows 8 (of all systems!) happily picked up the address sent over PPP, took note of the router advertisement, and overall configured itself flawlessly. PPTP really works nicely in Windows.

Server

The first thing I missed was that Hurricane Electric’s tunnel broker actually assigns TWO /64 prefixes: one is meant solely for client use, while the other is intended for routing additional clients (such as the PPTP client). And if you need more addresses (or prefixes!), you can even get a /48 prefix. (With IPv6, this means there are more bits for ‘your’ use; HE’s prefix takes ‘only’ 48 bits. That gives you a few more bits to control before the auto-generated suffix, created from a MAC address or even created randomly, kicks in and takes over the last 64 bits. You could theoretically wiggle and subnet even with only 64 bits to spare, but I’ve seen strange behavior on either Windows 8 or OS X, so I wouldn’t rely too much on that.)
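To make the prefix arithmetic concrete: in a /48 such as 2001:470:XXXX::/48, bits 48–63 are yours to assign, giving 65,536 possible /64 subnets before the autoconfigured 64-bit host suffix takes over (XXXX is a placeholder, as above):

```
2001:470:XXXX:0000::/64   first /64 carved out of the /48
2001:470:XXXX:0001::/64   second /64
...
2001:470:XXXX:ffff::/64   last of the 65,536 /64 subnets
```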

Instead of configuring radvd directly and running it as a server, simply don’t configure it globally; that is, don’t run it as a service on Debian.

Instead, let’s follow Konrad Rosenbaum’s example at Silmor.de and have radvd configured after pppd creates the PPP interface.

  1. Set up your IPv6 connectivity. I use Hurricane Electric; I’ve configured it as follows:
    # hurricane electric tunnel
    # based on: http://www.tunnelbroker.net/forums/index.php?topic=1642.0
    auto he-ipv6
    iface he-ipv6 inet6 v4tunnel
        address 2001:470:UUUU:VVVV::2
        netmask 64
        endpoint  216.66.86.114
        ttl 255
        gateway 2001:470:UUUU:VVVV::1
        ## from http://lightyearsoftware.com/2011/02/configure-debian-as-an-ipv6-router/
        # I did not set up the routing of the /64 nor the /48 prefix here, but
        # this would ordinarily do it.  
        #up ip link set mtu 1280 dev he-ipv6
        #up route -6 add 2001:470:WWWW:VVVV::/64 he-ipv6
    
        # Note that Hurricane Electric provides different /64 IPv6 prefixes
        # for the client (UUUU:VVVV) and routing (WWWW:VVVV). 
        # And the /48 prefix is very different altogether.
    
  2. Install pptpd. (Of course, take note of PPTP’s insecurity as a protocol, and consider using OpenVPN or some other alternative.)

  3. Edit /etc/ppp/pptpd-options
    name pptpd
    refuse-pap
    refuse-chap
    refuse-mschap
    require-mschap-v2
    require-mppe-128
    proxyarp
    nodefaultroute
    lock
    nobsdcomp
    ipv6 ::1,::2
    

    Note that the last line is different from the text in my question. You’re assigning some static addresses, which may or may not be respected by your client OS. (OS X seems to ignore them, but Windows uses them.)

  4. Create users for PPTP. The second column filters based on the name argument in pptpd-options. Edit /etc/ppp/chap-secrets:
    ivucica pptpd AHyperSecretPasswordInPlainText 10.0.101.2 10.0.101.3 10.0.101.4
    

    You’re supposed to be able to replace the addresses with * instead of listing them manually. I did not try that out.

  5. Assign your PPTP users some IPv6 prefixes. NOTE: this is solely used by the script I’ll list below, which is derived from Konrad’s script.

    Edit /etc/ppp/ipv6-addr:

    ivucica:1234
    littlejohnny:1235
    
  6. Add new file /etc/ipv6-up.d/setupradvd:
    #!/bin/bash
    ADDR=$(grep ^$PEERNAME: /etc/ppp/ipv6-addr |cut -f 2 -d :)
    if test x$ADDR == x ; then
     echo "No IPv6 address found for user $PEERNAME"
     exit 0
    fi
    
    # We'll assign the user a /64 prefix.
    # I'm using a Hurricane Electric-assigned /48 prefix.
    
    # Operating systems seem to expect to be able to assign the 
    # last 64 bits of the address (based on ethernet MAC address
    # or some other identifier). So try to obtain a /48 prefix.
    
    # If you only have a /64 bit prefix, you can try to assign a
    # /80 prefix to your remote users. It works, but I'm only now
    # trying to enable these users to have routing.
    
    USERPREFIX=2001:470:XXXX:$ADDR
    USERPREFIXSIZE=64
    USERPREFIXOURADDRESS=1
    USERPREFIXUSERADDRESS=2
    
    # Add the address for your side of the tunnel to the PPP device.
    ifconfig $IFNAME add $USERPREFIX::$USERPREFIXOURADDRESS/$USERPREFIXSIZE
    
    # establish new route
    # (when a packet is directed toward user subnet, send it to user ip)
    route -6 add $USERPREFIX::/$USERPREFIXSIZE gw $USERPREFIX::$USERPREFIXUSERADDRESS
    
    #generate radvd config
    RAP=/etc/ppp/ipv6-radvd/$IFNAME
    RA=$RAP.conf
    echo interface $IFNAME >$RA
    echo '{ AdvSendAdvert on; MinRtrAdvInterval 5; MaxRtrAdvInterval 100;' >>$RA
    echo ' prefix' $USERPREFIX::/$USERPREFIXSIZE '{};' >>$RA
    
    # Instead of your DNS...
    #echo " RDNSS $USERPREFIX::$USERPREFIXOURADDRESS {}; };" >>$RA
    # ...try assigning the Google DNS :)
    echo ' RDNSS 2001:4860:4860::8888 {}; }; ' >> $RA
    
    # The creation of radvd configuration could be more readable, but whatever.
    
    # Start radvd
    /usr/sbin/radvd -C $RA -p $RAP.pid
    
    exit 0
    

    Don’t forget to chmod the script to make it executable by pppd:

    chmod 755 /etc/ipv6-up.d/setupradvd
    
  7. The script spews radvd configuration into /etc/ppp/ipv6-radvd/; ensure that the folder exists!
    mkdir /etc/ppp/ipv6-radvd
    
  8. Also add /etc/ppp/ipv6-down.d/setupradvd (and make it executable!) — taken verbatim from Konrad:
    #!/bin/bash
    RAP=/etc/ppp/ipv6-radvd/$IFNAME
    kill `cat $RAP.pid` || true
    rm -f $RAP.*
    

    And

    chmod 755 /etc/ppp/ipv6-down.d/setupradvd
    

I have not tested using DHCPv6 to distribute the routing information, addresses or DNS information, especially since router advertisements should be fulfilling these roles. It also would not help me because, as of Mountain Lion, OS X still does not ship with a DHCPv6 client (perhaps intentionally; nine out of ten IPv6 experts agree that DHCP is evil).

Once again, please note Michael’s comments on PPTP security; consider using OpenVPN in production.

Yes, Konrad Rosenbaum also has a nice tutorial on IPv6 over OpenVPN. :-)

Saving PC sales

So today I’m reading that Dell may be stepping away from the consumer PC arena. What could a prospective PC retailer do to save sales?

While I’ll be ranting about Canonical, Ubuntu, Linux and GNOME a lot, please note that this is just a mention of one possible platform a device maker could have opted for. The same goes for Dell: I’m talking about them, but it mostly applies to others. What I’m getting at is that people want an integrated (but powerful) solution that ‘just works’.

Figure out that people want the sleek and fancy. Steam is fancy. App Store is fancy. iTunes is fancy. Intel AppUp from 2011 was decidedly not fancy; in fact, it was a prime example of the “old” way of doing things on the PC: let’s just pack random garbage in front of the customer and hope he’ll not only bite it, but happily chew it. (Just remember all the “photo handling” software that shipped with your digital camera, or the “antivirus protection” demo software shipping with your shiny new PC.)

Thankfully, the latest version of AppUp from 2012 is a bit fancier, although still somewhat weird.

Figure out that people want to do things differently. How happy are users with the operating system you’re shipping? I personally like Windows 7 a lot lately (more on that later). But how integrated is it with your product? What does your product do? Is it just another box? Admittedly, it may be a neat, shiny box, but what does it do? Oh — this thing on it isn’t your product? Uh-huh, so you’re just another box-maker?

Hint-hint: end users like custom (but usable) stuff. At one point, the aforementioned Dell promoted Ubuntu on its machines. What they haven’t done is sit down with Canonical and decide how to make Ubuntu the operating system for their machines. Not only that — they should have thought about how to make Dell’s laptop the machine for running Ubuntu.

Dell and Canonical could have figured out what exactly people want and how they want it done. In my previous life as a Linux user, I was quite “needy” and I desired customizability, shunning Ubuntu for Debian. But that’s not what people want. People want stuff to “just work”. I want it too nowadays. I also want a company to figure out how the user interface should work, and make it work that way instead of me. I want them to figure out what is the best way for me to achieve my goals.

And then I want them to prescribe that as sacred rules to developers on their platform. Then I want them to justify why those sacred rules exist. (The way the NSDocument class works in the Cocoa frameworks has recently allowed Apple to introduce a “recent files” list both in Lion’s Exposé for an application’s windows and in the Dock icon menu.)

I want those sacred rules to be sane and enabling to the developers, instead of arbitrary decisions slapped together by a bunch of monkeys. (And I’m not pointing fingers at a single platform or library here — but pretty much at most platforms and libraries out there.)

Figure out that people want to do stuff with their machines. After securing a deal with Canonical, Dell should have attempted to secure a deal with, for example, Adobe to port at least their flagship product Photoshop to Linux (or more specifically Ubuntu). There are bound to be many, many hurdles along the way. But instead of toying with The Gimp and waiting for them to actually make a tool that is usable by real people, getting Adobe to bring their product over would make the platform (and products) stand out and appeal to an audience. And if Adobe doesn’t want to cooperate, invest those profits in your long-term gain: look at Photoshop and replicate it under Linux, including keyboard shortcuts and whatnot.

Go and fix OpenOffice’s interface, or at least lift what you can in designing an office suite that works and looks as an office suite should. Or write your own — Apple surely did with iWork, and they worked on that even before the iPhone and iPad were insanely profitable like they are today. Compared to today, the iPhone was only mildly popular back then.

Can you see the big picture now? Can you see how a platform could have and should have come together to save, for example, Dell?

As Apple has built their OS on the strong base of BSD userland and Mach kernel, Dell and Canonical could have delivered integrated products based on GNU userland and Linux kernel. They should have worked on securing partnerships to deliver key products to what was (and is) a nascent desktop environment.

Apple did not use window compositing to bring you toys like a 3D cube, but to bring you tools to switch between windows and apps. Dell and Canonical should have and could have slimmed down Compiz. GNOME 3’s window manager is a nice experiment in this direction, but at first look it lifts from Apple so blatantly in some ways that I can’t help thinking they should have and could have done better. It could have and should have been better than what Apple does.

Figure out how to cut the stuff out. As mentioned, I personally like Windows 7 a lot lately, but it hasn’t struck a good balance between exposing whatever a power user needs and hiding anything a common user doesn’t need. It’s still too complex for a common user, and at the same time, any attempt at simplification and hiding means the actual stuff you need is now hidden behind menus, and behind more menus, and behind more menus. See: attempting to configure a just slightly more complex Wi-Fi setup in Windows 7. Something is seriously wrong if it’s easier to change the resolution and color depth in Windows 95 than it is in Windows 7.

Compiz needed to be cut and configured to sane defaults. Or it should have been thrown away and a custom manager should have been written.

As long as we stick to the UNIX principles wherever possible, I can take your window manager and throw it away. Or I could write my own settings app. But if you do a good enough job, I will not want to.

I am currently not inclined to move away from the Mac, and the amount of customization I do is minimal. Some people use custom app launchers; I’m satisfied with launching apps through the Dock or Spotlight.

But in case I want to move away, I’m hoping GNUstep takes off and provides a viable way for people to port their OS X apps to other platforms. I hope for a healthy GNUstep ecosystem where people are free to share code, but also to sell the fruit of their labors.

But I am not really interested in moving away right now, because Apple delivers a good, complete, healthy ecosystem today, along with an integrated hardware+software stack where things like driver issues are rare and shocking events happening mostly to early adopters — they are definitely not a common daily occurrence for most users.

To save your sales, deliver a healthy, integrated hardware+platform+applications ecosystem. For a corporation as big as Dell, any investment into their own platform would have been an investment in their long-term future. It would have been diversification, and it would be a way to stay unique long-term. Not doing a good job of creating a platform when you’re a multi-billion-dollar company, especially when you can take other people’s work as a starting point, should be inexcusable. In fact, not even attempting it may be an even greater sin.

Make yourself stand out with an outstanding product that “just works”. Half-assed experiments with Linux, just because it’s Linux and “free”, won’t save you and will flop.

Delivering a complete product starting with a laptop designed around a platform (which may be based on Linux), and delivering a complete platform designed around your laptop is a good way to start.

Why GNU/Linux is not successful on desktops

I used Debian for a long time. I used it as a desktop OS. I did a lot of development and tinkering. I don’t have time for tinkering anymore, and I was lucky enough to get a Mac.

I was inspired to write this short outline of my views on why GNU/Linux is, sadly, not right for the average desktop user by a tweet from @ivan_gustin (retweeted by @ambivalentcase) that mentioned Linus Torvalds’ thoughts on the same subject from LinuxCon Europe 2011.

Let me point out: I WANT STRONG LINUX. I want freedom, I want power. What I don’t want is ground slipping below my feet.

Technical issues

Torvalds is spot on here. You can’t give the end user a machine that might or might not work. Things have improved enormously here, and were already good back in 2006.

The greatest issues, however, are regressions and the constant feeling of the ground moving under the user’s feet. I can’t in good faith press that “update” button and be nearly certain that the machine will work after the update is performed. Ubuntu is doing very well here if you update within the same release. The next release may or may not work. However, I have had almost no graphics card driver upgrade that didn’t require some sort of tinkering in the console afterwards and messing with xorg.conf.

Linux is awesome because it recognizes almost every piece of hardware that you throw at it.

User interface keeps changing

Look at OS X Tiger. Then look at OS X Leopard. Then look at OS X Snow Leopard. Then look at OS X Lion.

If you go from one version to the next, you get a user experience that is different, but not very different. Apart from performance issues, what was a major adoption blocker for Windows Vista? A radical reworking of the UI.

People don’t like changes to the UI that are forced on them. If I just want, for example, a better mail client that, let’s say, comes with Ubuntu 11.04, why am I expected to get used to Ubuntu Unity? If I just want the terminal to keep working and be improved, why does the GNOME team force me to upgrade not just the terminal, but the entire desktop, if I want a complete and integrated user experience? Oh, and the new desktop environment in GNOME 3 is, of course, completely different from GNOME 2.

And GNOME 3 and Ubuntu Unity are just reruns of the KDE 4 saga.

KDE 4 decided to make everything modular. GNOME 3 and Ubuntu Unity decided to develop a UI that is usable on both tablets and desktops. What happens next? Someone bright decides that desktop users must use a media center-like interface?

Features that disappear

I really don’t like it when a package suddenly gets removed from Debian. Remember XMMS? A beautiful clone of Winamp whose only flaw was that it was written with GTK1. Someone thought it was a good idea to rewrite it as a server that plays the music and a client that controls the playback, and called this XMMS2. The end-user client was, of course, seriously flawed and buggy, and its only boon was that it was written with GTK2. (That didn’t help it, because it was unskinnable and ugly.) Debian removed the original XMMS because it was “unmaintained”. Great move!

The same thing happened to BitchX. Users who noticed were directed to other command-line IRC clients, such as irssi, which is far less usable and far uglier. At least to me.

Things changing, changing, changing, moving around, breaking down

I’m not exactly a kid anymore. I don’t have time or will to continuously tinker with stuff I already tinkered with. I just want things to work.

I don’t want to worry about packages conflicting.
I don’t want to worry that after an upgrade, a package will be removed due to a conflict.
I don’t want to worry that after an upgrade, my graphics card and my wireless card suddenly stop working because, hey, the kernel was upgraded and previously installed modules no longer work with the new kernel.
I don’t want to worry that after an upgrade, I will have to relearn basically everything because some dimwit decided to “fix” what was not broken: the previously extremely usable UI.
I don’t want to worry that after an upgrade, no icons appear on my desktop.

Applications and features

Hello. Meet my friend, Anna. She’s a graphics artist. She heavily uses Photoshop on a Mac. She needs, at the very least, effects applied on layers. She also needs to work with other designers’ PSD files.

What option does she have on Linux? GIMP? Seriously?

I can use GIMP. I like GIMP. Unfortunately, Anna cannot.

Hello. Meet aunt Silvia. She’s a 60-year-old woman taught to work with Microsoft’s Word 2007 for Windows.

Can she use either Word for Mac or OpenOffice? Let me tell you, she cannot.

You cannot force her to use it. She does not want to learn something new, even if sticking with what she knows means pirating the software. (She does not understand that your refusal to install pirated Windows and Office is a moral choice. She will think you are a jerk.)

Hello. Meet Tom. Tom writes a lot of documentation. Tom got used to one thing that Macs do really well: drag and drop. He grabs almost anything, drags it, presses F3 to show Exposé, points to a window, presses space so he doesn’t have to wait for the window to be zoomed into, and drops it.

It works great with, for example, word processing.

Screenshotting is also nice. The keyboard shortcut is extremely nasty and unergonomic, but the ability to grab any portion of the screen, or a nice image of a window (complete with a shadow), is just a few keytaps away, and it’s integrated into what would be the “window manager” under GNU/Linux or other X11 environments. You can easily get the screenshot as a file on the desktop, or you can get it into the clipboard for quick pasting into the mail client.

And under GNU/Linux, you have to worry whether dragging-and-dropping or copying-and-pasting will even work between programs under the same desktop environment; the situation is way worse when you have to exchange content between different environments. Can I be sure that an image put into the clipboard under Konqueror will be pasteable into GIMP or Thunderbird?

Exposé doesn’t seem useful until you realize how well it combines with drag and drop, especially when you press space. I can’t remember any use for the desktop cube effect except that it made me feel warm deep inside. (Yep, I still love the desktop cube. Sometimes I perform “switch user” on OS X just to watch the default desktop cube animation.)

Hello. Meet me. I’m a developer, and what I’m about to say is unfair. I love Objective-C. I love the concept of building the UI in an Interface Builder. I don’t need Objective-C per se, as long as I can use similar software building practices.

Java seems awfully close.

Except it isn’t. It’s static and restrictive where it shouldn’t be.

GNUstep folks are great, and they’re doing a great job. But they don’t really have the resources to build a good IDE, nor to prepare a good introduction for new Objective-C developers. C++ developers have the very nice Code::Blocks, which is wonderful, until you need to develop a GUI app.

I also want to sell my software. I love doing free software/open source work, but sometimes a person has to live off something. My end users expect good-quality packaging, an easy-to-install app, and support that will last even if I don’t update the app manually. Ubuntu’s developer portal is an excellent step forward for distribution, but building a quality .deb package is still difficult. Intel’s AppUp is confusing and little known. I’m also from Croatia; can you folks send cash to my country? (AppUp had issues with that, or so I’m told.)
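To give a sense of what’s involved, here is a minimal sketch of the debian/control file for a hypothetical app called myapp (every name, address and version below is made up), and this is only one of several files a proper package needs:

```
Source: myapp
Section: utils
Priority: optional
Maintainer: Jane Doe <jane@example.com>
Build-Depends: debhelper (>= 9)
Standards-Version: 3.9.4

Package: myapp
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: short one-line summary of myapp
 Longer description, each line indented by one
 space, explaining what the package does.
```

And that’s before you write debian/rules, debian/changelog and debian/copyright, appease lintian, and figure out the upload process.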

Hello. Meet Harry. He’s a 14-year-old gamer who wants to play StarCraft II, Call of Duty: Modern Warfare 2 and Portal 2.

I’ve had mostly bad experiences with Wine. Sure, you can play the original StarCraft in singleplayer. You can even play it in multiplayer! However, what if something goes wrong? Also, have you tried playing over Battle.net?

Have you tried playing Rise of Nations? Have you tried any other old title that uses DirectPlay? Did you see how many titles work, but their installers don’t work?

What happens when Harry’s graphics card drivers don’t start? Does he want to worry about whether it’s safe to update nVidia’s drivers? Does he want to worry about where the installer will put the launch icons? What happens when the .desktop file (the shortcut for launching apps) doesn’t work? I can probably get a title to work with Wine, but will Harry be able to?
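For reference, this is roughly what a working .desktop entry looks like (a hypothetical one; the name and paths are made up). If any of these fields is wrong, or the file lands in the wrong directory, the launcher silently does nothing or never appears:

```
[Desktop Entry]
Type=Application
Name=Some Game
Exec=/opt/somegame/bin/somegame
Icon=/opt/somegame/share/icon.png
Terminal=false
Categories=Game;
```

Harry is not going to open this in a text editor to find out why his game icon is broken.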

And let me tell you, Harry won’t be happy when the 60 EUR game he just bought doesn’t work and he has to tinker with the computer to get it to run.

Conclusion

Who is GNU/Linux for?

Enthusiasts, and people who use computers only for Skype, surfing and mail.

That’s not the massive user base that pushes desktops and desktop usage forward. The user base that does is home power users, gamers, business users, artists and developers. These days, even a person who can do word processing with one package cannot use another.

And you know what “people who use computers only for Skype, surfing, mail” can use?

A tablet or (shudder) a netbook.

Either an Android tablet, an iPad or a Chromebook will do for them.

Solution

* Stabilize the UI. (Apple faced backlash for its reworking of scrollbars.)
* Be careful before making drastic incompatibility moves. (Apple faced backlash for removing Rosetta, breaking TurboTax 2007.)
* Stabilize the ABI. (Apple often ships very old versions of libraries. Removing Rosetta allowed removal of old PowerPC-only code.)
* Help the developers. (You are a free software enthusiast and that’s fine. But don’t think everyone is. If I want to ship paid software, and I have a market, don’t lock me out.)

Don’t break stuff, don’t change stuff. Help the dev.

I admire Linux Game Publishing’s struggle in the face of licensing issues (for example, SDL’s LGPL), in the face of incompatibility, in the face of historic disregard for the ABI. It is difficult to package executables that are easily relocatable to different places in the filesystem, because the ecosystem assumes that, since the software is open source, you can simply recompile the program for the new location. (Dependency on dynamic libraries put in fixed locations is the biggest issue.)

Dropbox under Debian

To get Dropbox with the Nautilus extension running under Debian:

1. Download .tar.bz2 from Dropbox’s site.
2. sudo apt-get install libnautilus-extension-dev python-docutils
3. tar xvjf nautilus-dropbox-0.6.7.tar.bz2
4. cd nautilus-dropbox-0.6.7
5. ./configure
6. make
7. sudo make install