Category Archives: Apple

How do headsets know they may trigger Google Assistant or Siri?

I don’t know what the Bose QC35 II is doing: the Action button refuses to do anything unless it’s sure it’s talking to either Google Assistant or Alexa (no Siri mentioned in the app, interestingly).

I can’t get the 2021 version of the Star Trek TNG Bluetooth Combadge to trigger anything when connected to a Linux machine. A regular press triggers KEY_PLAYCD and KEY_PAUSECD, thus mapping onto the relevant X events and interacting well with my desktop’s media players (particularly Chrome) — but a double-press, which normally activates Siri on my iPad, sends no input device events on the relevant /dev/input/event* special file. There’s just no traffic.

btmon is an interesting discovery, and it pointed me in the direction of the world of AT commands flowing as ACL Data on my local hci0 device. Many of the ones flowing are documented in Qt Extended’s modem emulator component documentation from 2009: it starts with the combadge sending AT+BRSF and getting a response, then sending AT+CIND and getting a response, and so on and on and on.
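For flavor, the start of that exchange looks roughly like this (a hand-written sketch based on the Hands-Free Profile spec and the values I saw, not a verbatim btmon capture):

combadge (HF):  AT+BRSF=127
desktop (AG):   +BRSF: 1536
desktop (AG):   OK
combadge (HF):  AT+CIND=?
desktop (AG):   +CIND: ("service",(0,1)),("call",(0,1)),...
desktop (AG):   OK
combadge (HF):  AT+CIND?
desktop (AG):   +CIND: 1,0,...
desktop (AG):   OK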

If I am reading everything right, the values returned are decimal numbers representing a binary mask. btmon output seems to indicate the combadge (the ‘hands-free’ device) claims it supports 127 (binary 111 1111, i.e. every functionality bit in the Modem Emulator docs), while the desktop (the ‘audio gateway’) says it supports 1536, which is binary 110 0000 0000 — meaning the only bits that are set are in the reserved range from the perspective of the 2009 Modem Emulator documentation.

A list of flags can also be found in a 2013 bluez test for HFP. There, one of the formerly ‘reserved’ bits is specified as AG_CODEC_NEGOTIATION; luckily, we can find the other one in ChromiumOS’s source code. Inside something called adhd (apparently, the ChromiumOS Audio Daemon) and the server part of its cras component, the constants live in cras_hfp_slc.h. So, the other bit the desktop claims to support is AG_HF_INDICATORS, which also has nothing to do with remote control.
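To sanity-check my reading, here’s a small bash sketch that decodes the desktop’s bitmask; the bit positions and names are my interpretation of the HFP spec and cras_hfp_slc.h, so treat them as assumptions:

# Decode the AG supported-features bitmask seen in btmon.
# Bit positions are an assumption based on the HFP spec / cras_hfp_slc.h.
AG_FEATURES=1536
declare -a BITNAME
BITNAME[2]="voice recognition"   # the bit we care about for AT+BVRA
BITNAME[9]="AG_CODEC_NEGOTIATION"
BITNAME[10]="AG_HF_INDICATORS"
for bit in 2 9 10; do
  if (( (AG_FEATURES >> bit) & 1 )); then
    echo "bit $bit is set: ${BITNAME[$bit]}"
  fi
done
# Prints bits 9 and 10 only; an audio gateway that also claimed voice
# recognition would report 1536 | (1 << 2) = 1540 instead.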

That source code also indicates we can read the Hands-Free Profile specification, the latest one being version 1.8 available on Bluetooth.com.

So, if I am interpreting everything correctly, the combadge says it supports “everything”, but the desktop doesn’t tell it back that it knows what voice recognition is. No wonder we’re not seeing any traffic.

So we don’t need to support Apple-specific HFP commands such as AT+XAPL (Bluetooth accessory identification), AT+APLSIRI (confirming the device specifically supports Siri) or AT+IPHONEACCEV (sharing battery level), which is nice. Both of the platforms documented in the combadge’s marketing materials and manual (Google Now, i.e. Assistant, and Siri) document their support for AT+BVRA from the Hands-Free Profile specification; see Google Assistant’s “Voice Activation Optimization” document for Bluetooth devices, as well as the “Accessory Design Guidelines for Apple Devices” (release R16 talks about this in section 30.3.1).

Instead, it looks like we mainly need to trick the desktop into responding to the combadge’s AT+BRSF request with a bitmask that includes the voice recognition bit, and move on from there, hoping that the combadge starts emitting AT+BVRA and that we can easily capture it programmatically!
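If it does, even eyeballing btmon output should reveal it — a crude sketch (this assumes btmon’s ASCII dump of the ACL payload makes the AT string greppable, as it did for the commands above):

sudo btmon | grep -i BVRA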

But that’s a topic for another post.

Why I choose not to use WhatsApp, Viber et al

There are many messenger apps these days that have very similar features, and are widely used. I’d usually describe them as “modern” messengers. I choose not to use them. I sometimes get into discussions about why. I’ll update this post if I get new perspectives or if I find better ways to clarify my opinions.

Here are some anti-features from my perspective, widely (but not universally) shared, that make me strongly prefer not using these messengers:

  • Uploading all contacts. Many modern messengers use your phone’s address book as the primary source for the contact list. This is, in principle, laudable. One source of contacts is a good idea[1].

    However, to enable distinguishing contacts that use this messenger from those that don’t, the clients have opted to query their servers for this piece of information. To do so, they upload all your contacts and check whether each phone number is connected to the service.

    Despite not having a signal of whether a contact is ‘weak’ or ‘strong’ (occasional and mainly formal interactions vs daily friendly interactions), messengers can use this to form social graphs. I have no reason to believe they are doing this or exploiting this information; however, I’d prefer that a smaller number of companies have access to my contact list. I’m sure my contacts would prefer that as well.

    For this reason, I’ve chosen that the primary company that’ll have access to this is the one that already syncs my contacts and sees all my email: Google. This means I get restricted to Hangouts (which is mid-way between ‘classical’ and ‘modern’ messengers) or Allo (which is slick, but underused, and has other flaws from this list).

    • Workaround: Messengers, please let me choose not to upload all contacts. Please don’t tell me that I can’t use iOS’s and Android’s permission systems to block you from grabbing all my contacts. Please let me share only some contacts with you, or manually enter the phone number I want to reach out to.
  • Phone numbers only, please. I use many devices. Counting off the top of my head, I use 8[2] ‘smart’ devices regularly and 4[3] sporadically. This is not counting all the operating systems I have on them: both my desktop and my old laptop have 3. How about browsers? Anything with OS X has at least Safari and Chrome.

    I change environments multiple times a day. I’ve changed countries. I could change my phone number. It’s not unreasonable to expect my conversation to continue from one environment to the other. If I’m on my desktop, I strongly prefer not to have to take out my phone just to see that Jack has said “hi” without any other follow-up. And doing this while I’m dealing with a page, or writing or updating a very convoluted test, is very distracting. It could be an important message — should I really have to choose between spending 15s getting the phone to see the “hi” (leaving me distracted for the next 5-10min) and leaving a possibly important message unseen?

    Tying a messenger to one phone number and thus one device is ridiculous.

    • Non-workaround: Browser-based solutions. I could receive and send messages from my desktop — hurray? While I do want a web-based client to be available when I’m on a Chromebook, due to e2e they usually require messages to go through your phone, only to go back through the provider’s servers (presumably re-encrypted) and be presented in a web UI. I object to this convoluted solution on moral grounds. 🙂

      I also don’t expect such a client to integrate that well with my desktop environment.

    • Non-workaround: Wearable devices. While I can see the messages quickly, I have to actually own one. My Moto 360 broke down a long time ago, and I’m still waiting for a decent, affordable Android Wear 2.0 device to become available in Ireland. (If I need to get a new one, why not get a proper upgrade?)

    • Counter-example: iMessage. Yep, in place of a workaround, I’m giving a specific “modern messenger” solution. I can tie not only multiple phone numbers but also multiple email addresses, all on multiple devices, to the same account. Messages.app (formerly iChat), the OS X app that integrates with iMessage, is a desktop, non-browser solution that neatly ties into the OS as well.

      I would use this messenger much more if iMessage was available on non-Apple devices and on the web.

  • Ubiquitous e2e. In principle, I like encryption. It does come with huge costs, though. Most messengers that implement it (well) become terrible at syncing message archives, and terrible at storing them for prolonged periods of time.

    They also have to decide where to store the keys. To keep the whole contraption secure, they often choose a storage mechanism that makes it hard to exfiltrate the keys. This is a good thing — except it prevents sync from working, and it makes it hard to introduce new devices (or browsers!) into the mix. And as I said, I use many, many devices.

    Situations where I actually, genuinely care about e2e enough to break message sync, message archiving, and make provisioning new devices for the same account difficult or impossible — those situations are very rare. I can think of maybe 5-10 cases over the past 3 years, and I can’t even recall the specifics. Cases where I wanted to find details of an old conversation, or where I wanted to continue an old discussion, those are far more frequent.

    • Counter-example: iMessage as a service is doing somewhat well here again. I am just guessing, but it seems like, once a device is provisioned, each message gets encrypted for that device’s key in addition to all the other devices’ keys. If a device is under-used, the key gets phased out. Messages get synced while a device is provisioned.

      Where it’s not doing so well is in-browser support. Apple recently introduced Business Chat and iCloud syncing for messages. It seems to let third-party providers create integrations with iMessage, including web-based ones. It’s for businesses only, from what I can tell!

    • Counter-example: What about, say, WhatsApp’s web UI? Link it to your phone, and have all messages go through it; a secure solution, but one I object to morally. I was going to say “I have no idea how message sync interacts with e2e in WhatsApp”, but for me it would be a non-problem, as either I’d use the web UI (which would presumably fetch messages from the phone), or I would not have message sync at all (as only one phone has a particular phone number). Possibly the key and messages get backed up to Google Drive on Android, but that solves the problem of “I’m changing my phone”, not “I’m using multiple phone numbers and non-phone devices concurrently”.

    • Workaround: What I’d really like to see happen is optional e2e. At the very least, let users agree to forgo e2e, and reap the benefits of message sync, a nice and slick web UI, and easier provisioning of new devices. When I use XMPP, I don’t bother at all with turning on OTR, OMEMO, or OpenPGP (mechanisms supported in Conversations, the top-of-the-line XMPP messenger for Android) — but I strongly care about support for Message Carbons (“deliver messages to all online clients”) and Message Archive Management (“archive messages on the server and let clients request the archive”); a sketch of the server side follows below. I own my domain, so I get the benefit of not being tied to a single provider. Friends who use my secondary domain are also welcome to request an archive export should they choose to spin up their own server — I’ll gladly spend the time providing them this data. (I’ll delete their data from the archives on my server as well, but otherwise I’d expect to be able to keep my own records of these chats.)
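      For the curious, enabling those two features on one’s own server can be as simple as this Prosody config stanza (a sketch from memory — module availability varies by version, and ejabberd spells things differently):

      -- prosody.cfg.lua (sketch): the server-side pieces of the sync story
      modules_enabled = {
          "carbons"; -- XEP-0280: deliver messages to all online clients
          "mam";     -- XEP-0313: archive messages server-side for later retrieval
      }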

I could also simply not worry about these problems.

For example, my personal social graph is not going to be important, or even a useful source of information to sell me things[4]. That said, I don’t know what all the contacts in my address book are up to. Do I want everyone to tie me to them? It probably does not matter, but I choose to draw the line there.

I could also choose to use the messengers only on one device, and ignore notifications that come while I am focused. I could choose to accept e2e and all the downsides it brings to the sync table. I could choose to use iMessage with my Apple-toting buddies (hint: there aren’t many!). I could choose to install Facebook Messenger, tolerate battery drain, and tolerate having an additional company have access to my communications.

All that said… I don’t get that many benefits from any of these messengers. I can easily reach the people I care about with XMPP, Hangouts or even SMS. If SMS fails, I can, occasionally, even reactivate the Facebook account and reach out to people using Facebook Messenger on the desktop. I don’t have a good reason to compromise, or to figure out a workaround such as setting up an XMPP transport for WhatsApp. People who happen to be using WhatsApp I can reach through SMS, and often through Hangouts as well.


  1. Android allowed apps to do it the other way around, too: applications can integrate with the Contacts app. In practice, social network apps, even those that did integrate with Android’s contacts, chose to remove the integration many years ago. This is disappointing. 
  2. Phones: Nexus 6P (personal), iPhone 7 (work). Tablets: iPad Air (personal). Computers: desktop, Macbook Pro 2016, Digital Ocean VPS (personal), workstation, HP Chromebook (work). Other: Nvidia Shield Android TV, Samsung 6400 TV, QNAP TS-509 NAS w/ debian (personal). 
  3. Phones: Nexus 5, Jolla (personal). Tablets: original iPad w/ iOS 5.1.1 (personal), Nexus 7 (work). Computers: Macbook unibody late 2009, Chromebook (personal). 
  4. I mean, I rarely buy exactly the same product just because a friend has it. 

A European's experience in [NOT] buying a MacBook Pro in US

I’m visiting NYC this week. I’ve come to the US thinking hard about whether or not I want to buy a MacBook Pro. I decided that yes, I do. Now I’ve changed my mind. Why?

I wanted to do a couple of things: use my employee discount, pay with my Irish card, and get a European-style keyboard layout, which I absolutely required. This last requirement is the ultimate reason I decided to skip the purchase.

Apple is excellent if you are an average customer. Walk into one of the stores (there’s one a 5-minute walk from me), pick up what you want, walk out happy. I could have done that. I can still do that. I don’t want to.

First, I want my discount. It saves me a non-insignificant amount of money. I doubt I can apply the discount from within the store. [UPDATE: Americans love to apply sales tax AFTER quoting you the price. So my total savings are a bit less than expected. Still, they exist. And my estimates include Irish VAT, because I’m nice like that. Smugglers would really save a lot.]

Second, I want a European-style keyboard (British at the very least). I have a US layout wireless keyboard, and if I switch to Croatian layout in software, I cannot type < and > on it.

Turns out that Apple does not stock laptops with different layouts in their stores. Turns out that I need to wait 1-3 business days for them to build the custom laptop (!!!) and then another 1-3 business days for them to expedite-ship it. Oh you want standard shipping? 3-5 days. You dared to desire to pick it up at the store (which should actually be easier for them)? Well, feelin’ bad for you buddy — it’s as if you picked standard shipping.

Maybe you wanted to pay with your Irish card? No go — they want your billing phone number to be a US number, and your billing address to be a US address with a US zip code.

Especially if you include the fact that I’d need to deal with Irish customs as well, that’s enough divine signs telling me I should skip this purchase. Maybe I’ll change my mind, but I doubt it.

What kind of bullshit is not stocking some British-layout MBPs somewhere in NYC? What kind of bullshit is “it takes 1-3 days to replace a keyboard”? With minimal training, it could be done in-store, even if you couldn’t stock 2 variants in each store. They don’t sell often enough? Don’t keep 50 of them, but do keep 2.

And I was so looking forward to a Mac with a decent GPU.

Running a physically installed Ubuntu in VirtualBox on Mac

What follows includes my thoughts and notes more than definitive knowledge. I did manage to run Ubuntu in VirtualBox, but the process of getting there may include incorrect conclusions. Here goes.

  • VirtualBox 4.2.16 under OS X
  • OS X 10.8.4
  • Ubuntu 12.04 32-bit
  • Windows 8 32-bit
  • MacBook Unibody late 2009

(Yes, I presume you have the above setup; if you have something that’s 64-bit, unfortunately, I can’t help you there.)

DISCLAIMER

Everything in this post is dangerous to your data, even if you are an expert. Think thrice before doing anything, back up everything, triple-check everything I suggest you do for hidden assumptions that may be related to my machine.

Some steps may have inadvertently been omitted. I did not note everything I did, and I don’t feel like reproducing each and every one of my steps.

I know this disclaimer will probably not dissuade you from trying this, but this includes particularly dangerous stuff and I don’t want anyone misunderstanding: I am NOT advising you to do this. I am NOT advising you to follow my advice. I am only documenting this process for my own private future use.

Mac specifics

Macs run EFI instead of BIOS. This is quite similar to today’s UEFI machines, but it does present its own problems.

The partition table format used is GPT. Again, this is something normal and expected on today’s new UEFI machines, but something people may be unfamiliar with.

Somewhat specific to the Mac: if one installs Windows, the resulting partition table format includes both MBR and GPT. This is a result of the fact that 32-bit Windows doesn’t boot from GPT and doesn’t like EFI — and, even more so, a result of support for Windows on Macs being built before Vista (née Longhorn) was released. The lowest common denominator is, in this case, 32-bit Windows XP; so “BootCamp” produces a GPT+MBR hybrid partition table. (Experts seem to dislike hybrid MBRs.)

Under (U)EFI, the equivalent of BIOS does very little. It has support for FAT (usually FAT32), for loading additional drivers from the FAT partition, and for running .EFI binaries (mostly boot loaders, although in some cases these may be utility programs or shells).

On PCs, we will mostly be interested in 32-bit and 64-bit Intel .EFI binaries (i386 and x86_64 binaries). There are even universal binaries. Running an appropriate .EFI binary is how we boot an operating system. Since 3.3, Linux kernels seem to include an EFI stub; this means you can take a “vmlinuz” binary and feed it directly to the EFI subsystem of an (U)EFI machine, without use of ELILO, GRUB2, or some other boot loader.

To expose the binary to the EFI system of the machine, you mount the FAT-formatted partition of type “EFI System Partition”, change to the \EFI directory on this partition, create a “vendor directory” (named after your OS or your OS vendor), and put a “boot.efi” binary in that directory. Some EFI systems may expose other binaries as well, but this is what one is supposed to do.
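As a concrete sketch (with a made-up vendor name; on Linux the EFI System Partition often ends up as /dev/sda1, as it does later in this post):

# Sketch: expose a boot loader to the firmware. "MyOS" is a made-up
# vendor directory name; adjust devices and paths to your machine.
mount -t vfat /dev/sda1 /mnt/esp
mkdir -p /mnt/esp/EFI/MyOS
cp boot.efi /mnt/esp/EFI/MyOS/boot.efi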

Alternatively, systems may include a “Compatibility Support Module”. Macs do. This is a BIOS emulation layer and allows booting “legacy” operating systems. Microsoft says this must be turned off by default. Note that it is not a requirement; there is no need to include CSM. (VirtualBox does not.)

Please read around to familiarize yourself with EFI and GPT, and how they differ from BIOS and MBR. Trying to describe them here would either be a lot of work or would result in an even more incorrect article than this one is already bound to be.

Easy parts and troublesome parts

VirtualBox supports access to physical partitions via the VMDK format. You need to use the command line to create the VMDK disk image, but it’s easily doable — essentially a one-liner under normal, tested conditions.

Even when the conditions were not as normal, I’ve successfully run a BootCamp’d Windows XP previously. (Search my blog if you’re interested in the hacks I had to do.)

That was easy: Windows XP deals with MBR, and MBR is trivial.

The problematic part with supporting Ubuntu is that its boot loader, GRUB2, actually appears to be too smart for its own good. When I installed the 32-bit version of Ubuntu 12.04, it (by design) did not deploy an EFI boot loader. Instead of deploying grub-efi, it deployed grub-pc. This is great; the rEFIt boot loader that I use when booting the physical machine picked up on this and allowed me to pick the Linux system. It does, unfortunately, depend on turning off the “bootable” flag in the MBR with fdisk. This appears to be a limitation (or a feature?) of the Apple-provided CSM.

That’s all great. But when I tried to load this system with VirtualBox using default options, I got nothing. Zip. Nada. A black screen and that’s it.

Why? I have no idea. But I presume that GRUB2 picked up on the GPT partition table and then got very confused by it on a non-EFI system. Why? Again, no idea. Switching to EFI got the setup to work — but only after installing grub-efi and deploying it to the appropriate place.

Creating virtual disk

You want to have a virtual disk that includes all relevant GPT information.

As I was originally playing with my previous MBR-based method, and the EFI+GPT method evolved from it, my script includes some dd’ing of GPT data: the first 40 sectors of the disk, and the last 33.

IMPORTANT: Always check device names (GPT partition IDs) with sudo diskutil list /dev/disk0. Always check MBR partition IDs with sudo fdisk /dev/disk0. My setup includes Windows 8 and is very weird.

IMPORTANT: This script is NOT intended to be run as-is! Read it to learn what’s going on, triple check every single number, ensure you understand every single line, customize it for your machine and only then think twice before running anything below. Playing with partitions, with disk devices etc is dangerous. Ensure you have backups of everything that’s even remotely significant.

I personally have a Time Machine backup of my important data under OS X, every important project is stored on online code hosting, and everything else under physically installed Ubuntu and Windows 8 is not important. What’s your situation? Can you afford to lose data?

NOTE NOTE NOTE: As of OS X 10.9 Mavericks, detection of the disk size in blocks is broken in the script below. I haven’t updated it; when I needed to run it, I read the size from a manually-run fdisk and put it in the appropriate place. (n.b. this could possibly be doable from native code: ioctl(fd, DKIOCGETPHYSICALBLOCKSIZE, &block_size); see http://stackoverflow.com/a/15947809/39974)
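If you want to script the fdisk route instead, something like this should work (an untested sketch; the output format is an assumption based on my machine):

# Untested sketch: OS X's fdisk prints a header such as
#   Disk: /dev/disk0   geometry: 60801/255/63 [976773168 sectors]
# so pull the sector count out of the square brackets.
DISKSIZE=`sudo fdisk /dev/disk0 | sed -n 's/.*\[\([0-9]*\) sectors\].*/\1/p'`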

# ALWAYS check devices with:
# sudo diskutil list /dev/disk0

# ALWAYS check MBR partition IDs with:
# sudo fdisk /dev/disk0

EFISYSTEMPARTITIONOSXDEVICEID=1
RECOVERYPARTITIONOSXDEVICEID=3
LINUXPARTITIONOSXDEVICEID=4
LINUXPARTITIONMBRDEVICEID=3

sudo chmod 777 /dev/disk0s$EFISYSTEMPARTITIONOSXDEVICEID
sudo chmod 777 /dev/disk0s$RECOVERYPARTITIONOSXDEVICEID
sudo chmod 777 /dev/disk0s$LINUXPARTITIONOSXDEVICEID

sudo VBoxManage internalcommands createrawvmdk -filename PhysicalDisk.vmdk -rawdisk /dev/disk0 -partitions $EFISYSTEMPARTITIONOSXDEVICEID,$RECOVERYPARTITIONOSXDEVICEID,$LINUXPARTITIONOSXDEVICEID


# (primary mbr=1, primary gpt header=1, primary gpt table = 32, extra = 6) * 512
PRIMARY=40
sudo dd bs=512 count=$PRIMARY if=/dev/disk0 of=PhysicalDisk-pt.vmdk

# secondary gpt table = 32 + secondary gpt header = 1
# see http://7bits.nl/blog/2012/01/02/mac-gpt-partition-table-recovery
SECONDARY=33
DISKSIZE=`diskutil info disk0 | grep "Total Size: .*exactly .* 512-Byte-Blocks" | sed 's/.*Total Size:.*exactly \(.*\) 512-Byte-Blocks)/\1/'`
OFFSET=`calc $DISKSIZE-$SECONDARY`
OFFSET=`echo $OFFSET|sed 's/^ *//'` # ltrim

sudo dd bs=512 count=$SECONDARY iseek=$OFFSET oseek=$PRIMARY if=/dev/disk0 of=PhysicalDisk-pt.vmdk conv=notrunc

First, note the chmods. These will be required after each reboot. OS X only allows root to access the disks (for very important security reasons); VirtualBox does not run with root privileges. I don’t do this lightly; be VERY mindful that this is actually creating a local security hole, allowing user processes to read and even write to the disk.

Next, if you take a look at PhysicalDisk.vmdk, it’s a text file. You can see how various virtual disk sectors are mapped to various physical disk sectors, to “zero”, or to PhysicalDisk-pt.vmdk. (Please do check that you can find a section that matches this; if not, something went wrong in VBoxManage, and you should delete both .vmdk files.)
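For illustration, the extent lines in the descriptor look something like this (the sector counts here are made up; yours will differ):

# Illustrative extent table from a raw-partition VMDK descriptor.
# "RW <sectors> FLAT <file> <offset>" maps to a real file or device;
# "RW <sectors> ZERO" stands in for partitions you did not expose.
RW 40 FLAT "PhysicalDisk-pt.vmdk" 0
RW 409600 FLAT "/dev/disk0s1" 0
RW 233638336 ZERO
RW 1269536 FLAT "/dev/disk0s3" 0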

The dd might not be necessary, but it ensures that whatever’s in the GPT is not “accidentally” changed by VirtualBox’s VBoxManage.

Could one map even these 40 initial and 33 trailing sectors to the actual physical disk? Sure. But, why risk anything?

Installing rEFIt

Download rEFIt from its homepage. The install instructions say all you need to do with the latest version, 0.14, is open the installer package and hit “Next” repeatedly.

Installing grub-efi

I decided to reuse the EFI System Partition. I could have just as easily used the OS X system partition; Apple ships an HFS+ driver, so the EFI subsystem can boot directly from the system partition.

The thing is, Ubuntu can’t write to the HFS+ partition, so it’s slightly easier to reuse the EFI System Partition.

What am I risking? Well, Apple might wipe this partition clean in an OS update. I hope they won’t.

IMPORTANT: The following can mess up GRUB. I can still boot using the “BIOS” GRUB2, but your mileage may vary.

What follows is inspired by Rod Smith’s EFI-Booting Ubuntu on a Mac.

  1. Boot physical Ubuntu.
  2. sudo apt-get install grub-efi – This removed grub-pc on my machine, although I still seem to have the ability to boot using BIOS. (Anything else would be… troublesome.)
  3. sudo mkdir /boot/efi – This is the place where we’ll mount the EFI System Partition.
  4. sudo mount /dev/sda1 /boot/efi
  5. sudo mkdir -p /boot/efi/EFI/Ubuntu – Apple doesn’t ship an \EFI folder. We’ll create it, along with the “vendor” directory for Ubuntu.
  6. sudo grub-install /dev/sda1 – This should install grub-efi to \EFI\Ubuntu.
  7. ls -al /boot/efi/EFI/Ubuntu – You should see two files from Ubuntu here.

It’s important to understand: 32-bit Ubuntu installs 32-bit GRUB2. This will not be bootable on a 64-bit capable Mac. This is solely useful for VirtualBox.

So, ensure that you can still use the BIOS GRUB2, or have an alternative boot method, or else you’re now converting your physical installation into a VirtualBox-only installation!

Creating virtual machine

I don’t have a script for this one. Go back to OS X, go to VirtualBox GUI and create an Ubuntu-type virtual machine. Don’t pick the 64-bit version; this changes the type of EFI that the virtual machine will use!

Pick the previously created PhysicalDisk.vmdk while creating the machine.

Now edit the settings. Right click on machine name, pick “Settings”, and change the machine to be an EFI machine on the System tab. So: right click [machine name]->Settings->System->Motherboard->Enable EFI (special OSes only).

Don’t boot yet! Did you chmod the disk devices? Remember, you rebooted. Please sudo chmod 777 all partition devices in /dev (and be mindful that this is a security hole you’re creating, which you might somehow avoid with UNIX user groups, but meh).

After this point, do not recreate the PhysicalDisk.vmdk without keeping in mind that this file includes disk image IDs in several places. VirtualBox keeps track of the disk images, and will NOT be happy if the ID changes.

So, done now? Great. Boot.

You’ll be shown the EFI shell. Hoorah!

Now, let’s change to the EFI System Partition‘s filesystem and boot GRUB2.

fs0:
cd EFI\Ubuntu
boot.efi

This should show you your physical machine’s GRUB menu and the booting should move on. Observe the disk light on the bottom of VirtualBox’s window; if it stops flickering for longer than 15 seconds, and Ubuntu does not boot, you can presume you have some sort of an issue.

Note that the virtual machine has different hardware than your physical machine; for example, the NVIDIA graphics driver does not work for me. I get the console, but not X11. It would be trivial to fix (replace the selected driver with vesa or something similar in xorg.conf), but I don’t care: I just SSH into the machine and tunnel X11 to XQuartz on OS X. I don’t need Unity: I need the ability to work on my code and display the X windows.

So, this works for me. Huzzah!


Small updates

fstab

Add this to /etc/fstab (based on Rod Smith’s post, too):

/dev/sda1       /boot/efi       vfat    ro,fmask=133    0       0

Alternatively, change that ro to rw to get the partition to mount read-write; this may be important for grub updates.

grub-efi-amd64

Ubuntu 12.04 also ships with grub-efi-amd64.

sudo mount /boot/efi -o remount,rw # if not already mounted read-write
sudo apt-get install grub-efi-amd64

Don’t forget to change machine type to “Ubuntu (64-bit)” to update the EFI type.

Note: grub-efi-amd64 conflicts with grub-efi-ia32 and grub-efi, so you’ll end up losing the 32-bit version of the boot loader. This may or may not conflict with the ability to boot via BIOS/CSM — I haven’t tested this yet.

iCloud Bookmarks: WebDAV + XBEL format

Just a few small notes and random thoughts.

iCloud Bookmarks are stored in WebDAV, as evidenced by the name of the syncing process: SafariDAVClient. Some googling revealed that you can find the URL by opening the ~/Library/Safari/Bookmarks.plist file. (If you don’t have Xcode, convert it using plutil -convert xml1.) Each bookmark has a remote XBEL file associated with it; so just take the part of the URL up to and including .../bookmarks.
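Something like this, on a copy of the file (a sketch — the URL pattern is a guess, adjust to whatever your plist actually contains):

# Work on a copy so Safari's own plist stays untouched.
cp ~/Library/Safari/Bookmarks.plist /tmp/Bookmarks.plist
plutil -convert xml1 /tmp/Bookmarks.plist
# The URL pattern is a guess; adjust to your file's contents.
grep -o 'https://[^<]*/bookmarks[^<]*' /tmp/Bookmarks.plist | sort -u | head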

Now you have a URL you can browse using a WebDAV client. I didn’t have any success with Cyberduck; I kept getting an ‘Unauthorized’ error. I presume it’s confused by the @ symbol in the username (the email address, y’know), but I don’t know.

Mac OS X’s built-in WebDAV client (also known as ‘Finder’s WebDAV client’) can correctly mount the remote filesystem, but you can’t view the file list in Finder. Once it’s mounted, listing it through Terminal works very well.
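The same mount is scriptable via mount_webdav, if you prefer Terminal end to end (a sketch; substitute the URL you extracted above):

# Sketch: mount the bookmarks share and list it from Terminal.
# The URL below is a placeholder -- use the one from Bookmarks.plist.
mkdir -p /tmp/icloud-bookmarks
mount_webdav -i https://example.icloud.com/123456/bookmarks /tmp/icloud-bookmarks
ls -l /tmp/icloud-bookmarks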

It’s interesting that XBEL appears to have originated within the Python community, as evidenced by its XML namespace: http://www.python.org/topics/xml/xbel/.

Apple really, really, REALLY likes to use existing standards wherever possible. The original push notification system was done via XMPP, for example, but no one dug deep into it; and FaceTime is suspected to be RTP deep inside. The strategy may be embrace, extend and lock down, but it’s OK. If I start having problems with it, the data is still accessible; I can still take it out.

And not just because they use WebDAV and XBEL for their cloud storage, but because I actually have the data on my local machine. Which brings me to another topic: knowing they use WebDAV and XBEL would have been useless, since you ordinarily would not be touching that directly. It would have been far better to actually mess with Safari, whose bookmark sync support is more-or-less documented. Via SyncServices.framework. Except it is not. Since that magical underused technology has been deprecated in 10.7.

iPhone 5 is a transition device

So you’re Apple.

And not the magical, mythical, always inspired Apple. You’re Apple that has been a market leader for a while, and is now being overtaken by new devices. You’re the Apple that is clinging to a philosophy that has served you well: don’t make your customers inconvenienced, and keep backward compatibility as long as possible. Change stuff when needed, and do so gradually.

Try not to break stuff

Developers often get this treatment from Apple. You are gradually eased out of your old practices and “encouraged” to adopt new ones. There are rarely any sudden changes. Even the UDID was not suddenly thrown out of the OS; it was first deprecated, and then thrown out in iOS 6 after months and months of easing developers into new practices. I can’t really remember significant mistreatments of developers, aside from weird App Store rejections and insistence on a quite limiting sandboxing model. (Even there, a line had to be drawn somewhere in order to make everyone happier and enforce better development practices.)

Aside from developers, Apple also doesn’t want end users, its primary customer base, to be inconvenienced. iOS 6 broke several things in several apps I work on when I first launched them in the Simulator; I completely understand what went wrong, and most of what Apple changed is for the better. (I’m unsure how force-crashing the app when no auto-rotation target is successfully calculated counts as a change for the better, though.)

However, in order not to break existing apps, Apple seems to have employed the model from OS X where behaviors occasionally change based on the SDK the app was built against. I’m pretty convinced that several classes that ship in UIKit are loaded differently based on whether or not the iOS 6 SDK was used to compile the app.

Don’t reject change

However, there’s another thing Apple does that is easy to notice: they are not resistant to change. Microsoft has tried very hard to keep backward compatibility as far back into history as possible. Apple has taken many opportunities to shed legacy code and practices. Mac OS to Mac OS X was a chance to throw away some legacy Macintosh Toolbox practices and establish Carbon as a nearly-Toolbox-but-not-quite replacement, with somewhat better structured code. The PowerPC to Intel transition was a chance to break some code, but not a lot. Intel to Intel 64-bit was a chance to throw away most of Carbon and to introduce a new, but incompatible, runtime for Objective-C — one that is extended to this day without breaking existing apps. (One of the major things this new runtime brought was the non-fragile ABI, which basically means Apple can add stuff to Objective-C classes without breaking existing binaries.)

Apple recently deprecated development for armv6 devices, and with Xcode 4.5 and iOS 6 decided to completely do away with support for them. This is surely inconveniencing some developers who, like me, up until then used an iPhone or iPhone 3G as their main iOS device, and inconveniencing users who are no longer going to be able to get new versions of apps. And unless they backed up the .ipa files, they won’t be able to install versions that did support their device.

They are not afraid to change stuff. They are careful to ease developers in. And the myth that Apple magically “knows” what users want has been dispelled recently. And of course, it’s only logical that Apple, its engineers, as well as its management, do live in the same world that we do.

They see what people want because they observe. They may end up deciding on a price point based on profit margins, profit now instead of profit later, and they may end up deciding what goes into the device based on what they can realistically construct best. But they definitely, definitely do watch what people want.

People want bigger devices

Remember 2007 and 2008, even 2009? Remember how people complained about iPhone’s size?

Jump to 2011 and 2012. The iPad has come, other tablets have come, and people have seen that they, in reality, want bigger devices. Something rare has happened: Apple has missed what people want. (The G4 Cube was another instance, and Apple’s insane prices in ‘less wealthy’ countries may be others.)

A controversial subject in ‘certain’ countries in the world, evolution theory, does in fact work. Perhaps we cannot go to the past to see that Earth was, in fact, inhabited by life forms that changed in response to environmental changes and that mutated; perhaps denialists can produce odd explanations for the abundance of evidence we have in favor of evolution theory.

But ‘genetic algorithms’ definitely do show that mutation, recombination, and overall improvement in small steps — extolling beneficial adaptations, ignoring harmless ones, and shunning harmful ones — actually does work.

Android has flooded the market and evolved.

Adaptation

With Android, there were a lot of mutations. It started with a poor product and slowly grew. Various screen sizes, various input devices, various launchers, using stock versus using a modified OS — all these things were mixed, matched, recombined, changed, and in the end we got devices like the Galaxy S3 and the Galaxy Note.

I have not owned an Android device, nor do I foresee that I will in the near future. I have played only a little with several tablets, and my sister has an Android phone. I’m utterly unimpressed by the environment itself. I’m occasionally impressed by what openness has allowed, while at the same time unimpressed by the flooded Android Market (now Google Play), with applications not being vetted, with applications that suspiciously request far more permissions than they need, with abhorrent and unnecessary NFC…

Yet a good point is being made.

All these changes, mutations, mixes and matches have produced several outstanding devices, and widened our horizons.

Android may be an initially stolen (and initially poorly stolen, at that) product. What amounts to industrial espionage by a certain former board member at Apple has nonetheless produced a vibrant smartphone market that even Apple will learn from. Notification Center is one example where Android took the lead: a developer for jailbroken iOS systems created a notification center, and he ended up being hired by Apple to replicate it in an even better way than he did on jailbroken 4.x systems.

(Note that iOS itself has taken a lot of, shall we say, ‘hints’ from the old Palm OS, and from Pocket PC and Windows Mobile — primarily packing a lot of UI ideas into a highly animated and fluid, NeXT-derived, UNIX-based operating system.)

What’s the future bringing? Screen size changes!

Apple did a great thing for developers when the retina screen came out. The change was so radical that they needed all developers to upscale apps to 2x their size. So they did the same thing Palm did, and that Microsoft sometimes did: they simply said that a pixel is not a pixel; a pixel is instead a ‘point’. They did something different, too: they said that the upscaling would be solely at a 2x rate — not 1.5x, not 2.5x. They also made use of sub-‘point’ precision easy (albeit occasionally inconsistent, if you dig deep enough — for example, CATiledLayer does some weird stuff).

However, it turns out that people also want screens of different sizes, aspect ratios et al. It turns out that even hard-core Apple fanboys clamor for a bigger iPhone for easier reading, even while saying that a bigger size would “not fit the hand as well as the current size”. (And I don’t agree, considering that — surprise, surprise! — people have different hand sizes, hence the current size doesn’t fit everyone equally well.)

How would you bring different screen sizes?

On OS X, developers have long been trained to use the “struts and springs” model to make user interface elements (views, we call ’em) respond to the size of their parent view. For example, when a window resizes, many sub-elements resize as well.

iOS also has the struts-and-springs model available. Either in Interface Builder or manually via the -setAutoresizingMask: method, a developer can quite easily make the UI resizable.

Except many people never bothered beyond the absolutely necessary — for example, designing without a navbar even when the navbar could actually appear at runtime. They never had to bother, because — due to the inflexibility of the window — they never had to deal with multiple screen sizes. One of the only two places where the screen size changed radically was the transition to iPad, which in itself required a radically different user interface model (and the use of different nib files was VERY encouraged).

The other place where window and view sizes varied was the very, VERY underused TV-out display support for every UIWindow — which has existed since iOS 3.2.

Training developers to use autoresizing masks

So if you ever wanted to train people to use autoresizing masks in preparation for a variable screen size, how would you approach the problem, Apple-style?

Obviously, you first need to deprecate the old-style development; that is, iOS app development with minimal autoresizing support. If you can’t actually force devs to use it (and with autoresizing masks, you can’t), then nudge them in the right direction.

One way is to explore what else is blocking good use of autoresizing and make sure you fix that problem. In case you can’t actually fix the problem elegantly, still provide the solution as a way to nudge developers to notice you really, really want to emphasize autoresizing.

The final step involves the actual introduction of a device that uses a radically different screen size, without inconveniencing users by blocking the operation of old apps.

What are the first two steps?

The first step involved the release of the iPhone 5. This was a definite wake-up call that you really, really need to iron out those few places where you managed to mess up autoresizing. You don’t need to fix anything on the x axis, but the y axis is fair game. On launch day of the iPhone 5, you realized that the rumors were true, and whatever you had read about Apple never changing the resolution except by scaling by powers of two was — wrong. You, the dev, now needed to adapt to everything other platforms already had to: resolutions change, and not just by scaling in powers of two. DPI (or, perhaps, PPP — pixels per ‘point’) changes in powers of two, but the resolution does not. In fact, you realize that you were groomed to expect the resolution to stay fixed at 480×320 points, but suddenly that’s no longer true.

Now you are not in the world of 480×320, but in the world of 568×320, and your every assumption is fair game.

However, if you don’t ship that one file called Default-568h.png (or, as Apple nudges you to call it, Default-568h@2x.png), you’re getting a pass. We won’t screw your app over just yet. This is just an introduction… but you’d better adapt! You don’t want black bars on the top and bottom of the user’s device. And think wider* too: iPad users get the black box around iPhone apps, as well as a lower-resolution app. Don’t you think iPhone 5 users deserve better, just like iPad users do?

* See what I did there?

What’s the second step? Something called “automatic layout”, force-fed to developers by default in Xcode 4.5. No matter that it’s about as usable as a brick — it is quite powerful, and if the dev takes the time to correctly build the UI using automatic layout, localization gets quite a bit better. It’s completely unusable in its current state, though, so I suspect a lot of devs (especially those who don’t localize their apps widely) will stick with the old-but-reliable struts-and-springs model.

As I said, the second step may not involve creation of something usable. It does involve nudging devs in the right direction.

Future devices

So the third step is, of course, wholehearted acceptance of what Apple already knows, but developers may only suspect: variable resolution displays.

Apple has heard folks who clamor about Galaxy Note’s size. Apple has realized that there’s a sweet spot between iPhone and iPad screen sizes.

The iPad mini is further proof that Apple is willing to play with screen sizes. It hasn’t played with the resolution yet, but I feel it’s only a matter of time before Apple releases a non-4:3 iPad. Perhaps with a built-in phone.

Perhaps a phone which can also run UIUserInterfaceIdiomPad applications adapted for the new resolution?

Maybe we’ll also see a Default.png mini-revolution, where we get to specify to Apple how they should resize the Default.png image, or where devs are allowed to run some very simple Core Graphics code upon app installation in order to draw the image, instead of shipping various images for various resolutions and orientations. For a completely universal app in full compliance with best practices for Default.png — an app that is iPhone 5- and iPad 3-aware and still supports the iPhone 3GS — you have to ship no less than 5 images. If my prediction comes true, it’s only going to get worse.
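For reference, here’s my count of those five (the exact file names are from memory — double-check them before shipping anything):

Default.png                    # non-retina iPhone, e.g. 3GS
Default@2x.png                 # retina 3.5-inch iPhone (4/4S)
Default-568h@2x.png            # iPhone 5
Default-Portrait~ipad.png      # non-retina iPad
Default-Portrait@2x~ipad.png   # retina iPad (iPad 3)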

I have absolutely no idea how Android solves the problem, nor am I really interested.

How hard is it to adapt?

I don’t think that adapting will be too hard. What one must take into account is that, all of a sudden, we will have to make wider use of the resizable-image-with-cap-insets support introduced in iOS 5 and enhanced in iOS 6. I suspect Interface Builder will be enhanced to permit easier creation of such images (which currently have to be constructed in source code). Aside from ‘just’ skinning buttons (even that was avoidable by having the app’s artist export each and every button with its static text already painted on), we’ll find out that suddenly there are a lot more backgrounds we have to skin, including the table view backgrounds.

But I’m almost sure that this revolution will happen.

If anything, Scott Forstall leaving Apple is probably going to make this easier. Who knows — perhaps he was one of the people pushing for never changing the UI resolution, based on his judgement of the platform’s ability to withstand such a change. Perhaps he didn’t even care, and the decision was made by another manager or engineer.

Whatever the case — the point of this entire rant is that varying screen resolutions are coming. Perhaps not as wildly as on Android, but we will see them change. We may even see a sort of convergence of iPhone and iPad, with the iPad pushing the iPod touch out of the game.

It would be in line with the lineup simplification that Apple likes to do every couple of years. So just as the Mac Pro is probably about to die, perhaps the iPod touch will die, too.

A possible future lineup?

          Professional   Consumer
Mobile    iPhone         iPad
Desktop   iMac           Mac mini
Portable  MacBook Pro    MacBook Air
TV        Apple TV HD    Apple TV

What do you think?