Building Mach-O bundles which work on both macOS and iOS

This post contains notes on how I automated my CI to generate a Mach-O bundle containing code targeting macOS on x86_64 and iOS on ARMv7.

Setting up a working cross toolchain on macOS

The only way to get the iPhoneOS SDK seems to be to install Xcode, so a simple xcode-select --install won’t be enough. We can install Xcode the regular way via the App Store or, if you only have SSH access, by downloading the installer from Apple’s developer portal.

The good news is that’s all you need to do. Xcode ships with a full cross toolchain for every one of Apple’s OSes and multiple architectures, including ARM and x86_64. No need to build a cross toolchain yourself.

Setting up the correct sysroot

You need to tell the toolchain where the headers and libraries for your target system are. This is called the sysroot in GCC and Clang parlance.

$ clang ... -isysroot <path> ...

You can use the command xcode-select -p to get the path’s prefix. Since we’re interested in macOS and iPhoneOS, we would have:

  • <base>/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk
  • <base>/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS12.1.sdk

You shouldn’t need to set any sysroot manually when building for macOS on your own macOS machine. It’s useful when you want to build against a specific version of the MacOSX SDK.
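Rather than hardcoding the SDK paths, xcrun can print the sysroot for a given SDK directly. The SDK names below are the standard ones shipped with Xcode:

```shell
# Print the sysroot paths for the macOS and iPhoneOS SDKs
$ xcrun --sdk macosx --show-sdk-path
$ xcrun --sdk iphoneos --show-sdk-path
```

This also avoids pinning a version number like iPhoneOS12.1 in your build scripts, since xcrun resolves whatever SDK the installed Xcode provides.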

Specifying what architecture to build for

When generating binaries for the Mach-O ABI, you can target additional architectures by repeating the -arch switch. In my case I was interested in ARMv7 and x86_64.

In projects where you have the same sysroot for both architectures, you can just do the following:

$ clang ... -arch armv7 -arch x86_64 ...

As I said earlier, you have two distinct sysroots when targeting iPhoneOS and macOS, so you can’t build the final bundle in a single command.
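Concretely, that means one compile pass per platform, each with its own sysroot and -arch flag. A sketch, with hypothetical source and output names:

```shell
# macOS slice, x86_64 only
$ clang -arch x86_64 \
    -isysroot "$(xcrun --sdk macosx --show-sdk-path)" \
    -o MyMiddleware.macOS-x86_64.bundle ...

# iPhoneOS slice, ARMv7 only
$ clang -arch armv7 \
    -isysroot "$(xcrun --sdk iphoneos --show-sdk-path)" \
    -o MyMiddleware.iphoneOS-ARMv7.bundle ...
```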

Merging Mach-O containers

What I did was generate both executables separately, reusing a common Makefile, and then merge them with the lipo command.

$ lipo -create -output <output-path> <input-path>...

In my case I had something like that:

$ lipo -create -output MyMiddleware.bundle \
  MyMiddleware.macOS-x86_64.bundle \
  MyMiddleware.iphoneOS-ARMv7.bundle
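You can then verify that the resulting fat bundle contains both slices with lipo -info (or lipo -detailed_info for more details):

```shell
# Lists the architectures present in the fat Mach-O file
$ lipo -info MyMiddleware.bundle
```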

Accessing an application's default credentials from a container running in GCE

[This is an adapted version of an answer I posted on StackOverflow.]

Containers running inside Google Compute Engine (GCE for short) are already authenticated and can simply ask GCE for a new access token when the last one has expired. The application running inside the container just has to detect whether it’s running from within GCE, and if so, fetch a new access token from a special URI.

I analyzed Ruby’s open-source implementation, which can be accessed here.

Where to find the access token

GCE’s official documentation states that one can obtain the access token at multiple places, and that these places have to be checked in a specific order:

  1. The environment variable GOOGLE_APPLICATION_CREDENTIALS is checked. If this variable is specified it should point to a file that defines the credentials. […]

  2. If you have installed the Google Cloud SDK on your machine and have run the command gcloud auth application-default login, your identity can be used as a proxy to test code calling APIs from that machine.

  3. If you are running in Google App Engine production, the built-in service account associated with the application will be used.

  4. If you are running in Google Compute Engine production, the built-in service account associated with the virtual machine instance will be used.
  5. If none of these conditions is true, an error will occur.

That’s exactly what the Ruby implementation does in its get_application_default method:

  1. the GOOGLE_APPLICATION_CREDENTIALS environment variable is checked,
  2. then the well-known gcloud credentials path is checked,
  3. then the system default path is checked,
  4. finally, if nothing was found and we are on a compute instance, a new access token is fetched.

def get_application_default(scope = nil, options = {})
  creds = DefaultCredentials.from_env(scope) ||
          DefaultCredentials.from_well_known_path(scope) ||
          DefaultCredentials.from_system_default_path(scope)
  return creds unless creds.nil?
  raise NOT_FOUND_ERROR unless GCECredentials.on_gce?(options)
  GCECredentials.new
end

Detecting GCE environment

The on_gce? method shows how to check whether we are on GCE by sending a GET (or HEAD) HTTP request to http://169.254.169.254. If there is a Metadata-Flavor: Google header in the response, then it’s probably GCE.

def on_gce?(options = {})
  c = options[:connection] || Faraday.default_connection
  resp = c.get(COMPUTE_CHECK_URI) do |req|
    # Comment from: oauth2client/client.py
    #
    # Note: the explicit `timeout` below is a workaround. The underlying
    # issue is that resolving an unknown host on some networks will take
    # 20-30 seconds; making this timeout short fixes the issue, but
    # could lead to false negatives in the event that we are on GCE, but
    # the metadata resolution was particularly slow. The latter case is
    # "unlikely".
    req.options.timeout = 0.1
  end
  return false unless resp.status == 200
  return false unless resp.headers.key?('Metadata-Flavor')
  return resp.headers['Metadata-Flavor'] == 'Google'
rescue Faraday::TimeoutError, Faraday::ConnectionFailed
  return false
end
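The same check can be reproduced by hand with curl; the short timeout mirrors the one in the Ruby code:

```shell
# On GCE this prints a "Metadata-Flavor: Google" response header;
# anywhere else it fails to connect or times out.
$ curl -i --max-time 1 http://169.254.169.254
```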

Fetching an access token directly from Google

If the default credentials could not be found on the filesystem and the application is running on GCE, we can request a new access token without any prior authentication. This is possible thanks to the default service account, which is created automatically when GCE is enabled in a project.

The fetch_access_token method shows how, from a GCE instance, we can get a new access token by simply issuing a GET request to http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token.

def fetch_access_token(options = {})
  c = options[:connection] || Faraday.default_connection
  c.headers = { 'Metadata-Flavor' => 'Google' }
  resp = c.get(COMPUTE_AUTH_TOKEN_URI)
  case resp.status
  when 200
    Signet::OAuth2.parse_credentials(resp.body,
                                     resp.headers['content-type'])
  when 404
    raise(Signet::AuthorizationError, NO_METADATA_SERVER_ERROR)
  else
    msg = "Unexpected error code #{resp.status}" + UNEXPECTED_ERROR_SUFFIX
    raise(Signet::AuthorizationError, msg)
  end
end

Here is a curl command to illustrate:

$ curl \
  http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token \
  -H 'accept: application/json' \
  -H 'Metadata-Flavor: Google'
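The response is a JSON document containing the token and its expiry. If jq is available, the token itself can be extracted directly, for example:

```shell
# Extract just the access_token field from the metadata server's JSON response
$ curl -s \
  http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/token \
  -H 'Metadata-Flavor: Google' | jq -r .access_token
```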

Setting up Arch Linux for Android development

Install Android Studio

You can install the android-studio AUR package.

$ wget https://aur.archlinux.org/cgit/aur.git/snapshot/android-studio.tar.gz
$ tar -xzf android-studio.tar.gz
$ cd android-studio
$ makepkg -sri

Add udev rule

In order for adb to work, one needs to create the appropriate permissions by adding special udev rules.

Create the file /etc/udev/rules.d/51-android.rules with the following content:

SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", MODE="0666", GROUP="adbuser"
SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", ATTR{idProduct}=="6765", SYMLINK+="android_adb"
SUBSYSTEM=="usb", ATTR{idVendor}=="05c6", ATTR{idProduct}=="6765", SYMLINK+="android_fastboot"

You need to create the adbuser group and add yourself to it.
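Something like the following should do; adbuser is the group name referenced in the rules above:

```shell
$ sudo groupadd adbuser
$ sudo usermod -aG adbuser "$USER"
# Log out and back in (or run `newgrp adbuser`) for the new group to take effect.
```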

Here I used the codes 05c6 for Qualcomm and 6765 for the OnePlus One. These codes can be found on the internet. One can also find them using lsusb:

$ lsusb
# ...
Bus 001 Device 007: ID 05c6:6765 Qualcomm, Inc.
# ...

Once the rules are created, you need to reload them. The documentation I followed used udevadm, but that didn’t work for me. Instead I did the following:

$ sudo systemctl restart systemd-udevd.service

Notes on OpenJDK

Android Studio warns that it may be slow or unstable with OpenJDK, so one might want to install Oracle’s JDK instead. I’m giving OpenJDK a try for now.