Gemini: First thoughts

I just got my hands on my Gemini yesterday. I have been looking forward to this campaign delivering for over a year (I was so excited I managed to be backer #5). It’s a spiritual successor to the Psion devices I grew up with, and that makes it a must-have for me - even if my ultraportable Mac and iPhone have seemingly eroded any real need. So here are my first thoughts on the Planet Computers Gemini.

The elephant in the room?

A lot of the campaign backers seemed surprised and confused by the form factor. It uses mobile phone hardware and so technically can make phone calls (if you get the 4G edition). But its true parentage is the palmtop. It’s a clamshell - a teeny tiny ARM laptop. Yet people have been horrified that you can’t see caller ID without opening it and that there isn’t a 32MP super camera slapped on the front.

Personally, even if the Gemini were perfect and the software complete, it probably wouldn’t ever be my ‘daily driver’ mobile phone. But it might mean I stop carrying my laptop around to meetings.


It was very well packaged - almost Apple level. The hinge mechanism on the box has a magnetic lock, so it’s easy to open but also closes securely. The only thing that went wrong was that Hermes - as you might expect - played football with it, damaging one corner. Luckily the hardware itself survived.

Packaging 1

Packaging 2

The screen comes with a screen protector. It was so well applied I didn’t notice it at first. With it on, the touch screen worked well but had a weird texture. Eventually I noticed a hole in it and removed it - this was surprisingly hard work. I can’t decide whether the screen protector was supposed to stay on and mine was defective, or whether they need to work on making it easier to remove.


I tend to have realistic expectations of crowdfunding campaigns. This is a small team, and whilst they have an OK budget they have a tight deadline and very, very demanding backers. There is so much the team has had to pull together to get here. In this context the hardware has blown me away. It is far better than I ever expected.

Before first boot

My Gemini feels solid and doesn’t have that ‘cheap Android’ feel I’ve come to expect from most phone manufacturers. It’s got a satisfying weight to it and is well made.

The keyboard might not be to everyone’s taste, but my initial feedback is: well done, Planet Computers. I’ve been typing on it for less than an hour and I’ve already got a decent typing speed. I wrote this blog post on it, and other than occasionally wondering where a symbol was hiding I did not find myself pining for a full-size keyboard at all.

The screen is lovely. The size is unusual, and on top of that there is the odd alignment that I thought might be upsetting. But really I barely notice either - I’m fine with it.

The LEDs on the outside are cute and I hope they stay in the next revision but so far I don’t have a practical use for them.

Charging only works on the left-hand side. I used it with a USB-A Anker charger and it charged just fine. Apparently USB-C chargers might be problematic, but I can’t confirm this (thanks, MediaTek).

I used a USB A-C adaptor and tried using a mouse. It worked just fine.

I’ve not had it long enough to get a feel for battery life yet.


The stock software is Android. I’m struggling so far to rationalise whether the things that frustrate me are just Android (I’m an Apple fanboy) or genuine rough edges that need fixing. I haven’t tried any other OS yet.

The Gemini boot graphics are gorgeous. I’m almost sad that I won’t see them very often.

First boot

The initial setup was buggy. The biggest issue was simply that Android really didn’t like being in landscape.

I had thought that Material UI was a way to make web apps feel like crappy phone applications (that’s certainly what I use it for), but it turns out it makes native phone apps feel like crappy web apps too (that one is on Google).

I’m used to a single home button (and now zero buttons on the iPhone X), so the ever-present trio of buttons on the right-hand side is annoying. I hope they can go away, replaced by physical buttons or Gemini app bar buttons.

The Gemini launcher is a great compromise, delivering something close to the physical launcher we had on Psion devices.

App bar

I haven’t figured out how to de-Google things yet. Hopefully I can remove the Google search widget and Chrome itself.


Some of the Gemini-specific components, like the updater, are clearly still works in progress, and so far they don’t fit in well with the rest of the device.



Love it, despite the rough edges.

The hardware is pretty fantastic, really. There are definitely things I’d ask for in v2 (bigger screen, less bezel!) but what is there is great.

The software is clearly still being built. I wouldn’t tell non-techies to get one. But it has definitely found a place in my ‘everyday carry’.

I’d love to see a device like this running GNOME and working with GNOME Builder.

Docker, meet firewall - finally an answer

One of the most annoying things about Docker has been how it interacts with iptables. And ufw. And firewalld. Most firewall solutions on Linux assume they are the source of truth, but increasingly that’s not a sensible assumption. This inevitably leads to collisions - restarting the firewall or Docker will end up clobbering something. Whilst it was possible to get things working, it was a pain, and always a bit dirty. I don’t want to have to restart Docker after tweaking my firewall! Recently a new solution presented itself, and it looks like things are going to get a lot better:

In Docker 17.06 and higher, you can add rules to a new table called DOCKER-USER, and these rules will be loaded before any rules Docker creates automatically. This can be useful if you need to pre-populate iptables rules that need to be in place before Docker runs.

You can read more about it in the pull request that added it.

So how do we make use of that? Searching for an answer is still hard - there are three years of people scrambling to work around the issue and not many posts like this one yet. But by the end of this post you will have an iptables-based firewall that doesn’t clobber Docker when you apply it, that Docker won’t clobber either, and that makes it easier to write rules that apply to non-container ports and container ports alike.

I’m starting from an Ubuntu 16.04 VM that has Docker installed but has never had an explicit firewall set up before. If you’ve had any other sort of Docker firewall in place, undo those changes - Docker should be allowed to manage its own iptables rules. And don’t change the FORWARD chain policy from DROP to ACCEPT; there is no need any more. On a clean environment before any of our changes, this is what iptables-save looks like (the container addresses shown here are Docker’s stock defaults):

$ sudo iptables-save
# Generated by iptables-save v1.6.0 on Tue Aug 15 04:02:08 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.18.0.0/16 ! -o br-68428f03a4d1 -j MASQUERADE
-A POSTROUTING -s 172.19.0.0/16 ! -o docker_gwbridge -j MASQUERADE
-A POSTROUTING -s 172.18.0.2/32 -d 172.18.0.2/32 -p tcp -m tcp --dport 9200 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER -i br-68428f03a4d1 -j RETURN
-A DOCKER -i docker_gwbridge -j RETURN
-A DOCKER ! -i br-68428f03a4d1 -p tcp -m tcp --dport 9200 -j DNAT --to-destination 172.18.0.2:9200
COMMIT
# Completed on Tue Aug 15 04:02:08 2017
# Generated by iptables-save v1.6.0 on Tue Aug 15 04:02:08 2017
*filter
:INPUT ACCEPT [174:13281]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [138:16113]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-68428f03a4d1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-68428f03a4d1 -j DOCKER
-A FORWARD -i br-68428f03a4d1 ! -o br-68428f03a4d1 -j ACCEPT
-A FORWARD -i br-68428f03a4d1 -o br-68428f03a4d1 -j ACCEPT
-A FORWARD -o docker_gwbridge -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker_gwbridge -j DOCKER
-A FORWARD -i docker_gwbridge ! -o docker_gwbridge -j ACCEPT
-A FORWARD -i docker_gwbridge -o docker_gwbridge -j DROP
-A DOCKER -d 172.18.0.2/32 ! -i br-68428f03a4d1 -o br-68428f03a4d1 -p tcp -m tcp --dport 9200 -j ACCEPT
-A DOCKER-ISOLATION -i br-68428f03a4d1 -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o br-68428f03a4d1 -j DROP
-A DOCKER-ISOLATION -i docker_gwbridge -o docker0 -j DROP
-A DOCKER-ISOLATION -i docker0 -o docker_gwbridge -j DROP
-A DOCKER-ISOLATION -i docker_gwbridge -o br-68428f03a4d1 -j DROP
-A DOCKER-ISOLATION -i br-68428f03a4d1 -o docker_gwbridge -j DROP
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Tue Aug 15 04:02:08 2017

The main points to note are that INPUT has been left alone by Docker and that there is (as documented) a DOCKER-USER chain set up for us. All traffic headed to a container goes via the FORWARD chain, and the jump at the top of that chain means DOCKER-USER gets to filter the traffic before any of Docker’s own rules are applied.
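As a taste of what that buys you, a single rule in DOCKER-USER can filter every published container port at once - here blocking a whole subnet outright (eth0 is a stand-in for your external interface):

$ sudo iptables -I DOCKER-USER -i eth0 -s 203.0.113.0/24 -j DROP

The rest of this post builds a small firewall around exactly that mechanism.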

A firewall that doesn’t smoosh Docker iptables rules

So, a super simple firewall. Create a new /etc/iptables.conf that looks like this (swap eth0 for your external interface, and the 203.0.113.7 placeholder for whatever IP you want to whitelist):

*filter
:INPUT ACCEPT [0:0]
:FILTERS - [0:0]
:DOCKER-USER - [0:0]

-F INPUT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp --icmp-type any -j ACCEPT
-A INPUT -j FILTERS

-F DOCKER-USER
-A DOCKER-USER -i eth0 -j FILTERS

-F FILTERS
-A FILTERS -m state --state ESTABLISHED,RELATED -j ACCEPT
-A FILTERS -m state --state NEW -s 203.0.113.7/32 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 23 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A FILTERS -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A FILTERS -j REJECT --reject-with icmp-host-prohibited

COMMIT


You can load it into the kernel with:

$ sudo iptables-restore -n /etc/iptables.conf

That -n flag is crucial to avoid breaking Docker.
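A quick sanity check if you’re nervous: count the Docker-managed rules before and after the restore - the count should not change.

$ sudo iptables-save | grep -c DOCKER    # note the count
$ sudo iptables-restore -n /etc/iptables.conf
$ sudo iptables-save | grep -c DOCKER    # same count - Docker rules intact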

What’s going on here?

  • This firewall avoids touching areas Docker is likely to interfere with. You can restart Docker over and over again and it will not harm or hinder our rules in INPUT, DOCKER-USER or FILTERS.

  • We explicitly flush INPUT, DOCKER-USER and FILTERS. This means we don’t end up smooshing two different versions of our iptables.conf together. Normally this flush is done implicitly by iptables-restore, but it’s that implicit flush that clobbers the rules Docker manages. So we will only ever load this config with iptables-restore -n /etc/iptables.conf. The -n flag turns off the implicit global flush, leaving only our manual, explicit flushes. The Docker rules are preserved - no more restarting Docker when you change your firewall.

  • We have an explicit FILTERS chain. It is used by the INPUT chain, but Docker traffic actually goes via the FORWARD chain - which is exactly why ufw has always been problematic. This is where DOCKER-USER comes in: we add a single rule that passes any traffic arriving on the external network interface to our FILTERS chain. This means that when I want to allow my home IP access to every port, I update the FILTERS chain once. I don’t have to add a rule in INPUT and a rule in DOCKER-USER, and I don’t have to think about which part of the firewall a rule will or won’t work in. My FILTERS chain is the place to go. (See the quick check below.)
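To convince yourself, restart Docker and then look at the chains we own - nothing is lost in either direction:

$ sudo systemctl restart docker
$ sudo iptables -S DOCKER-USER   # our jump to FILTERS is still there
$ sudo iptables -S FILTERS       # and so are the rules themselves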

Starting the firewall at boot

You can load this firewall at boot with systemd. Add a new unit - /etc/systemd/system/iptables.service:

[Unit]
Description=Restore iptables firewall rules

[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore -n /etc/iptables.conf
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target


And enable it:

$ sudo systemctl enable --now iptables

The firewall is now active, and it didn’t smoosh your Docker-managed iptables rules. You can reboot and the firewall will come up exactly as it is right now.

Updating the firewall

Pop open the firewall in your favourite text editor, add or remove a rule in the FILTERS section, then reload the firewall with:

$ sudo systemctl restart iptables
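For example, to open up a hypothetical new service on port 8443, the whole change is one new line in the FILTERS section:

-A FILTERS -m state --state NEW -m tcp -p tcp --dport 8443 -j ACCEPT

Restart the unit and the rule is live, with Docker none the wiser.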

Starting services on hotplug

I want systemd to start a service when a USB device is plugged in and stop it when I remove it.

Use systemctl to get a list of units:

$ systemctl
UNIT                                       LOAD   ACTIVE SUB       DESCRIPTION
sys-subsystem-net-devices-gamelink0.device loaded active plugged   PL25A1 Host-Host Bridge

There’s no configuration required here - the .device unit just appears in response to udev events. I’ve previously set up udev rules so that my USB host-to-host cable is consistently named gamelink0, but even without that it would show up under its default name.
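For reference, the naming rule is a one-liner (a sketch - the vendor and product IDs here are the Prolific PL-25A1’s; check yours with lsusb):

# /etc/udev/rules.d/70-gamelink.rules
SUBSYSTEM=="net", ACTION=="add", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="25a1", NAME="gamelink0"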

The simplest way is just to take advantage of WantedBy. In gamelink.service (the script path below is a placeholder for whatever your service actually runs):

$ cat /etc/systemd/system/gamelink.service
[Unit]
Description=Gamelink cable autoconf

[Service]
ExecStart=/usr/bin/python3 /home/john/gamelink.py

[Install]
WantedBy=sys-subsystem-net-devices-gamelink0.device

And then install it:

$ systemctl enable gamelink
Created symlink from /etc/systemd/system/sys-subsystem-net-devices-gamelink0.device.wants/gamelink.service to /etc/systemd/system/gamelink.service.

The WantedBy directive tells systemctl enable to drop the symlink in a .wants directory for the device. Whenever a unit becomes active, systemd looks in its .wants directory to see what other related units need to be started, and that applies to .device units just as much as to .service or .target units. That behaviour is all we need to start our daemon on hotplug.

The BindsTo directive lets us stop the service when the .device unit goes away (i.e. the device is unplugged). Used in conjunction with After=, it ensures the service can never be in the active state unless the device unit is also active.
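Putting all three directives together (with the same placeholder script path as above), the finished unit looks like this:

[Unit]
Description=Gamelink cable autoconf
BindsTo=sys-subsystem-net-devices-gamelink0.device
After=sys-subsystem-net-devices-gamelink0.device

[Service]
ExecStart=/usr/bin/python3 /home/john/gamelink.py

[Install]
WantedBy=sys-subsystem-net-devices-gamelink0.device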

Multi-core twisted with systemd socket activation

With a stateless Twisted app you can scale by adding more instances. Unless you are explicitly offloading to subprocesses you will often have spare cores on the same box as your existing instance. But to exploit them you end up running haproxy or faking a load balancer with iptables.

With the SO_REUSEPORT socket flag, multiple processes can listen on the same port. It isn’t available from Twisted (yet), but with systemd and socket activation we can use it today.
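For context, SO_REUSEPORT itself is tiny: each process sets the flag before bind() and the kernel load-balances new connections across every listener on the port. A raw-socket sketch (Python 3 on Linux, not Twisted):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# With SO_REUSEPORT set before bind(), several processes can bind the
# same address and port; the kernel spreads new connections across them.
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
s.bind(('0.0.0.0', 8080))
s.listen(128)

systemd will do the equivalent for us via ReusePort=yes, handing each worker an already-listening socket.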

As a proof of concept we’ll make a 4-core static HTTP web service. In a fresh Ubuntu 16.04 VM, install python-twisted-web.

In /etc/systemd/system/web@.socket:

[Unit]
Description=Socket for worker %i

[Socket]
ListenStream=8080
ReusePort=yes
Service=web@%i.service

[Install]
WantedBy=sockets.target

And in /etc/systemd/system/web@.service:

[Unit]
Description=Worker %i
Requires=web@%i.socket

[Service]
ExecStart=/usr/bin/twistd --nodaemon --logfile=- --pidfile= web --port systemd:domain=INET:index:0 --path /tmp

Then to get 4 cores:

$ systemctl enable --now web@1.socket
$ systemctl enable --now web@2.socket
$ systemctl enable --now web@3.socket
$ systemctl enable --now web@4.socket

Let’s test it. In a Python shell:

import urllib
import time

# fetch the page once a second; this is Python 2's urllib, which is
# why the logs below show "Python-urllib/1.17"
while True:
    urllib.urlopen('http://localhost:8080/').read()
    time.sleep(1)
And in another terminal you can tail the logs with journalctl:

$ sudo journalctl -f -u web@*.service
Apr 26 02:43:51 ubuntu twistd[10441]: 2017-04-26 02:43:51-0700 [-] - - - [26/Apr/2017:09:43:51 +0000] "GET / HTTP/1.0" 200 2081 "-" "Python-urllib/1.17"
Apr 26 02:43:52 ubuntu twistd[10441]: 2017-04-26 02:43:52-0700 [-] - - - [26/Apr/2017:09:43:52 +0000] "GET / HTTP/1.0" 200 2081 "-" "Python-urllib/1.17"
Apr 26 02:43:53 ubuntu twistd[10444]: 2017-04-26 02:43:53-0700 [-] - - - [26/Apr/2017:09:43:53 +0000] "GET / HTTP/1.0" 200 2081 "-" "Python-urllib/1.17"
Apr 26 02:43:54 ubuntu twistd[10452]: 2017-04-26 02:43:54-0700 [-] - - - [26/Apr/2017:09:43:54 +0000] "GET / HTTP/1.0" 200 2081 "-" "Python-urllib/1.17"
Apr 26 02:43:55 ubuntu twistd[10452]: 2017-04-26 02:43:55-0700 [-] - - - [26/Apr/2017:09:43:55 +0000] "GET / HTTP/1.0" 200 2081 "-" "Python-urllib/1.17"
Apr 26 02:43:56 ubuntu twistd[10447]: 2017-04-26 02:43:56-0700 [-] - - - [26/Apr/2017:09:43:56 +0000] "GET / HTTP/1.0" 200 2081 "-" "Python-urllib/1.17"
Apr 26 02:43:57 ubuntu twistd[10450]: 2017-04-26 02:43:57-0700 [-] - - - [26/Apr/2017:09:43:57 +0000] "GET / HTTP/1.0" 200 2081 "-" "Python-urllib/1.17"

As you can see, the twistd[pid] changes as different cores handle requests.

If you deploy new code you can systemctl restart web@*.service to restart all cores.

systemctl enable will mean the 4 cores are available on next boot, too.
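And if you ever change how many workers you want, a small shell loop keeps the bookkeeping tidy:

$ for i in $(seq 1 4); do sudo systemctl enable --now web@$i.socket; done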

Building the Linux kernel on a Mac inside Docker: Attempt #2

Today’s failure is about xargs. For whatever reason, inside a qemu-user-static environment inside Docker, it can no longer do its part in building a kernel:

  CLEAN   arch/arm/boot
/usr/bin/xargs: rm: Argument list too long
Makefile:1502: recipe for target 'clean' failed
make[2]: *** [clean] Error 126
make[2]: Leaving directory '/src/debian/linux-source-4.7.0/usr/src/linux-source-4.7.0'
debian/ruleset/targets/ recipe for target 'debian/stamp/install/linux-source-4.7.0' failed
make[1]: *** [debian/stamp/install/linux-source-4.7.0] Error 2
make[1]: Leaving directory '/src'
debian/ruleset/ recipe for target 'kernel_source' failed

It looks like I need to patch the kernel’s Makefile to work around some limit qemu is introducing.
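I haven’t tried it yet, but the obvious sketch is to cap how much xargs hands to each rm invocation, keeping it under whatever limit qemu enforces - something along these lines wherever the clean target pipes file lists into xargs:

# untested workaround sketch: cap xargs at 32KB of arguments per invocation
find . -name '*.o' | xargs -s 32768 rm -f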