September 22, 2016

Comments about OARS and CSM age ratings

I’ve had quite a few comments from people stating that using age rating classification values based on American culture is wrong. So far I’ve been using the Common Sense Media research (and various other psychology textbooks) to essentially clean-room implement an algorithm that maps content ratings to appropriate ages.

Whilst I do agree that other cultures have different sensitivities (e.g. smoking in Uganda, references to Nazis in Germany) there doesn’t appear to be much research on the suggested age ratings for those categories in those specific countries. Lots of things are outright banned from sale for various reasons (which the populace may completely ignore), but there don’t seem to be many statistics to back up the various anecdotal statements. For instance, are there any country-specific guidelines that say the age rating for playing a game that involves taking illegal drugs should be 18, rather than the 14 inferred from CSM? Or that the age rating should be 25+ for any game that features drinking alcohol in Saudi Arabia?

Suggestions (especially references) welcome. Thanks!

September 21, 2016

Microsoft aren't forcing Lenovo to block free operating systems
There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This story is sensationalist and untrue, and it distracts from a genuine problem.

The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.

In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

(Why not offer the option to disable it? A user who does would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you want to? The number of people buying these laptops to run anything other than Windows is minuscule.)

Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.

GNOME Software and Age Ratings

With all the tarballs for GNOME 3.22 released, the master branch of gnome-software is now open to new features. Along with the usual cleanups and speedups, one new feature I’ve been working on is finally merging the age ratings work.

[screenshot: the age ratings section in GNOME Software]

The age ratings are provided by the upstream-supplied OARS metadata in the AppData file (which can be generated easily online) and then an age classification is generated automatically using the advice from the appropriately-named Common Sense Media group. At the moment I’m not doing any country-specific mapping, although something like this will be required to show appropriate ratings when handling topics like alcohol and drugs.

At the moment the only applications with ratings in Fedora 26 will be Steam games, but I’ve also emailed every maintainer that includes an <update_contact> email address in the AppData file and also identifies the application as a game in the desktop categories. If you ship an application with an AppData file and you think you should have an age rating, please use the generator and add the extra few lines to your AppData file. At the moment there’s no requirement for the extra data, although that might be something we introduce just for games in the future.
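
For illustration, the rating data is a content_rating block in the AppData XML. The attribute IDs and values below come from the OARS specification, but the particular levels are made up for this example:

<component type="desktop">
  ...
  <content_rating type="oars-1.0">
    <content_attribute id="drugs-alcohol">moderate</content_attribute>
    <content_attribute id="violence-cartoon">mild</content_attribute>
    <content_attribute id="language-profanity">intense</content_attribute>
  </content_rating>
</component>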

I don’t think many other applications will need the extra application metadata, but if you know of any adult-only applications (e.g. in Fedora there’s an application for the sole purpose of downloading p0rn) please let me know and I’ll contact the maintainer and ask what they think about the idea. Comments, as always, welcome. Thanks!

September 20, 2016

libinput and the Lenovo T460 series trackstick

First a definition: a trackstick is also called trackpoint, pointing stick, or "that red knob between G, H, and B". I'll be using trackstick here, because why not.

This post is the continuation of libinput and the Lenovo T450 and T460 series touchpads, where we focused on a stalling pointer when moving the finger really slowly. It turns out that the T460s at least, and possibly others in the *60 series, have another bug causing behaviour that is much worse, but which we didn't notice for ages because we were focusing on the high-precision cursor movement. Specifically, the pointer would just randomly stop moving for a short while (spoiler alert: 300ms), regardless of the movement speed.

libinput has built-in palm detection and one of the things it does is to disable the touchpad when the trackstick is in use. It's not uncommon to rest the hand near or on the touchpad while using the trackstick and any detected touch would cause interference with the pointer motion. So events from the touchpad are ignored whenever the trackpoint sends events. [1]
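
A minimal sketch of that idea (illustrative only, not libinput's actual implementation - the names and structure here are made up):

#include <stdbool.h>
#include <stdint.h>

#define TRACKPOINT_TIMEOUT_MS 300

struct palm_state {
        uint64_t last_trackpoint_event_ms;
};

/* called for every trackpoint event */
static void
note_trackpoint_activity(struct palm_state *state, uint64_t now_ms)
{
        state->last_trackpoint_event_ms = now_ms;
}

/* called for every touchpad event; true means "ignore this event" */
static bool
touchpad_is_suppressed(const struct palm_state *state, uint64_t now_ms)
{
        return now_ms - state->last_trackpoint_event_ms < TRACKPOINT_TIMEOUT_MS;
}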

On (some of) the T460s the trackpoint sends spurious events. In the recording I have, random events appear at 9s, then again 3.5s later, then 14s later, then 2s later, etc. Each time, our palm detection code would assume the trackpoint was in use and disable the touchpad for 300ms. If you were using the touchpad while this was happening, the touchpad would suddenly stop moving for 300ms and then continue as normal. Depending on how often these spurious events come in and the user's current caffeination state, this was somewhere between odd, annoying and infuriating.

The good news is: this is fixed in libinput now. libinput 1.5 and the upcoming 1.4.3 releases will have a fix that ignores these spurious events and makes the touchpad stalls a footnote of history. Hooray.

[1] we still allow touchpad physical button presses, and trackpoint button clicks won't disable the touchpad

September 19, 2016

Understanding evdev

This post explains how the evdev protocol works. After reading this post you should understand what evdev is and how to interpret evdev event dumps to understand what your device is doing. The post is aimed mainly at users having to debug a device, so I will leave out or simplify some of the technical details. I'll be using the output from evemu-record as example because that is the primary debugging tool for evdev.

What is evdev?

evdev is a Linux-only generic protocol that the kernel uses to forward information and events about input devices to userspace. It's not just for mice and keyboards but any device that has any sort of axis, key or button, including things like webcams and remote controls. Each device is represented as a device node in the form of /dev/input/event0, with the trailing number increasing as you add more devices. The node numbers are re-used after you unplug a device, so don't hardcode the device node into a script. The device nodes are also only readable by root, thus you need to run any debugging tools as root too.

evdev is the primary way to talk to input devices on Linux. All X.Org input drivers on Linux use evdev as the protocol, as does libinput. Note that "evdev" is also the shortcut used for xf86-input-evdev, the X.Org driver to handle generic evdev devices, so watch out for context when you read "evdev" on a mailing list.

Communicating with evdev devices

Communicating with a device is simple: open the device node and read from it. Any data coming out is a struct input_event, defined in /usr/include/linux/input.h:


struct input_event {
        struct timeval time;
        __u16 type;
        __u16 code;
        __s32 value;
};
I'll describe the contents later, but you can see that it's a very simple struct.
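
To make this concrete, here is a minimal reader (a sketch with error handling mostly omitted; /dev/input/event0 is just an example path and reading it requires root):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
        struct input_event ev;
        int fd = open("/dev/input/event0", O_RDONLY);

        if (fd < 0)
                return 1;

        /* read() blocks until the device sends data */
        while (read(fd, &ev, sizeof(ev)) == sizeof(ev))
                printf("type %u code %u value %d\n",
                       ev.type, ev.code, ev.value);

        close(fd);
        return 0;
}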

Static information about the device such as its name and capabilities can be queried with a set of ioctls. Note that you should always use libevdev to interact with a device; it blunts the few sharp edges evdev has. See the libevdev documentation for usage examples.
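
For comparison, the same loop through libevdev looks roughly like this (a sketch based on libevdev's documented API, again with error handling omitted):

#include <fcntl.h>
#include <stdio.h>
#include <libevdev/libevdev.h>

int main(void)
{
        struct libevdev *dev = NULL;
        struct input_event ev;
        int fd = open("/dev/input/event0", O_RDONLY);

        if (fd < 0 || libevdev_new_from_fd(fd, &dev) < 0)
                return 1;

        printf("device: %s\n", libevdev_get_name(dev));

        /* blocks until the next event; a return value >= 0 includes
         * the SYN_DROPPED case, which real code should handle */
        while (libevdev_next_event(dev, LIBEVDEV_READ_FLAG_BLOCKING, &ev) >= 0)
                printf("%s %s %d\n",
                       libevdev_event_type_get_name(ev.type),
                       libevdev_event_code_get_name(ev.type, ev.code),
                       ev.value);

        libevdev_free(dev);
        return 0;
}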

evemu-record, our primary debugging tool for anything evdev, is very simple. It reads the static information about the device, prints it and then simply reads and prints all events as they come in. The output is in a machine-readable format but it's annotated with human-readable comments (starting with #). You can always ignore the non-comment bits. There's a second command, evemu-describe, that only prints the description and exits without waiting for events.

Relative devices and keyboards

The top part of an evemu-record output is the device description. This is a list of static properties that tells us what the device is capable of. For example, the USB mouse I have plugged in here prints:


# Input device name: "PIXART USB OPTICAL MOUSE"
# Input device ID: bus 0x03 vendor 0x93a product 0x2510 version 0x110
# Supported events:
# Event type 0 (EV_SYN)
# Event code 0 (SYN_REPORT)
# Event code 1 (SYN_CONFIG)
# Event code 2 (SYN_MT_REPORT)
# Event code 3 (SYN_DROPPED)
# Event code 4 ((null))
# Event code 5 ((null))
# Event code 6 ((null))
# Event code 7 ((null))
# Event code 8 ((null))
# Event code 9 ((null))
# Event code 10 ((null))
# Event code 11 ((null))
# Event code 12 ((null))
# Event code 13 ((null))
# Event code 14 ((null))
# Event type 1 (EV_KEY)
# Event code 272 (BTN_LEFT)
# Event code 273 (BTN_RIGHT)
# Event code 274 (BTN_MIDDLE)
# Event type 2 (EV_REL)
# Event code 0 (REL_X)
# Event code 1 (REL_Y)
# Event code 8 (REL_WHEEL)
# Event type 4 (EV_MSC)
# Event code 4 (MSC_SCAN)
# Properties:
The device name is the one (usually) set by the manufacturer and so are the vendor and product IDs. The bus is one of the "BUS_USB" and similar constants defined in /usr/include/linux/input.h. The version is often quite arbitrary, only a few devices have something meaningful here.

We also have a set of supported events, categorised by "event type" and "event code" (note how type and code are also part of the struct input_event). The type is a general category, and /usr/include/linux/input-event-codes.h defines quite a few of those. The most important types are EV_KEY (keys and buttons), EV_REL (relative axes) and EV_ABS (absolute axes). In the output above we can see that we have EV_KEY and EV_REL set.

As a subitem of each type we have the event code. The event codes for this device are self-explanatory: BTN_LEFT, BTN_RIGHT and BTN_MIDDLE are the left, right and middle button. The axes are a relative x axis, a relative y axis and a wheel axis (i.e. a mouse wheel). EV_MSC/MSC_SCAN is used for raw scancodes and you can usually ignore it. And finally we have the EV_SYN bits but let's ignore those, they are always set for all devices.

Note that an event code cannot be on its own, it must be a tuple of (type, code). For example, REL_X and ABS_X have the same numerical value and without the type you won't know which one is which.
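
This is also why capability checks always take the type and the code together. A small sketch using libevdev (the helper function is made up; the libevdev calls are real):

#include <stdio.h>
#include <libevdev/libevdev.h>

static void
print_x_axis_caps(struct libevdev *dev)
{
        /* REL_X and ABS_X share the numeric value 0, so the
         * event type is needed to tell them apart */
        if (libevdev_has_event_code(dev, EV_REL, REL_X))
                printf("has a relative x axis\n");
        if (libevdev_has_event_code(dev, EV_ABS, ABS_X))
                printf("has an absolute x axis\n");
}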

That's pretty much it. A keyboard will have a lot of EV_KEY bits set and the EV_REL axes are obviously missing (but not always...). Instead of BTN_LEFT, a keyboard would have e.g. KEY_ESC, KEY_A, KEY_B, etc. 90% of device debugging is looking at the event codes and figuring out which ones are missing or shouldn't be there.

Exercise: You should now be able to read an evemu-record description from any mouse or keyboard device connected to your computer and understand what it means. This also applies to most special devices such as remotes - the only thing that changes are the names for the keys/buttons. Just run sudo evemu-describe and pick any device in the list.

The events from relative devices and keyboards

evdev is a serialised protocol. It sends a series of events and then a synchronisation event to notify us that the preceding events all belong together. This synchronisation event, EV_SYN SYN_REPORT, is generated by the kernel, not the device; hence all EV_SYN codes are always available on all devices.

Let's have a look at a mouse movement. As explained above, half the line is machine-readable but we can ignore that bit and look at the human-readable output on the right.


E: 0.335996 0002 0000 0001 # EV_REL / REL_X 1
E: 0.335996 0002 0001 -002 # EV_REL / REL_Y -2
E: 0.335996 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
This means that within one hardware event, we've moved 1 device unit to the right (x axis) and two device units up (y axis). Note how all events have the same timestamp (0.335996).

Let's have a look at a button press:


E: 0.656004 0004 0004 589825 # EV_MSC / MSC_SCAN 589825
E: 0.656004 0001 0110 0001 # EV_KEY / BTN_LEFT 1
E: 0.656004 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
E: 0.727002 0004 0004 589825 # EV_MSC / MSC_SCAN 589825
E: 0.727002 0001 0110 0000 # EV_KEY / BTN_LEFT 0
E: 0.727002 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
For button events, a value of 1 signals button pressed, a value of 0 signals button released.

And key events look like this:


E: 0.000000 0004 0004 458792 # EV_MSC / MSC_SCAN 458792
E: 0.000000 0001 001c 0000 # EV_KEY / KEY_ENTER 0
E: 0.000000 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
E: 0.560004 0004 0004 458976 # EV_MSC / MSC_SCAN 458976
E: 0.560004 0001 001d 0001 # EV_KEY / KEY_LEFTCTRL 1
E: 0.560004 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
[....]
E: 1.172732 0001 001d 0002 # EV_KEY / KEY_LEFTCTRL 2
E: 1.172732 0000 0000 0001 # ------------ SYN_REPORT (1) ----------
E: 1.200004 0004 0004 458758 # EV_MSC / MSC_SCAN 458758
E: 1.200004 0001 002e 0001 # EV_KEY / KEY_C 1
E: 1.200004 0000 0000 0000 # ------------ SYN_REPORT (0) ----------
Mostly the same as button events. But wait, there is one difference: we have a value of 2 as well. For key events, a value 2 means "key repeat". If you're on the tty, then this is what generates repeat keys for you. In X and Wayland we ignore these repeat events and instead use XKB-based key repeat.
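
So an EV_KEY value has three defined meanings; a trivial (hypothetical) helper to spell them out:

#include <linux/input.h>

static const char *
key_state(const struct input_event *ev)
{
        switch (ev->value) {
        case 0:  return "released";
        case 1:  return "pressed";
        case 2:  return "repeat"; /* used on the tty, ignored in X/Wayland */
        default: return "unknown";
        }
}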

Now look at the keyboard events again and see if you can make sense of the sequence. We have an Enter release (but no press), then ctrl down (and repeat), followed by a 'c' press - but no release. The explanation is simple - as soon as I hit enter in the terminal, evemu-record started recording so it captured the enter release too. And it stopped recording as soon as ctrl+c was down because that's when it was cancelled by the terminal. One important takeaway here: the evdev protocol is not guaranteed to be balanced. You may see a release for a key you've never seen the press for, and you may be missing a release for a key/button you've seen the press for (this happens when you stop recording). Oh, and there's one danger: if you record your keyboard and you type your password, the keys will show up in the output. Security experts generally recommend not publishing event logs with your password in them.

Exercise: You should now be able to read an evemu-record events list from any mouse or keyboard device connected to your computer and understand the event sequence. This also applies to most special devices such as remotes - the only thing that changes are the names for the keys/buttons. Just run sudo evemu-record and pick any device listed.

Absolute devices

Things get a bit more complicated when we look at absolute input devices like a touchscreen or a touchpad. Yes, touchpads are absolute devices in hardware and the conversion to relative events is done in userspace by e.g. libinput. The output of my touchpad is below. Note that I've manually removed a few bits to make it easier to grasp; they will appear later in the multitouch discussion.


# Input device name: "SynPS/2 Synaptics TouchPad"
# Input device ID: bus 0x11 vendor 0x02 product 0x07 version 0x1b1
# Supported events:
# Event type 0 (EV_SYN)
# Event code 0 (SYN_REPORT)
# Event code 1 (SYN_CONFIG)
# Event code 2 (SYN_MT_REPORT)
# Event code 3 (SYN_DROPPED)
# Event code 4 ((null))
# Event code 5 ((null))
# Event code 6 ((null))
# Event code 7 ((null))
# Event code 8 ((null))
# Event code 9 ((null))
# Event code 10 ((null))
# Event code 11 ((null))
# Event code 12 ((null))
# Event code 13 ((null))
# Event code 14 ((null))
# Event type 1 (EV_KEY)
# Event code 272 (BTN_LEFT)
# Event code 325 (BTN_TOOL_FINGER)
# Event code 328 (BTN_TOOL_QUINTTAP)
# Event code 330 (BTN_TOUCH)
# Event code 333 (BTN_TOOL_DOUBLETAP)
# Event code 334 (BTN_TOOL_TRIPLETAP)
# Event code 335 (BTN_TOOL_QUADTAP)
# Event type 3 (EV_ABS)
# Event code 0 (ABS_X)
# Value 2919
# Min 1024
# Max 5112
# Fuzz 0
# Flat 0
# Resolution 42
# Event code 1 (ABS_Y)
# Value 3711
# Min 2024
# Max 4832
# Fuzz 0
# Flat 0
# Resolution 42
# Event code 24 (ABS_PRESSURE)
# Value 0
# Min 0
# Max 255
# Fuzz 0
# Flat 0
# Resolution 0
# Event code 28 (ABS_TOOL_WIDTH)
# Value 0
# Min 0
# Max 15
# Fuzz 0
# Flat 0
# Resolution 0
# Properties:
# Property type 0 (INPUT_PROP_POINTER)
# Property type 2 (INPUT_PROP_BUTTONPAD)
# Property type 4 (INPUT_PROP_TOPBUTTONPAD)
We have a BTN_LEFT again and a set of other buttons that I'll explain in a second. But first we look at the EV_ABS output. We have the same naming system as above. ABS_X and ABS_Y are the x and y axis on the device, ABS_PRESSURE is an (arbitrary) ranged pressure value.

Absolute axes have a bit more state than just a simple bit. Specifically, they have a minimum and maximum (not all hardware has the top-left sensor position on 0/0, it can be an arbitrary position, specified by the minimum). Notable here is that the axis ranges are simply the ones announced by the device - there is no guarantee that the values fall within this range and indeed a lot of touchpad devices tend to send values slightly outside that range. Fuzz and flat can be safely ignored, but resolution is interesting. It is given in units per millimeter and thus tells us the size of the device. In the above case, (5112 - 1024)/42 is roughly 97, so the device is 97mm wide. The resolution is quite commonly wrong; a lot of axis overrides need the resolution changed to the correct value.
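
The same calculation in code, via libevdev (a sketch; a real tool would check that the axis actually exists first):

#include <stdio.h>
#include <libevdev/libevdev.h>

static void
print_width(struct libevdev *dev)
{
        const struct input_absinfo *abs = libevdev_get_abs_info(dev, ABS_X);

        /* resolution is in units/mm, so range/resolution gives mm */
        if (abs && abs->resolution > 0)
                printf("width: %dmm\n",
                       (abs->maximum - abs->minimum) / abs->resolution);
}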

The axis description also has a current value listed. The kernel only sends events when the value changes, so even if the actual hardware keeps sending events, you may never see them in the output if the value remains the same. In other words, holding a finger perfectly still on a touchpad creates plenty of hardware events, but you won't see anything coming out of the event node.

Finally, we have properties on this device. These are used to indicate general information about the device that's not otherwise obvious. In this case INPUT_PROP_POINTER tells us that we need a pointer for this device (it is a touchpad after all, a touchscreen would instead have INPUT_PROP_DIRECT set). INPUT_PROP_BUTTONPAD means that this is a so-called clickpad, it does not have separate physical buttons but instead the whole touchpad clicks. Ignore INPUT_PROP_TOPBUTTONPAD because it only applies to the Lenovo *40 series of devices.

Ok, back to the buttons: aside from BTN_LEFT, we have BTN_TOUCH. This one signals that the user is touching the surface of the touchpad (with some in-kernel defined minimum pressure value). It's not just for finger touches, it's also used for graphics tablet stylus touches (so really, it's more "contact" than "touch" but meh).

The BTN_TOOL_FINGER event tells us that a finger is in detectable range. This gives us two bits of information: first, we have a finger (a tablet would have e.g. BTN_TOOL_PEN) and second, we may have a finger in proximity without touching. On many touchpads, BTN_TOOL_FINGER and BTN_TOUCH come in the same event, but others can detect a finger hovering over the touchpad too (in which case you'd also hope for ABS_DISTANCE being available on the touchpad).

Finally, the BTN_TOOL_DOUBLETAP up to BTN_TOOL_QUINTTAP tell us whether the device can detect 2 through to 5 fingers on the touchpad. This doesn't actually track the fingers, it merely tells you "3 fingers down" in the case of BTN_TOOL_TRIPLETAP.
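
Mapping those bits to a finger count is mechanical; a hypothetical helper as illustration:

#include <linux/input.h>

/* returns the finger count signalled by a BTN_TOOL_* code */
static int
tool_bit_to_finger_count(unsigned int code)
{
        switch (code) {
        case BTN_TOOL_FINGER:    return 1;
        case BTN_TOOL_DOUBLETAP: return 2;
        case BTN_TOOL_TRIPLETAP: return 3;
        case BTN_TOOL_QUADTAP:   return 4;
        case BTN_TOOL_QUINTTAP:  return 5;
        default:                 return 0;
        }
}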

Exercise: Look at your touchpad's description and figure out if the size of the touchpad is correct based on the axis information [1]. Check how many fingers your touchpad can detect and whether it can do pressure or distance detection.

The events from absolute devices

Events from absolute axes are not really any different than events from relative devices which we already covered. The same type/code combination with a value and a timestamp, all framed by EV_SYN SYN_REPORT events. Here's an example of me touching the touchpad:


E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 3335 # EV_ABS / ABS_X 3335
E: 0.000001 0003 0001 3308 # EV_ABS / ABS_Y 3308
E: 0.000001 0003 0018 0069 # EV_ABS / ABS_PRESSURE 69
E: 0.000001 0001 0145 0001 # EV_KEY / BTN_TOOL_FINGER 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.021751 0003 0018 0070 # EV_ABS / ABS_PRESSURE 70
E: 0.021751 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +21ms
E: 0.043908 0003 0000 3334 # EV_ABS / ABS_X 3334
E: 0.043908 0003 0001 3309 # EV_ABS / ABS_Y 3309
E: 0.043908 0003 0018 0065 # EV_ABS / ABS_PRESSURE 65
E: 0.043908 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +22ms
E: 0.052469 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.052469 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.052469 0001 0145 0000 # EV_KEY / BTN_TOOL_FINGER 0
E: 0.052469 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +9ms
In the first event you see BTN_TOOL_FINGER and BTN_TOUCH set (this touchpad doesn't detect hovering fingers). An x/y coordinate pair and a pressure value. The pressure changes in the second event, the third event changes pressure and location. Finally, we have BTN_TOOL_FINGER and BTN_TOUCH released on finger up, and the pressure value goes back to 0. Notice how the second event didn't contain any x/y coordinates? As I said above, the kernel only sends updates on absolute axes when the value changed.

Ok, let's look at a three-finger tap (again, minus the ABS_MT_ bits):


E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2149 # EV_ABS / ABS_X 2149
E: 0.000001 0003 0001 3747 # EV_ABS / ABS_Y 3747
E: 0.000001 0003 0018 0066 # EV_ABS / ABS_PRESSURE 66
E: 0.000001 0001 014e 0001 # EV_KEY / BTN_TOOL_TRIPLETAP 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.034209 0003 0000 2148 # EV_ABS / ABS_X 2148
E: 0.034209 0003 0018 0064 # EV_ABS / ABS_PRESSURE 64
E: 0.034209 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +34ms
[...]
E: 0.138510 0003 0000 4286 # EV_ABS / ABS_X 4286
E: 0.138510 0003 0001 3350 # EV_ABS / ABS_Y 3350
E: 0.138510 0003 0018 0055 # EV_ABS / ABS_PRESSURE 55
E: 0.138510 0001 0145 0001 # EV_KEY / BTN_TOOL_FINGER 1
E: 0.138510 0001 014e 0000 # EV_KEY / BTN_TOOL_TRIPLETAP 0
E: 0.138510 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +23ms
E: 0.147834 0003 0000 4287 # EV_ABS / ABS_X 4287
E: 0.147834 0003 0001 3351 # EV_ABS / ABS_Y 3351
E: 0.147834 0003 0018 0037 # EV_ABS / ABS_PRESSURE 37
E: 0.147834 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +9ms
E: 0.157151 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.157151 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.157151 0001 0145 0000 # EV_KEY / BTN_TOOL_FINGER 0
E: 0.157151 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
In the first event, the touchpad detected all three fingers at the same time. So we get BTN_TOUCH, x/y/pressure and BTN_TOOL_TRIPLETAP set. Note that the various BTN_TOOL_* bits are mutually exclusive. BTN_TOOL_FINGER means "exactly 1 finger down" and you can't have exactly 1 finger down when you have three fingers down. In the second event x and pressure update (y has no event, it stayed the same).

In the event after the break, we switch from three fingers to one finger. BTN_TOOL_TRIPLETAP is released, BTN_TOOL_FINGER is set. That's very common. Humans aren't robots, you can't release all fingers at exactly the same time, so depending on the hardware scanout rate you have intermediate states where one finger has left already, others are still down. In this case I released two fingers between scanouts, one was still down. It's not uncommon to see a full cycle from BTN_TOOL_FINGER to BTN_TOOL_DOUBLETAP to BTN_TOOL_TRIPLETAP on finger down or the reverse on finger up.

Exercise: test out the pressure values on your touchpad and see how close you can get to the actual announced range. Check how accurate the multifinger detection is by tapping with two, three, four and five fingers. (In both cases, you'll likely find that it's very much hit and miss).

Multitouch and slots

Now we're at the most complicated topic regarding evdev devices. In the case of multitouch devices, we need to send multiple touches on the same axes. So we need an additional dimension and that is called multitouch slots (there is another, older multitouch protocol that doesn't use slots but it is so rare now that you don't need to bother).

First: all axes that are multitouch-capable are repeated as an ABS_MT_foo axis. So if you have ABS_X, you also get ABS_MT_POSITION_X and both axes have the same axis ranges and resolutions. The reason here is backwards compatibility: if a device only sends multitouch events, older programs listening only to the ABS_X etc. events won't work. Some axes may only be available for single-touch (ABS_TOOL_WIDTH in this case).

Let's have a look at my touchpad, this time without the axes removed:


# Input device name: "SynPS/2 Synaptics TouchPad"
# Input device ID: bus 0x11 vendor 0x02 product 0x07 version 0x1b1
# Supported events:
# Event type 0 (EV_SYN)
# Event code 0 (SYN_REPORT)
# Event code 1 (SYN_CONFIG)
# Event code 2 (SYN_MT_REPORT)
# Event code 3 (SYN_DROPPED)
# Event code 4 ((null))
# Event code 5 ((null))
# Event code 6 ((null))
# Event code 7 ((null))
# Event code 8 ((null))
# Event code 9 ((null))
# Event code 10 ((null))
# Event code 11 ((null))
# Event code 12 ((null))
# Event code 13 ((null))
# Event code 14 ((null))
# Event type 1 (EV_KEY)
# Event code 272 (BTN_LEFT)
# Event code 325 (BTN_TOOL_FINGER)
# Event code 328 (BTN_TOOL_QUINTTAP)
# Event code 330 (BTN_TOUCH)
# Event code 333 (BTN_TOOL_DOUBLETAP)
# Event code 334 (BTN_TOOL_TRIPLETAP)
# Event code 335 (BTN_TOOL_QUADTAP)
# Event type 3 (EV_ABS)
# Event code 0 (ABS_X)
# Value 5112
# Min 1024
# Max 5112
# Fuzz 0
# Flat 0
# Resolution 41
# Event code 1 (ABS_Y)
# Value 2930
# Min 2024
# Max 4832
# Fuzz 0
# Flat 0
# Resolution 37
# Event code 24 (ABS_PRESSURE)
# Value 0
# Min 0
# Max 255
# Fuzz 0
# Flat 0
# Resolution 0
# Event code 28 (ABS_TOOL_WIDTH)
# Value 0
# Min 0
# Max 15
# Fuzz 0
# Flat 0
# Resolution 0
# Event code 47 (ABS_MT_SLOT)
# Value 0
# Min 0
# Max 1
# Fuzz 0
# Flat 0
# Resolution 0
# Event code 53 (ABS_MT_POSITION_X)
# Value 0
# Min 1024
# Max 5112
# Fuzz 8
# Flat 0
# Resolution 41
# Event code 54 (ABS_MT_POSITION_Y)
# Value 0
# Min 2024
# Max 4832
# Fuzz 8
# Flat 0
# Resolution 37
# Event code 57 (ABS_MT_TRACKING_ID)
# Value 0
# Min 0
# Max 65535
# Fuzz 0
# Flat 0
# Resolution 0
# Event code 58 (ABS_MT_PRESSURE)
# Value 0
# Min 0
# Max 255
# Fuzz 0
# Flat 0
# Resolution 0
# Properties:
# Property type 0 (INPUT_PROP_POINTER)
# Property type 2 (INPUT_PROP_BUTTONPAD)
# Property type 4 (INPUT_PROP_TOPBUTTONPAD)
We have an x and y position for multitouch as well as a pressure axis. There are also two special multitouch axes that aren't really axes: ABS_MT_SLOT and ABS_MT_TRACKING_ID. The former specifies which slot is currently active, the latter is used to track touch points.

Slots are a static property of a device. My touchpad, as you can see above, only supports 2 slots (min 0, max 1) and thus can track 2 fingers at a time. Whenever the first finger is set down its coordinates will be tracked in slot 0, the second finger will be tracked in slot 1. When the finger in slot 0 is lifted, the second finger continues to be tracked in slot 1, and if a new finger is set down, it will be tracked in slot 0. It sounds more complicated than it is; think of it as an array of possible touchpoints.

The tracking ID is an incrementing number that lets us tell touch points apart and also tells us when a touch starts and when it ends. The value is either -1 or a positive number. Any positive number means "new touch" and -1 means "touch ended". So when you put two fingers down and lift them again, you'll get a tracking ID of 1 in slot 0, a tracking ID of 2 in slot 1, then a tracking ID of -1 in both slots to signal they ended. The tracking ID value itself is meaningless, it simply increases as touches are created.
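
Put together, a minimal slot-tracking sketch could look like this (illustrative only; real consumers like libinput are considerably more careful, and NUM_SLOTS here just matches my touchpad's axis maximum + 1):

#include <linux/input.h>

#define NUM_SLOTS 2

struct touch_point {
        int tracking_id; /* -1 means "no touch in this slot" */
        int x, y;
};

static struct touch_point slots[NUM_SLOTS];
static int current_slot;

static void
handle_abs_event(const struct input_event *ev)
{
        if (ev->code == ABS_MT_SLOT) {
                current_slot = ev->value;
                return;
        }

        struct touch_point *t = &slots[current_slot];

        switch (ev->code) {
        case ABS_MT_TRACKING_ID:
                t->tracking_id = ev->value; /* >= 0: new touch, -1: ended */
                break;
        case ABS_MT_POSITION_X:
                t->x = ev->value;
                break;
        case ABS_MT_POSITION_Y:
                t->y = ev->value;
                break;
        }
}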

Let's look at a single tap:


E: 0.000001 0003 0039 0387 # EV_ABS / ABS_MT_TRACKING_ID 387
E: 0.000001 0003 0035 2560 # EV_ABS / ABS_MT_POSITION_X 2560
E: 0.000001 0003 0036 2905 # EV_ABS / ABS_MT_POSITION_Y 2905
E: 0.000001 0003 003a 0059 # EV_ABS / ABS_MT_PRESSURE 59
E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2560 # EV_ABS / ABS_X 2560
E: 0.000001 0003 0001 2905 # EV_ABS / ABS_Y 2905
E: 0.000001 0003 0018 0059 # EV_ABS / ABS_PRESSURE 59
E: 0.000001 0001 0145 0001 # EV_KEY / BTN_TOOL_FINGER 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.021690 0003 003a 0067 # EV_ABS / ABS_MT_PRESSURE 67
E: 0.021690 0003 0018 0067 # EV_ABS / ABS_PRESSURE 67
E: 0.021690 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +21ms
E: 0.033482 0003 003a 0068 # EV_ABS / ABS_MT_PRESSURE 68
E: 0.033482 0003 0018 0068 # EV_ABS / ABS_PRESSURE 68
E: 0.033482 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +12ms
E: 0.044268 0003 0035 2561 # EV_ABS / ABS_MT_POSITION_X 2561
E: 0.044268 0003 0000 2561 # EV_ABS / ABS_X 2561
E: 0.044268 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +11ms
E: 0.054093 0003 0035 2562 # EV_ABS / ABS_MT_POSITION_X 2562
E: 0.054093 0003 003a 0067 # EV_ABS / ABS_MT_PRESSURE 67
E: 0.054093 0003 0000 2562 # EV_ABS / ABS_X 2562
E: 0.054093 0003 0018 0067 # EV_ABS / ABS_PRESSURE 67
E: 0.054093 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 0.064891 0003 0035 2569 # EV_ABS / ABS_MT_POSITION_X 2569
E: 0.064891 0003 0036 2903 # EV_ABS / ABS_MT_POSITION_Y 2903
E: 0.064891 0003 003a 0059 # EV_ABS / ABS_MT_PRESSURE 59
E: 0.064891 0003 0000 2569 # EV_ABS / ABS_X 2569
E: 0.064891 0003 0001 2903 # EV_ABS / ABS_Y 2903
E: 0.064891 0003 0018 0059 # EV_ABS / ABS_PRESSURE 59
E: 0.064891 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +10ms
E: 0.073634 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.073634 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.073634 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.073634 0001 0145 0000 # EV_KEY / BTN_TOOL_FINGER 0
E: 0.073634 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +9ms
We have a tracking ID (387) signalling finger down, as well as a position plus pressure. Then there are some updates and eventually a tracking ID of -1 (signalling finger up). Notice how there is no ABS_MT_SLOT here - the kernel buffers those too, so while you stay in the same slot (0 in this case) you don't see any events for it. Also notice how you get both single-finger and multitouch data in the same event stream. This is for backwards compatibility. [2]

Ok, time for a two-finger tap:


E: 0.000001 0003 0039 0496 # EV_ABS / ABS_MT_TRACKING_ID 496
E: 0.000001 0003 0035 2609 # EV_ABS / ABS_MT_POSITION_X 2609
E: 0.000001 0003 0036 3791 # EV_ABS / ABS_MT_POSITION_Y 3791
E: 0.000001 0003 003a 0054 # EV_ABS / ABS_MT_PRESSURE 54
E: 0.000001 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.000001 0003 0039 0497 # EV_ABS / ABS_MT_TRACKING_ID 497
E: 0.000001 0003 0035 3012 # EV_ABS / ABS_MT_POSITION_X 3012
E: 0.000001 0003 0036 3088 # EV_ABS / ABS_MT_POSITION_Y 3088
E: 0.000001 0003 003a 0056 # EV_ABS / ABS_MT_PRESSURE 56
E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2609 # EV_ABS / ABS_X 2609
E: 0.000001 0003 0001 3791 # EV_ABS / ABS_Y 3791
E: 0.000001 0003 0018 0054 # EV_ABS / ABS_PRESSURE 54
E: 0.000001 0001 014d 0001 # EV_KEY / BTN_TOOL_DOUBLETAP 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.012909 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.012909 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.012909 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.012909 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.012909 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.012909 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.012909 0001 014d 0000 # EV_KEY / BTN_TOOL_DOUBLETAP 0
E: 0.012909 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +12ms
This was a really quick two-finger tap that illustrates the tracking IDs nicely. In the first event we get a touch down, then an ABS_MT_SLOT event. This tells us that subsequent events belong to the other slot, so it's the other finger. There too we get a tracking ID + position. In the next event we get an ABS_MT_SLOT to switch back to slot 0. Tracking ID of -1 means that touch ended, and then we see the touch in slot 1 ended too.

Time for a two-finger scroll:


E: 0.000001 0003 0039 0557 # EV_ABS / ABS_MT_TRACKING_ID 557
E: 0.000001 0003 0035 2589 # EV_ABS / ABS_MT_POSITION_X 2589
E: 0.000001 0003 0036 3363 # EV_ABS / ABS_MT_POSITION_Y 3363
E: 0.000001 0003 003a 0048 # EV_ABS / ABS_MT_PRESSURE 48
E: 0.000001 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.000001 0003 0039 0558 # EV_ABS / ABS_MT_TRACKING_ID 558
E: 0.000001 0003 0035 3512 # EV_ABS / ABS_MT_POSITION_X 3512
E: 0.000001 0003 0036 3028 # EV_ABS / ABS_MT_POSITION_Y 3028
E: 0.000001 0003 003a 0044 # EV_ABS / ABS_MT_PRESSURE 44
E: 0.000001 0001 014a 0001 # EV_KEY / BTN_TOUCH 1
E: 0.000001 0003 0000 2589 # EV_ABS / ABS_X 2589
E: 0.000001 0003 0001 3363 # EV_ABS / ABS_Y 3363
E: 0.000001 0003 0018 0048 # EV_ABS / ABS_PRESSURE 48
E: 0.000001 0001 014d 0001 # EV_KEY / BTN_TOOL_DOUBLETAP 1
E: 0.000001 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +0ms
E: 0.027960 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.027960 0003 0035 2590 # EV_ABS / ABS_MT_POSITION_X 2590
E: 0.027960 0003 0036 3395 # EV_ABS / ABS_MT_POSITION_Y 3395
E: 0.027960 0003 003a 0046 # EV_ABS / ABS_MT_PRESSURE 46
E: 0.027960 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.027960 0003 0035 3511 # EV_ABS / ABS_MT_POSITION_X 3511
E: 0.027960 0003 0036 3052 # EV_ABS / ABS_MT_POSITION_Y 3052
E: 0.027960 0003 0000 2590 # EV_ABS / ABS_X 2590
E: 0.027960 0003 0001 3395 # EV_ABS / ABS_Y 3395
E: 0.027960 0003 0018 0046 # EV_ABS / ABS_PRESSURE 46
E: 0.027960 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +27ms
E: 0.051720 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.051720 0003 0035 2609 # EV_ABS / ABS_MT_POSITION_X 2609
E: 0.051720 0003 0036 3447 # EV_ABS / ABS_MT_POSITION_Y 3447
E: 0.051720 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.051720 0003 0036 3080 # EV_ABS / ABS_MT_POSITION_Y 3080
E: 0.051720 0003 0000 2609 # EV_ABS / ABS_X 2609
E: 0.051720 0003 0001 3447 # EV_ABS / ABS_Y 3447
E: 0.051720 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +24ms
[...]
E: 0.272034 0003 002f 0000 # EV_ABS / ABS_MT_SLOT 0
E: 0.272034 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.272034 0003 002f 0001 # EV_ABS / ABS_MT_SLOT 1
E: 0.272034 0003 0039 -001 # EV_ABS / ABS_MT_TRACKING_ID -1
E: 0.272034 0001 014a 0000 # EV_KEY / BTN_TOUCH 0
E: 0.272034 0003 0018 0000 # EV_ABS / ABS_PRESSURE 0
E: 0.272034 0001 014d 0000 # EV_KEY / BTN_TOOL_DOUBLETAP 0
E: 0.272034 0000 0000 0000 # ------------ SYN_REPORT (0) ---------- +30ms
Note that "scroll" is something handled in userspace, so what you see here is just a two-finger move. Everything in there i something we've already seen, but pay attention to the two middle events: as updates come in for each finger, the ABS_MT_SLOT changes before the upates are sent. The kernel filter for identical events is still in effect, so in the third event we don't get an update for the X position on slot 1. The filtering is per-touchpoint, so in this case this means that slot 1 position x is still on 3511, just as it was in the previous event.

That's all you have to remember, really. If you think of evdev as a serialised way of sending an array of touchpoints, with the slots as the indices, then it should be fairly clear. The rest is then just about actually looking at the touch positions and making sense of them.

Exercise: do a pinch gesture on your touchpad. See if you can track the two fingers moving closer together. Then do the same but only move one finger. See how the non-moving finger gets fewer updates.

That's it. There are a few more details to evdev but much of that is just more event types and codes. The few details you really have to worry about when processing events are either documented in libevdev or abstracted away completely. The above should be enough to understand what your device does, and what goes wrong when your device isn't working. Good luck.

[1] If not, file a bug against systemd's hwdb and CC me so we can put corrections in
[2] We treat some MT-capable touchpads as single-touch devices in libinput because the MT data is garbage

September 17, 2016

systemd.conf 2016 Workshop Tickets Available

Tickets for the systemd.conf 2016 workshop day are still available!

We still have a number of tickets for the workshop day of systemd.conf 2016 available. If you are a newcomer to systemd and would like to learn about various systemd facilities, or if you already know your way around but would like to know more: this is the best chance to do so. The workshop day is the 28th of September, one day before the main conference, at the betahaus in Berlin, Germany. The schedule for the day is available here. There are five interesting, extensive sessions, run by the systemd hackers themselves. Who better to learn systemd from than the folks who wrote it?

Note that the workshop day and the main conference days require different tickets. (Also note: there are still a few tickets available for the main conference!).

Buy a ticket here.

See you in Berlin!

September 16, 2016

synaptics pointer acceleration

libinput's touchpad acceleration is the cause of a few bugs and of outcry from a quite vocal (maj|in)ority. A common suggestion is "make it like the synaptics driver". So I spent a few hours going through the pointer acceleration code to figure out what xf86-input-synaptics actually does (I don't think anyone knows at this point) [1].

If you just want the TLDR: synaptics doesn't use physical distances but works in device units coupled with a few magic factors, also based on device units. That pretty much tells you all that's needed.

Also a disclaimer: the last time some serious work was done on acceleration was in 2008/2009. A lot has changed since then, and since the server is effectively un-testable, we ended up with the mess below that seems to make little sense. It probably made sense 8 years ago, and given that most or all of the patches have my signed-off-by it must've made sense to me back then. But now we live in the glorious future and holy cow it's awful and confusing.

Synaptics has three options to configure speed: MinSpeed, MaxSpeed and AccelFactor. The first two are not explained beyond "speed factor" but given how accel usually works, let's assume they somehow work as a multiplier on the delta (so a factor of 2 on a delta of dx/dy gives you 2dx/2dy). AccelFactor is documented as "acceleration factor for normal pointer movements", so clearly the documentation isn't going to help clear any confusion.

I'll skip the fact that synaptics also has a pressure-based motion factor with four configuration options because oh my god what have we done. Also, that one is disabled by default and has no effect unless set by the user. And I'll only handle default values here; I'm not going to get into examples with configured values.

Also note: synaptics has a device-specific acceleration profile (the only driver that does) and thus the acceleration handling is split between the server and the driver.

Ok, let's get started. MinSpeed and MaxSpeed default to 0.4 and 0.7. The MinSpeed is used to set constant acceleration (1/min_speed) so we always apply a 2.5 constant acceleration multiplier to deltas from the touchpad. Of course, if you set constant acceleration in the xorg.conf, then it overwrites the calculated one.

MinSpeed and MaxSpeed are mangled during setup so that MaxSpeed is actually MaxSpeed/MinSpeed and MinSpeed is always 1.0. I'm not 100% sure why, but the later clipping to the min/max speed range ensures that we never go below a 1.0 acceleration factor (and thus never decelerate).

The AccelFactor default is 200/diagonal-in-device-coordinates. On my T440s it's thus 0.04 (and will be roughly the same for most PS/2 Synaptics touchpads). But on a Cyapa with a different axis range it is 0.125. On a T450s it's 0.035 when booted into PS2 and 0.09 when booted into RMI4. Admittedly, the resolution halves under RMI4 so this possibly maybe makes sense. It doesn't quite make as much sense when you consider the x220t, which also has a factor of 0.04 but whose touchpad is only half the size of the T440s.

There's also a magic constant "corr_mul" which is set as:


/* synaptics seems to report 80 packet/s, but dix scales for
* 100 packet/s by default. */
pVel->corr_mul = 12.5f; /*1000[ms]/80[/s] = 12.5 */
It's correct that the frequency is roughly 80Hz but I honestly don't know what the 100 packets/s reference refers to. Either way, it means that we always apply a factor of 12.5, regardless of the timing of the events. Ironically, this one is hardcoded and not configurable unless you happen to know that it's the X server option VelocityScale or ExpectedRate (both of them set the same variable).

Ok, so we have three factors. 2.5 as a function of MinSpeed, 12.5 because of 80Hz (??) and 0.04 for the diagonal.

When the synaptics driver calculates a delta, it does so in device coordinates and ignores the device resolution (because this code pre-dates devices having resolutions). That's great until you have a device with uneven resolutions like the x220t. That one has 75 and 129 units/mm for x and y, so for any physical movement you're going to get almost twice as many units for y than for x. Which means that if you move 5mm to the right you end up with a different motion vector (and thus acceleration) than when you move 5mm south.

The core X protocol actually defines how acceleration is supposed to be handled. Look up the man page for XChangePointerControl(), it sets a threshold and an accel factor:

The XChangePointerControl function defines how the pointing device moves. The acceleration, expressed as a fraction, is a multiplier for movement. For example, specifying 3/1 means the pointer moves three times as fast as normal. The fraction may be rounded arbitrarily by the X server. Acceleration only takes effect if the pointer moves more than threshold pixels at once and only applies to the amount beyond the value in the threshold argument.
Of course, "at once" is a bit of a blurry definition outside of maybe theoretical physics. Consider the definition of "at once" for a gaming mouse with 500Hz sampling rate vs. a touchpad with 80Hz (let us fondly remember the 12.5 multiplier here) and the above description quickly dissolves into ambiguity.

Anyway, moving on. Let's say the server just received a delta from the synaptics driver. The pointer accel code in the server calculates the velocity over time, basically by doing a hypot(dx, dy)/dtime-to-last-event. Time in the server is always in ms, so our velocity is thus in device-units/ms (not adjusted for device resolution).
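
In code terms, the estimate boils down to something like this (a simplified sketch; as the side-note below explains, the real code accumulates this across several events):

#include <math.h>

/* velocity in device-units per millisecond; the device
 * resolution is not taken into account */
static double
estimate_velocity(double dx, double dy, double dtime_ms)
{
        return hypot(dx, dy) / dtime_ms;
}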

Side-note: the velocity is calculated across several delta events so it gets more accurate. There are some checks though so we don't calculate across random movements: anything older than 300ms is discarded, anything not in the same octant of movement is discarded (so we don't get a velocity of 0 for moving back/forth). And there are two calculations to make sure we only calculate while the velocity is roughly the same and don't average between fast and slow movements. I have my doubts about these, but until I have some more concrete data let's just say this is accurate (although since the whole lot is in device units, it probably isn't).

Anyway. The velocity is multiplied with the constant acceleration (2.5, see above) and our 12.5 magic value. I'm starting to think that this is just broken and would only make sense if we used a delta of "event count" rather than milliseconds.

It is then passed to the synaptics driver for the actual acceleration profile. The first thing the driver does is remove the constant acceleration again, so our velocity is now just v * 12.5. According to the comment this brings it back into "device-coordinate based velocity" but this seems wrong or misguided since we never changed into any other coordinate system.

The driver applies the accel factor (0.04, see above) and then clips the whole lot into the MinSpeed/MaxSpeed range (which is adjusted to move MinSpeed to 1.0 and scale up MaxSpeed accordingly, remember?). After the clipping, the pressure motion factor is calculated and applied. I skipped this above but it's basically: the harder you press the higher the acceleration factor. Based on some config options. Amusingly, pressure motion has the potential to exceed the MinSpeed/MaxSpeed options. Who knows what the reason for that is...

Oh, and btw: the clipping is actually done based on the accel factor set by XChangePointerControl into the acceleration function here. The code is


double acc = factor from XChangePointerControl();
double factor = the magic 0.04 based on the diagonal;

accel_factor = velocity * factor;
if (accel_factor > MaxSpeed * acc)
        accel_factor = MaxSpeed * acc;
So we have a factor set by XChangePointerControl() but it's only used to determine the maximum factor we may have, and then we clip to that. I'm missing some cross-dependency here because this is what the GUI acceleration config bits hook into. Somewhere this sets things and changes the acceleration by some amount but it wasn't obvious to me.

Alrighty. We have a factor now that's returned to the server and we're back in normal pointer acceleration land (i.e. not synaptics-specific). Woohoo. That factor is averaged across 4 events using Simpson's rule to smooth out abrupt changes. Not sure this really does much, I don't think we've ever done any evaluation on that. But it looks good on paper (we have that in libinput as well).

Now the constant accel factor is applied to the deltas. So far we've added the factor, removed it (in synaptics), and now we're adding it again. Which also makes me wonder whether we're applying the factor twice to all other devices, but right now I'm past the point where I really want to find out. With all the above, our acceleration factor is, more or less:


f = units/ms * 12.5 * (200/diagonal) * (1.0/MinSpeed)
and the deltas we end up using in the server are

(dx, dy) = f * (dx, dy)
But remember, we're still in device units here (not adjusted for resolution).
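
As a sanity check, plugging in the axis ranges of the SynPS/2 touchpad quoted in the evdev post above (my arithmetic, not the driver's output):

diagonal = hypot(5112 - 1024, 4832 - 2024) ~= 4960 device units
AccelFactor = 200/4960 ~= 0.04
f = v * 12.5 * 0.04 * (1.0/0.4) = 1.25 * v   (v in device-units/ms)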

Anyway. You think we're finished? Oh no, the real fun bits start now. And if you haven't headdesked in a while, now is a good time.

After acceleration, the server does some scaling because synaptics is an absolute device (with axis ranges) in relative mode [2]. Absolute devices are mapped into the whole screen by default but when they're sending relative events, you still want a 45 degree line on the device to map into 45 degree cursor movement on the screen. The server does this by adjusting dy in-line with the device-to-screen-ratio (taking device resolution into account too). On my T440s this means:


touchpad x:y is 1:1.45 (16:11)
screen 1920:1080 is 1:1.77 (16:9)

dy scaling is thus: (16:11)/(16:9) = 9:11 -> y * 11/9
dx is left as-is. Now you have the delta that's actually applied to the cursor. Except that we're in device coordinates, so we map the current cursor position to device coordinates, then apply the delta, then map back into screen coordinates (i.e. pixels). You may have spotted the flaw here: when the screen size changes, the dy scaling changes and thus the pointer feel. Plug in another monitor, and touchpad acceleration changes. Also: the same touchpad feels different on laptops when their screen hardware differs.

Ok, let's wrap this up. Figuring out what the synaptics driver does is... "tricky". It seems much like a glorified random number scheme. I'm not planning to implement "exactly the same acceleration as synaptics" in libinput because this would be insane and, despite my best efforts, I'm not that insane yet. Collecting data from synaptics users is almost meaningless, because no two devices really employ the same acceleration profile (touchpad axis ranges + screen size) and besides, there are 11 configuration options that all influence each other.

What I do plan though is collect more motion data from a variety of touchpads and see if I can augment the server enough that I can get a clear picture of how motion maps to the velocity. If nothing else, this should give us some picture on how different the various touchpads actually behave.

But regardless, please don't ask me to "just copy the synaptics code".

[1] fwiw, I had this really great idea of trying to get behind all this, with diagrams and everything. But then I was printing json data from the X server into the journal to be scooped up by sed and python script to print velocity data. And I questioned some of my life choices.
[2] why the hell do we do this? because synaptics at some point became a device that announces the axis ranges (seemed to make sense at the time, 2008) and then other things started depending on it and with all the fixes to the server to handle absolute devices in relative mode (for tablets) we painted ourselves into a corner. Synaptics should switch back to being a relative device, but last I tried it breaks pointer acceleration and that a) makes the internets upset and b) restoring the "correct" behaviour is, well, you read the article so far, right?

September 14, 2016

LVFS and ODRS are down

The LVFS firmware server and ODRS reviews server are down because my credit card registered with OpenShift expired. I’ve updated my credit card details, paid the pending invoice and still can’t start any server. I rang customer service, who asked me to send an email; I have heard nothing back.


I have backups a few days old, but this whole situation is terrible on so many levels.

EDIT: cdaley has got everything back working again; it appears I found a corner case in the code that deals with payments.

September 09, 2016

Input threads in the X server

A great new feature has been merged during this 1.19 X server development cycle: we're now using threads for input [1]. Previously, there were two options for how an input driver would pass on events to the X server: polling or from within the signal handler. Polling simply adds all input devices' file descriptors to a select(2) loop that is processed in the mainloop of the server. The downside here is that if the server is busy rendering something, your input is delayed until that rendering is complete. Historically, polling was primarily used by the keyboard driver because it just doesn't matter much when key strokes are delayed. Both because you need the client to render them anyway (which it can't when it's busy) and possibly also because we're just so bloody used to typing delays.

The signal handler approach circumvented the delays by installing a SIGIO handler for each input device fd and calling that when any input occurs. This effectively interrupts the process until the signal handler completes, regardless of what the server is currently busy with. A great solution to provide immediate visible cursor movement (hence it is used by evdev, synaptics, wacom, and most of the now-retired legacy drivers) but it comes with a few side effects. First of all, because the main process is interrupted, the bit where we read the events must be completely separate from the bit where we process the events. That's easy enough; we've had an input event queue in the server for as long as I've been involved with X.Org development (~2006). The drivers push events into the queue during the signal handler, and in the main loop the server reads them and processes them. In a busy server that may be several seconds after the pointer motion was performed on the screen but hey, it still feels responsive.

The bigger issue with the use of a signal handler is: you can't use malloc [2]. Or anything else useful. Look at the man page for signal(7); it literally has a list of allowed functions. This leads to two weird side-effects: one is that you have to pre-allocate everything you may ever need for event processing, the other is that you need to re-implement any function that is not currently async signal safe. The server actually has its own implementation of printf for this reason (for error logging). Let's just say this is ... suboptimal. Coincidentally, libevdev is mostly async signal safe for that reason too. It also means you can't use any libraries, because no-one [3] is insane enough to make libraries async signal-safe.

We were still mostly "happy" with it until libinput came along. libinput is a full input stack and expecting it to work within a signal handler is somewhere between optimistic, masochistic and sadistic. The xf86-input-libinput driver doesn't use the signal handler, and the side effect of this is that a desktop with libinput didn't feel as responsive when the server was busy rendering.

Keith Packard stepped in and switched the server from the signal handler to using input threads. Or more specifically: one input thread on top of the main thread. That thread controls all the input devices' file descriptors and continuously reads events off them. It otherwise provides the same functionality the signal handler did before: visible pointer movement and shoving events into the event queue for the main thread to process later. But of course, once you switch to threads, you now have two problems. A signal handler is "threading light": only one code path can be interrupted, and you know you continue where you left off. So synchronisation primitives are easier than with threads, where both code paths continue independently. Keith replaced the previous xf86BlockSIGIO() calls with corresponding input_lock() and input_unlock() calls and all the main drivers have been switched over. Some interesting race conditions kept happening, but as of today we think most of these are solved.
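
For driver authors the change is mostly mechanical. A hypothetical sketch of the new pattern (the function and its body are made up; input_lock()/input_unlock() are the real server API mentioned above):

/* before: xf86BlockSIGIO()/xf86UnblockSIGIO() around the critical
 * section; now the same section runs under the input thread lock */
static void
example_remove_device(InputInfoPtr pInfo)
{
        (void)pInfo; /* placeholder for real per-device state */

        input_lock();
        /* mutate state shared with the input thread here,
         * e.g. closing fds or freeing per-device structs */
        input_unlock();
}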

The best test we have at this point is libinput's internal test suite. It creates roughly 5000 devices within about 4 minutes and thus triggers most code paths to do with device addition and removal, especially the overlaps between devices sending events before/during/after they get added and/or removed. This is the largest source of possible errors as these are the code paths with the most amount of actual simultaneous access to the input devices by both threads. But what the test suite can't test is normal everyday use. So until we get some more code maturity, expect the occasional crash and please do file bug reports. They'll be hard to reproduce and detect, but don't expect us to run into the same race conditions by accident.

[1] Yes, your calendar is right, it is indeed 2016, not the 90s or so
[2] Historical note: we actually mostly ignored this until about 2010 or so when glibc changed the malloc implementation and the server was just randomly hanging whenever we tried to malloc from within the signal handler. Users claimed this was bad UX, but I think it's right up there with motif.
[3] yeah, yeah, I know, there's always exceptions.

September 06, 2016

Fedora: Cinnamon, MATE and the broken GNOME touchpad panel

On Fedora, if you have mate-desktop or cinnamon-desktop installed, your GNOME touchpad configuration panel won't work (see Bug 1338585). Both packages install a symlink to assign the synaptics driver to the touchpad. But GNOME's control-center does not support synaptics anymore, so no touchpad is detected. Note that the issue occurs regardless of whether you use MATE/Cinnamon; merely installing it is enough.

Unfortunately, there is no good solution to this issue. Long-term both MATE and Cinnamon should support libinput but someone needs to step up and implement it. We don't support run-time driver selection in the X server, so an xorg.conf.d snippet is the only way to assign a touchpad driver. And this means that you have to decide whether GNOME's or MATE/Cinnamon's panel is broken at X start-up time.

If you need the packages installed but you're not actually using MATE/Cinnamon itself, remove the following symlinks (whichever is present on your system):


# rm /etc/X11/xorg.conf.d/99-synaptics-mate.conf
# rm /etc/X11/xorg.conf.d/99-synaptics-cinnamon.conf
# rm /usr/share/X11/xorg.conf.d/99-synaptics-mate.conf
# rm /usr/share/X11/xorg.conf.d/99-synaptics-cinnamon.conf
The /usr/share paths are the old ones and have been replaced with the /etc/ symlinks in cinnamon-desktop-3.0.2-2.fc25 and mate-desktop-1.15.1-4.fc25 and their F24 equivalents.

libinput and the Lenovo T450 and T460 series touchpads

I'm using T450 and T460 as reference but this affects all laptops from the Lenovo *50 and *60 series. The Lenovo T450 and T460 have the same touchpad hardware, but unfortunately it suffers from what is probably a firmware issue. On really slow movements, the pointer has a halting motion. That effect disappears when the finger moves faster.

The observable effect is that of a pointer stalling, then jumping by 20 or so pixels. We have had a quirk for this in libinput since March 2016 (see commit a608d9) and detect this at runtime for selected models. In particular, what we do is look for a sequence of events that only update the pressure values but not the x/y position of the finger. This is a good indication that the bug is triggering. While it's possible to trigger pressure changes alone, triggering several in a row without a change in the x/y coordinates is extremely unlikely. Remember that these touchpads have a resolution of ~40 units per mm - you cannot hold your finger that still while changing pressure [1]. Once we see such pressure-only changes, we reset the motion history we keep for each touch. The next event with an x/y coordinate will thus not calculate the delta to the previous position and not trigger a move. The event after that is handled normally again. This avoids the extreme jumps, but there isn't anything we can do about the stalling - we never get the event from the kernel. [2]
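
The logic boils down to something like this sketch (names, structure and the threshold are illustrative, not the actual libinput code):

/* illustrative sketch of the quirk, not the actual libinput code */
#define PRESSURE_ONLY_THRESHOLD 3        /* made-up value */

struct tp_event {
    int x, y;
    int has_xy;                          /* event updated x/y */
    int has_pressure;                    /* event updated pressure */
};

struct touch {
    int last_x, last_y;
    int history_valid;                   /* stands in for the motion history */
    int pressure_only_count;
};

static void emit_motion(int dx, int dy); /* hypothetical: sends the delta on */

static void process_event(struct touch *t, const struct tp_event *ev)
{
    if (ev->has_pressure && !ev->has_xy) {
        /* position frozen while the pressure changes: the firmware bug */
        if (++t->pressure_only_count >= PRESSURE_ONLY_THRESHOLD)
            t->history_valid = 0;        /* reset: next x/y gives no delta */
        return;
    }
    t->pressure_only_count = 0;
    if (t->history_valid)
        emit_motion(ev->x - t->last_x, ev->y - t->last_y);
    t->last_x = ev->x;
    t->last_y = ev->y;
    t->history_valid = 1;
}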

Anyway. This bug popped up again elsewhere so this time I figured I'd analyse the data more closely. Specifically, I wrote a script that collected all x/y coordinates of a touchpad recording [3] and produced a black and white image of all device coordinates sent. This produces a graphic that's interesting but not overly useful:


Roughly 37000 touchpad events. You'll have to zoom in to see the actual pixels.
I modified the script to assume a white background and colour any x/y coordinate that was never hit black. So an x coordinate of 50 would now produce a vertical 1 pixel line at 50, a y coordinate of 70 a horizontal line at 70, etc. Any pixel that remains white is a coordinate that is hit at some point, anything black was unreachable. This produced more interesting results. Below is the graphic of a short, slow movement right to left.

A single short slow finger movement
You can clearly see the missing x coordinates. More specifically, there are some events, then a large gap, then events again. That gap is the stalling cursor where we didn't get any x coordinates. My first assumption was that it may be a sensor issue and that some areas on the touchpad just don't trigger. So what I did was move my finger around the whole touchpad to try to capture as many x and y coordinates as possible.
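
For the curious, the core of such a script is tiny. Here is a rough C equivalent of the modified version, assuming an evemu-style text recording on stdin and writing a PGM image to stdout (the real script differs in the details):

#include <stdio.h>

#define MAX_COORD 4096   /* comfortably larger than the axis ranges */

static unsigned char seen_x[MAX_COORD], seen_y[MAX_COORD];

int main(void)
{
    char line[256];
    double ts;
    unsigned int type, code;
    int value;

    /* evemu event lines look like: E: 1.234567 0003 0000 1024 */
    while (fgets(line, sizeof(line), stdin)) {
        if (sscanf(line, "E: %lf %x %x %d", &ts, &type, &code, &value) != 4)
            continue;
        if (type != 0x03 || value < 0 || value >= MAX_COORD)
            continue;                         /* EV_ABS events only */
        if (code == 0x00) seen_x[value] = 1;  /* ABS_X */
        if (code == 0x01) seen_y[value] = 1;  /* ABS_Y */
    }

    /* a pixel stays white only if both its x and y were hit at least once */
    printf("P2\n%d %d\n1\n", MAX_COORD, MAX_COORD);
    for (int y = 0; y < MAX_COORD; y++) {
        for (int x = 0; x < MAX_COORD; x++)
            printf("%d ", seen_x[x] && seen_y[y] ? 1 : 0);
        printf("\n");
    }
    return 0;
}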

Let's have a look at the recording from a T440 first because it doesn't suffer from this issue:


Sporadic black lines indicating unused coordinates but the center is purely white, indicating every device unit was hit at some point
Ok, looks roughly ok. The black areas are irregular, on the edges and likely caused by me just not covering those areas correctly. In the center it's white almost everywhere, that's where the most events were generated. And now let's compare this to a T450:

A visible grid of unreachable device units
The difference is quite noticeable, especially if you consider that the T440 recording had under 15000 events, the T450 recording had almost 37000. The T450 has a patterned grid of unreachable positions. But why? We currently use the PS/2 protocol to talk to the device but we should be using RMI4 over SMBus instead (which is what Windows has done for a while and luckily the RMI4 patches are on track for kernel 4.9). Once we talk to the device in its native protocol we see a resolution of ~20 units/mm and it looks like the T440 output:

With RMI4, the grid disappears
Ok, so the problem is not missing coordinates in the sensor. And besides, at the resolution this touchpad has, a single 'pixel' not triggering shouldn't be much of a problem anyway.

Maybe the issue had to do with horizontal movements or something? The next approach was for me to move my finger slowly from one side of the touchpad to the other. That's actually hard to do consistently when you're not a robot, so the results are bound to be slightly different. On the T440:


The x coordinates are sporadic with many missing ones, but the y coordinates are all covered
You can clearly see where the finger moved left to right. The big black gaps on the x coordinates mostly reflect me moving too fast but you can see how the distance narrows, indicating slower movements. Most importantly: vertically, the strip is uniformly white, meaning that within that range I hit every y coordinate at least once. And the recording from the T450:

Only one gap in the y range, sporadic gaps in the x range
Well, still looks mostly the same, so what is happening here? Ok, last test: This time an extremely slow motion left to right. It took me 87 seconds to cover the touchpad. In theory this should render the whole strip white if all x coordinates are hit. But look at this:

An extremely slow finger movement
Ok, now we see the problem. This motion was slow enough that almost every x coordinate should have been hit at least once. But there are large gaps and most notably: larger gaps than in the recording above that was a faster finger movement. So what we have here is not an actual hardware sensor issue; the firmware is working against us, filtering things out. Unfortunately, that's also the worst result because while hardware issues can usually be worked around, firmware issues are a lot more subtle and less predictable. We've also verified that newer firmware versions don't fix this, and trying out some tweaks in the firmware didn't change anything either.

Windows is affected by this too and so is the synaptics driver. But it's not really noticeable on either, and all reports so far were against libinput, with some even claiming that it doesn't manifest with synaptics. But each time we investigated in more detail it turned out that the issue is still there (synaptics uses the same kernel data after all); because of different acceleration methods users just don't trigger it. So my current plan is to change the pointer acceleration to match something closer to what synaptics does on these devices. That's tricky because synaptics is mostly black magic (e.g. synaptics' pointer acceleration depends on screen resolution) and hard to reproduce. Either way, until that is sorted at least this post serves as a link to point people to.

Many thanks to Andrew Duggan from Synaptics and Benjamin Tissoires for helping out with the analysis and testing of all this.

[1] Because pressing down on a touchpad flattens your finger and thus changes the shape slightly. While you can hold a finger still, you cannot control that shape.
[2] Yes, predictive movement would be possible but it's very hard to get this right
[3] These are events as provided by the kernel and unaffected by anything in the userspace stack

September 02, 2016

Fedora 25 and Additional Software Sources

I was asked to produce a checklist for applications that we want to show up in GNOME Software in Fedora 25. In this post I use “application” to mean a graphical program, rather than other system add-on components like drivers and codecs (which the next post will talk about). There is a big checklist, which really is the bare minimum that the distributor has to provide so that the application is listed correctly. If any of these points is causing problems or is confusing, please let me know and I’ll do my best to help.

So, these things really have to be done:

  • Verify that you ship a .desktop file for each built application, and that these keys exist: Name, Comment, Icon, Categories, Keywords and Exec, and that desktop-file-validate correctly validates the file.
  • Verify that a PNG (with transparent background) or SVG icon is installed in /usr/share/icons, /usr/share/icons/hicolor/*/apps/*, or /usr/share/${app_name}/icons/* and is at least 64×64 in size.
  • At least one valid AppData file with the suffix .appdata.xml must be installed into /usr/share/appdata with an <id> that matches the name of the .desktop file, e.g. gimp.appdata.xml (a minimal example follows this list). Ideally the names of both the desktop file and the AppData file should be reverse-DNS, e.g. com.hughski.ColorHug.desktop rather than colorhug-client.desktop, although this isn’t critically important.
  • Include several 16:9 aspect screenshots in the AppData file along with a compelling translated description made up of multiple paragraphs. Make sure you follow the style guide, which can be tested using appstream-util validate foo.appdata.xml
  • Make sure that there are not two applications installed with one package; in this case split up the package so that there are multiple subpackages or mark one of the .desktop files as NoDisplay=true. Make sure the application-subpackages depend on any -common subpackage and deal with upgrades (perhaps using a metapackage) if you’ve shipped the application before.
  • Make sure your application is visible in the example.xml.gz file when running appstream-builder on the binary rpm(s).
  • Make sure the AppStream metadata is regenerated when the application is updated in the repo; for more details see an entire blog post on this
  • Ensure that enabled_metadata=1 is set in the .repo file. This means that PackageKit will automatically download just the application metadata even when the repository is disabled.
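
As promised above, here is a minimal sketch of an AppData file that ticks those boxes (the values are illustrative; the style guide has the full requirements):

<?xml version="1.0" encoding="UTF-8"?>
<component type="desktop">
  <id>gimp.desktop</id>
  <metadata_license>CC0-1.0</metadata_license>
  <project_license>GPL-3.0+</project_license>
  <name>GIMP</name>
  <summary>Create images and edit photographs</summary>
  <description>
    <p>
      A compelling, translated description made up of multiple
      paragraphs goes here.
    </p>
  </description>
  <screenshots>
    <screenshot type="default">
      <image>https://example.com/gimp-screenshot-16x9.png</image>
    </screenshot>
  </screenshots>
  <url type="homepage">https://www.gimp.org/</url>
</component>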

August 31, 2016

New xserver driver sort order - evdev < libinput < (synaptics|wacom|...)

In the X server, the input driver assignment is handled by xorg.conf.d snippets. Each driver assigns itself to the type of devices it can handle, and the driver that actually gets loaded is simply the one whose snippet sorts last. Historically, we've had the evdev driver sort low and assign itself to everything. synaptics, wacom and the other few drivers that matter sorted higher than evdev and thus assigned themselves to the respective devices.
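
For reference, such a snippet is just an InputClass section. The touchpad one shipped by xf86-input-libinput looks roughly like this (exact match rules and file names vary between drivers and versions):

Section "InputClass"
        Identifier "libinput touchpad catchall"
        MatchIsTouchpad "on"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
EndSection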

When xf86-input-libinput first came out 2 years ago, we used a higher sort order than all other drivers to assign it to (almost) all devices. This was of course intentional because we believe that libinput is the best input stack around, the odd bug notwithstanding. Now it has matured a fair bit and we've had a lot more exposure to various types of hardware. We've been quirking and fixing things like crazy and libinput is much better for it.

Two things were an issue with this approach though. First, overriding xf86-input-libinput required manual intervention, usually either copying or symlinking an xorg.conf.d snippet. Second, even though we were overriding the default drivers, we still had them installed everywhere. Now it's time to start properly retiring the old drivers.

The upstream approach for this is fairly simple: the xf86-input-libinput xorg.conf.d snippet will drop in sort order to sit above evdev. evdev remains as the fallback driver for miscellaneous devices where evdev's blind "forward everything" approach is sufficient. All other drivers will sort higher than xf86-input-libinput and will thus override the xf86-input-libinput assignment. The new sort order is thus:

  • evdev
  • libinput
  • synaptics, wacom, vmmouse, joystick
evdev and libinput are generic drivers, the others are for specific devices or use-cases. To use a specific driver other than xf86-input-libinput, you now only have to install it. To fall back to xf86-input-libinput, you uninstall it. No more manual xorg.conf.d snippet symlinking.

This has an impact on distributions and users. Distributions should ensure that other drivers are never installed by default unless requested by the user (or some software). And users need to be aware that having a driver other than xf86-input-libinput installed may break things. For example, recent GNOME does not support the synaptics driver anymore; installing it will cause the control panel's touchpad bits to stop working. So there'll be a messy transition period but once things are settled, the solution to most input-related driver bugs will be "install/remove driver $foo" as opposed to the current symlink/copy/write an xorg.conf.d snippet.

August 26, 2016

radv: status update or is dota2 working yet?
Clickbait titles for the win!

First up, massive thanks to my major co-conspirator on radv, Bas Nieuwenhuizen, for putting in so much effort on getting radv going.

So where are we at?

Well this morning I finally found the last bug that was causing missing rendering on Dota 2. We were missing support for a compressed texture format that dota2 used. So currently dota 2 renders. I've no great performance comparison to post yet because my CPU is 5 years old and can barely get close to 30fps with GL or Vulkan. I think we know of a couple of places that could be bottlenecking us on the CPU side. The radv driver is currently missing hyper-z (90% done), fast color clears and DCC, which are all GPU side speedups in theory. Also running the phoronix-test-suite dota2 tests works sometimes, hangs in a thread lock sometimes, or crashes sometimes. I think we have some memory corruption somewhere that it collides with.

Other status bits: the Vulkan CTS test suite contains 114598 tests, and a piglit run a few hours before I fixed dota2 was at:
[114598/114598] skip: 50388, pass: 62932, fail: 1193, timeout: 2, crash: 83 - |/-\

So that isn't too bad a showing; we know some missing features account for some of the fails. A lot of the crashes are a single assert being hit in the CTS that I don't think indicates a real problem.

We render most of the Sascha Willems demos fine.

I've tested the Talos Principle as well. The texture fix renders a lot more stuff on the screen, but we are still seeing large chunks of blackness where I think there should be trees in-game; the menus etc. all seem to load fine.

All this work is on the semi-interesting branch of
https://github.com/airlied/mesa

It has only been tested on VI AMD GPUs so far. Polaris worked previously but something derailed it; we should fix that once we finish the bisect. CIK GPUs kinda work with the amdgpu kernel driver loaded. SI GPUs are nowhere yet.

Here's a screenshot:

Priorities in security
I read this tweet a couple of weeks ago:

and it got me thinking. Security research is often derided as unnecessary stunt hacking, proving insecurity in things that are sufficiently niche or in ways that involve sufficient effort that the realistic probability of any individual being targeted is near zero. Fixing these issues is basically defending you against nation states (who (a) probably don't care, and (b) will probably just find some other way) and, uh, security researchers (who (a) probably don't care, and (b) see (a)).

Unfortunately, this may be insufficient. As basically anyone who's spent any time anywhere near the security industry will testify, many security researchers are not the nicest people. Some of them will end up as abusive partners, and they'll have both the ability and desire to keep track of their partners and ex-partners. As designers and implementers, we owe it to these people to make software as secure as we can rather than assuming that a certain level of adversary is unstoppable. "Can a state-level actor break this" may be something we can legitimately write off. "Can a security expert continue reading their ex-partner's email" shouldn't be.

comment count unavailable comments

August 25, 2016

Summer Talks, PurpleEgg

I recently gave talks at Flock in Krakow and GUADEC in Karlsruhe:

Flock: What’s Fedora’s Alternative to vi httpd.conf Video Slides: PDF ODP
GUADEC: Reworking the desktop distribution Video Slides: PDF ODP

The topics were different but related: The Flock talk talked about how to make things better for a developer using Fedora Workstation as their development workstation, while the GUADEC talk was about the work we are doing to move Fedora to a model where the OS is immutable and separate from applications. A shared idea of the two talks is that your workstation is not your development environment. Installing development tools, language runtimes, and header files as part of your base operating system implies that every project you are developing wants the same development environment, and that simply is not the case.

At both talks, I demo’ed a small project I’ve been working on with the codename of PurpleEgg (I didn’t have that codename yet at Flock – the talk instead talks about “NewTerm” and “fedenv”.) PurpleEgg is about easily creating containerized environments dedicated to a project, and about integrating those projects into the desktop user interface in a natural, slick way.

The command line client to PurpleEgg is called pegg:

[otaylor@localhost ~]$ pegg create django mydjangosite
[otaylor@localhost ~]$ cd ~/Projects/mydjangosite
[otaylor@localhost mydjangosite]$  pegg shell
[[mydjangosite]]$ python manage.py runserver
August 24, 2016 - 19:11:36
Django version 1.9.8, using settings 'mydjangosite.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

The “pegg create” step did the following:

  • Created a directory ~/Projects/mydjangosite
  • Created a file pegg.yaml with the following contents:
base: fedora:24
packages:
- python3-virtualenv
- python3-django
  • Created a Docker image that is the Fedora 24 base image plus the specified packages
  • Created a venv/ directory in the specified directory and initialized a virtual environment there
  • Ran ‘django-admin startproject’ to create the standard Django project

“pegg shell” did the following:

  • Checked to see if the Docker image needed updating
  • Ran a bash prompt inside the Docker image with a customized prompt
  • Activated the virtual environment

The end result is that, without changing the configuration of the host machine at all, in a few simple commands we got to a place where we can work on a Django project just as it is documented upstream.

But with the PurpleEgg application installed, you get more: you get search results in the GNOME Activities Overview for your projects, and when you activate a search result, you see a window like:

PurpleEgg-screenshot

We have a terminal interface specialized for our project:

  • We already have the pegg environment activated
  • New tabs also open within that environment
  • The prompt is uncluttered with relevant information moved to the header bar
  • If the project is checked into Git, the header bar also tracks the Git branch

There’s a fair bit more that could be done: a GUI for creating and importing projects as in GNOME Builder, GUI integration for Vagrant and Docker, configuring frequently used commands in pegg.yaml, etc.

At the most basic, the idea is that server-side development is terminal-centric and also somewhat specialized – different languages and frameworks have different ways of doing things. PurpleEgg embraces working like that, but adds just enough conventions so that we can make things better for the developer – just because the developer wants a terminal doesn’t mean that all we can give them is a big pile of terminals.

PurpleEgg codedump is here. Not warrantied to be fit for any purpose.


August 24, 2016

Getting S3 Statistics using S3stat

I’ve been using Amazon S3 as a CDN for the LVFS metadata for a few weeks now. It’s been working really well and we’ve shifted a huge number of files in that time already. One thing that made me very anxious was the bill that I was going to get sent by Amazon, as it’s kinda hard to work out the total when you’re serving cough millions of small files rather than a few large files to a few people. I also needed to keep track of which files were being downloaded for various reasons and the Amazon tools make this needlessly tricky.

I signed up for the free trial of S3stat and so far I’ve been pleasantly surprised. It seems to do a really good job of graphing the spend per day and also allowing me to drill down into any areas that need attention, e.g. looking at the list of 404 codes various people are causing. It was fairly easy to set up, although it did take a couple of days to start processing logs (which is all explained in the setup). Amazon really should be providing something similar.

Screenshot from 2016-08-24 11-29-51

For people providing less than 200,000 hits per day it’s only $10, which seems pretty reasonable. For my use case (bazillions of small files) it rises to a little-harder-to-justify $50/month.

I can’t justify the $50/month for the LVFS, but luckily for me they have a Cheap Bastard Plan (their words, not mine!) which swaps a bit of advertising for a free unlimited license. Sounds like a fair swap, and means it’s available for a lot of projects where $600/yr is better spent elsewhere.

Devo Firmware Updating

Does anybody have a Devo RC transmitter I can borrow for a few weeks? I need model 6, 6S, 7E, 8, 8S, 10, 12, 12S, F7 or F12E — it doesn’t actually have to work, I just need the firmware upload feature for testing various things. Please reshare/repost if you’re in any UK RC groups that could help. Thanks!

August 18, 2016

Updating Firmware on 8Bitdo Game Controllers

I’ve spent a few days adding support for upgrading the firmware of the various wireless 8Bitdo controllers into fwupd. In my opinion, the 8Bitdo hardware is very well made and reasonably priced, and also really good retro fun.

Although they use a custom file format for firmware, and also use a custom flashing protocol (seriously hardware people, just use DFU!) it was quite straightforward to integrate into fwupd. I’ve created a few things to make this all work:

  • a small libebitdo library in fwupd
  • a small ebitdo-tool binary that talks to the device and can flash a vendor supplied .dat file
  • an ebitdo fwupd provider that uses libebitdo to flash the device
  • a firmware repo that contains all the extra metadata for the LVFS

I guess I need to thank the guys at 8Bitdo; after I asked a huge number of questions they open sourced their OS-X and Windows flashing tools, and also allowed me to distribute the firmware binary on the LVFS. Doing both of those things made it easy to support the hardware.

Screenshot from 2016-08-18 10-36-56

The result of all this is that you can now do fwupd update when the game-pad is plugged in using the USB cable (not just connected via bluetooth) and the firmware will be updated to the latest version. Updates will show in GNOME Software, and the world is one step closer to being awesome.

August 15, 2016

Preliminary systemd.conf 2016 Schedule

A Preliminary systemd.conf 2016 Schedule is Now Available!

We have just published a first, preliminary version of the systemd.conf 2016 schedule. There is still a small number of open slots in the schedule, because we're missing confirmation from a few presenters. The missing talks will be added in as soon as they are confirmed.

The schedule consists of 5 workshops by high-profile speakers during the workshop day, 22 exciting talks during the main conference days, followed by one full day of hackfests.

Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

August 11, 2016

Microsoft's compromised Secure Boot implementation
There's been a bunch of coverage of this attack on Microsoft's Secure Boot implementation, a lot of which has been somewhat confused or misleading. Here's my understanding of the situation.

Windows RT devices were shipped without the ability to disable Secure Boot. Secure Boot is the root of trust for Microsoft's User Mode Code Integrity (UMCI) feature, which is what restricts Windows RT devices to running applications signed by Microsoft. This restriction is somewhat inconvenient for developers, so Microsoft added support in the bootloader to disable UMCI. If you were a member of the appropriate developer program, you could give your device's unique ID to Microsoft and receive a signed blob that disabled image validation. The bootloader would execute a (Microsoft-signed) utility that verified that the blob was appropriately signed and matched the device in question, and would then insert it into an EFI Boot Services variable[1]. On reboot, the boot loader reads the blob from that variable and integrates that policy, telling later stages to disable code integrity validation.

The problem here is that the signed blob includes the entire policy, and so any policy change requires an entirely new signed blob. The Windows 10 Anniversary Update added a new feature to the boot loader, allowing it to load supplementary policies. These must also be signed, but aren't tied to a device id - the idea is that they'll be ignored unless a device-specific policy has also been loaded. This way you can get a single device-specific signed blob that allows you to set an arbitrary policy later by using a combination of supplementary policies.

This is all fine in the Anniversary Edition. Unfortunately older versions of the boot loader will happily load a supplementary policy as if it were a full policy, ignoring the fact that it doesn't include a device ID. The loaded policy replaces the built-in policy, so in the absence of a base policy a supplementary policy as simple as "Enable this feature" will effectively remove all other restrictions.

Unfortunately for Microsoft, such a supplementary policy leaked. Installing it as a base policy on pre-Anniversary Edition boot loaders will then allow you to disable all integrity verification, including in the boot loader. Which means you can ask the boot loader to chain to any other executable, in turn allowing you to boot a compromised copy of any operating system you want (not just Windows).

This does require you to be able to install the policy, though. The PoC released includes a signed copy of SecureBootDebug.efi for ARM, which is sufficient to install the policy on ARM systems. There doesn't (yet) appear to be a public equivalent for x86, which means it's not (yet) practical for arbitrary attackers to subvert the Secure Boot process on x86. I've been doing my testing on a setup where I've manually installed the policy, which isn't practical in an automated way.

How can this be prevented? Installing the policy requires the ability to run code in the firmware environment, and by default the boot loader will only load signed images. The number of signed applications that will copy the policy to the Boot Services variable is presumably limited, so if the Windows boot loader supported blacklisting second-stage bootloaders Microsoft could simply blacklist all policy installers that permit installation of a supplementary policy as a primary policy. If that's not possible, they'll have to blacklist the vulnerable boot loaders themselves. That would mean all pre-Anniversary Edition install media would stop working, including recovery and deployment images. That's, well, a problem. Things are much easier if the first case is true.

Thankfully, if you're not running Windows this doesn't have to be an issue. There are two commonly used Microsoft Secure Boot keys. The first is the one used to sign all third party code, including drivers in option ROMs and non-Windows operating systems. The second is used purely to sign Windows. If you delete the second from your system, Windows boot loaders (including all the vulnerable ones) will be rejected by your firmware, but non-Windows operating systems will still work fine.

From what we know so far, this isn't an absolute disaster. The ARM policy installer requires user intervention, so if the x86 one is similar it'd be difficult to use this as an automated attack vector[2]. If Microsoft are able to blacklist the policy installers without blacklisting the boot loader, it's also going to be minimally annoying. But if it's possible to install a policy without triggering any boot loader blacklists, this could end up being embarrassing.

Even outside the immediate harm, this is an interesting vulnerability. Presumably when the older boot loaders were written, Microsoft policy was that they would never sign policy files that didn't include a device ID. That policy changed when support for supplemental policies was added. Without this policy change, the older boot loaders could still be considered secure. Adding new features can break old assumptions, and your design needs to take that into account.

[1] EFI variables come in two main forms - those accessible at runtime (Runtime Services variables) and those only accessible in the early boot environment (Boot Services variables). Boot Services variables can only be accessed before ExitBootServices() is called, and in Secure Boot environments all code executing before this point is (theoretically) signed. This means that Boot Services variables are nominally tamper-resistant.

[2] Shim has explicit support for allowing a physically present machine owner to disable signature validation - this is basically equivalent

comment count unavailable comments
LVFS has a new CDN

Now that we’re hitting cough Cough COUGH1 million users a month the LVFS is getting slower and slower. It’s really just a flask app that’s handling the admin panel and then apache is serving a set of small files to a lot of people. As switching to an HA server is taking longer than I hoped2, I’m in the process of switching to using S3 as a CDN to take the load off. I’ve pushed a commit that changes the default in the fwupd.conf file. If you want to help test this, you can do a substitution of secure-lvfs.rhcloud.com to s3.amazonaws.com/lvfsbucket in /etc/fwupd.conf although the old CDN will be running for a long time indeed for compatibility.

  1. Various vendors have sworn me to secrecy
  2. I can’t believe GPGME and python-gpg is the best we have…
Flatpak cross-compilation support
A couple of weeks ago, I hinted at a presentation that I wanted to do during this year's GUADEC, as a Lightning talk.

Unfortunately, I didn't get a chance to finish the work that I set out to do, encountering a couple of bugs that set me back. Hopefully this will get resolved post-GUADEC, so you can expect some announcements later on in the year.

At least one of the tasks I set out to do worked out, and was promptly obsoleted by a nicer solution. Let's dive in.

How to compile for a different architecture

There are four possible solutions to compile programs for a different architecture:

  • Native compilation: get a machine of that architecture, install your development packages, and compile. This is nice when you have fast machines with plenty of RAM to compile on, usually developer boards; not so good when you target low-power devices.
  • Cross-compilation: install a version of GCC and friends that runs on your machine's architecture, but produces binaries for your target one. This is usually fast, but you won't be able to run the binaries created, so you might end up with some data created from a different set of options, and you won't be able to run the generated test suite.
  • Virtual Machine: you'd run a virtual machine for the target architecture, install an OS, and build everything. This is slower than cross-compilation, but avoids the problems you'd see in cross-compilation.
The final option is one that's used more and more, mixing the last 2 solutions: the QEmu user-space emulator.

Using the QEMU user-space emulator

If you want to run just the one command, you'd do something like:

qemu-arm-static myarmbinary

Easy enough, but hardly something you want to try when compiling a whole application, with library dependencies. This is where binfmt support in Linux comes into play. Register the ELF format for your target with that user-space emulator, and you can run myarmbinary without any commands before it.
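
The registration itself goes through binfmt_misc. Conceptually it looks like the line below; the actual magic and mask byte strings for each architecture come from QEmu (see its qemu-binfmt-conf.sh script), so treat those fields as placeholders:

# echo ':qemu-arm:M::<arm-elf-magic>:<mask>:/usr/bin/qemu-arm-static:' > /proc/sys/fs/binfmt_misc/register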

One thing to note though, is that this won't work as easily if the qemu user-space emulator and the target executable are built as dynamic executables: QEmu will need to find the libraries for your architecture, usually x86-64, to launch itself, and the emulated binary will also need to find its libraries.

To solve that first problem, there are QEmu static binaries available in a number of distributions (Fedora support is coming). For the second one, the easiest would be if we didn't have to mix native and target libraries on the filesystem, in a chroot, or container for example. Hmm, container you say.

Running QEmu user-space emulator in a container

We have our statically compiled QEmu, and a filesystem with our target binaries, and switched the root filesystem. Well, you try to run anything, and you get a bunch of errors. The problem is that there is a single binfmt configuration for the kernel, whether it's the normal OS, or inside a container or chroot.

The Flatpak hack

This commit for Flatpak works-around the problem. The binary for the emulator needs to have the right path, so it can be found within the chroot'ed environment, and it will need to be copied there so it is accessible too, which is what this patch will do for you.

Follow the instructions in the commit, and test it out with this Flatpak script for GNU Hello.

$ TARGET=arm ./build.sh
[...]
$ ls org.gnu.hello.arm.xdgapp
918k org.gnu.hello.arm.xdgapp

Ready to install on your device!

The proper way

The above solution was built before it looked like the "proper way" was going to find its way into the upstream kernel. This should hopefully land in the upcoming 4.8 kernel.

Instead of launching a separate binary for each non-native invocation, this patchset allows the kernel to keep the binary open, so it doesn't need to be copied to the container.

In short

With the work being done on Fedora's static QEmu user-space emulators, and the kernel feature that will land, we should be able to have a nice tickbox in Builder to build for any of the targets supported by QEmu.

Get cross-compiling!
Adding suggestions to AppData files

An oft-requested feature is to show suggestions for other apps to install. This is useful if the apps are part of a larger suite of applications, or if the apps are in some way complementary to each other. A good example might be that we want to recommend libreoffice-writer when the user is looking at the details of (or perhaps has just installed) libreoffice-calc.

At the moment we’ve not got any UI using this kind of data, as, simply put, there isn’t much data to use. Using the ODRS I can kinda correlate things that the same people look at (i.e. user A got reviews for B and C, so B+C are possibly related) but it’s not as good as actual upstream information.

Those familiar with my history will be unsurprised: AppData to the rescue! By adding lines like this in the foo.appdata.xml file you can provide some information to the software center:

<suggests>
<id>libreoffice-draw.desktop</id>
<id>libreoffice-calc.desktop</id>
</suggests>

You don’t have to specify the parent app (e.g. libreoffice-writer.desktop in this case), and <id> is the only tag that’s accepted inside <suggests>. If the ID isn’t found in the AppStream metadata then it’s just ignored, so it’s quite safe to add things that might not be in stable distros.

If enough upstreams do this then we can look at what UI makes sense. If you make use of this feature, please let me know and I can make sure we discuss the use-case in the design discussions.

August 09, 2016

Blog backlog, Post 4, Headset fixes for Dell machines
At the bottom of the release notes for GNOME 3.20, you might have seen the line:
If you plug in an audio device (such as a headset, headphones or microphone) and it cannot be identified, you will now be asked what kind of device it is. This addresses an issue that prevented headsets and microphones being used on many Dell computers.
Before I start explaining what this does, as a picture is worth a thousand words:


This selection dialogue is one you will get on some laptops and desktop machines when the hardware is not able to detect whether the plugged in device is headphones, a microphone, or a combination of both, probably because it doesn't have an impedance detection circuit to figure that out.

This functionality was integrated into Unity's gnome-settings-daemon version a couple of years ago, written by David Henningsson.

The code that existed for this functionality was completely independent, not using any of the facilities available in the media-keys plugin for volume keys, and it could probably have been split out as an external binary with very little effort.

After a bit of to and fro, most of the sound backend functionality was merged into libgnome-volume-control, leaving just 2 entry points, one to signal that something was plugged into the jack, and another to select which type of device was plugged in, in response to the user selection. This means that the functionality should be easily implementable in other desktop environments that use libgnome-volume-control to interact with PulseAudio.

Many thanks to David Henningsson for the original code, and his help integrating the functionality into GNOME, Bednet for providing hardware to test and maintain this functionality, and Allan, Florian and Rui for working on the UI notification part of the functionality, and wiring it all up after I abandoned them to go on holidays ;)

August 05, 2016

libinput and disable-while-typing

A common issue with users typing on a laptop is that the user's palms will inadvertently get in contact with the touchpad at some point, causing the cursor to move and/or click. In the best case it's annoying, in the worst case you're now typing your password into the newly focused twitter application. While this provides some general entertainment and thus makes the world a better place for a short while, here at the libinput HQ [1] we strive to keep life as boring as possible and avoid those situations.

The best way to avoid accidental input is to detect palm touches and simply ignore them. That works ok-ish on some touchpads and fails badly on others. Lots of hardware is barely able to provide an accurate touch location, let alone enough information to decide whether a touch is a palm. libinput's palm detection largely works by using areas on the touchpad that are likely to be touched by the palms.

The second-best way to avoid accidental input is to disable the touchpad while a user is typing. The libinput marketing department [2] has decided to name this feature "disable-while-typing" (DWT) and it's been in libinput for quite a while. In this post I'll describe how exactly DWT works in libinput.

Back in the olden days of roughly two years ago we all used the synaptics X.Org driver and were happy with it [3]. Disable-while-typing was featured there through the use of a tool called syndaemon. This synaptics daemon [4] has two modes. One is to poll the keyboard state every few milliseconds and check whether a key is down. If so, syndaemon sends a command to the driver to tell it to disable itself. After a timeout, when the keyboard state is neutral again, syndaemon tells the driver to re-enable itself. This causes a lot of wakeups, especially during those 95% of the time when the user isn't actually typing. It also misses keys if the press + release occurs between two polls. Hence the second mode, using the RECORD extension, where syndaemon opens a second connection to the X server and checks for key events [5]. If it sees one float past, it tells the driver to disable itself, and so on and so forth. Either way, you had a separate process doing that job. syndaemon had a couple of extra options and features that I'm not going to discuss here, but we've replicated the useful ones in libinput.
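
From memory, a typical invocation looked something like this (-d daemonizes, -i sets the idle time in seconds, -K ignores modifier combos, -R selects the RECORD mode; check syndaemon(1) for the details):

syndaemon -d -i 1.0 -K -R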

libinput has no external process, DWT is integrated into the library with a couple of smart extra features. This is made easier by libinput controlling all the devices, so all keyboard events are passed around internally to the touchpad backend. That backend then decides whether it should stop sending events. And this is where the interesting bits come in.

First, we have different timeouts: if you only hit a single key, the touchpad will re-enable itself quicker than after a period of typing. So if you're using the touchpad and hit a key to trigger some UI, the pointer only stops moving for a very short time. But once you type, the touchpad disables itself longer. Since your hand is now in a position over the keyboard, moving back to the touchpad takes time anyway so a longer timeout doesn't hurt. And as typing is interrupted by pauses, a longer timeout bridges over those to avoid accidental movement of the cursor.

Second, any touch started while you were typing is permanently ignored, so it's safe to rest the palm on the touchpad while typing and leave it there. But we keep track of the start time of each touch, so any touch started after the last key event will work normally once the DWT timeout expires. You may feel a short delay but it should be well within the acceptable range of a few tens of ms.
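
In pseudo-C, the combination of those first two rules boils down to something like this sketch (names and structure are illustrative, not libinput's actual code; the timeout passed in would be the short or long one depending on whether the user was typing):

#include <stdbool.h>
#include <stdint.h>

struct touch {
    uint64_t start_time;      /* when this touch began */
    bool while_typing;        /* set if it began during typing */
};

static bool event_is_ignored(const struct touch *t, uint64_t now,
                             uint64_t last_key_time, uint64_t timeout)
{
    if (t->while_typing)
        return true;                        /* permanently ignored */
    return now < last_key_time + timeout;   /* still inside the DWT window */
}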

Third, libinput is smart enough to detect which keyboard to pair with. If you have an external touchpad like the Apple Magic Trackpad or a Logitech T650, DWT will never enable on those. Likewise, typing on an external keyboard won't disable the internal touchpad. And in the rare case of two internal touchpads [6], both of them will do the right thing. As of systemd v231 the information of whether a touchpad is internal or external is available in the ID_INPUT_TOUCHPAD_INTEGRATION udev tag and thus available to everyone, not just libinput.

Finally, modifier keys are ignored for DWT, so using the touchpad to do shift-clicks works unimpeded. This also goes for the F-Key row and the numpad if you have any. These keys are usually out of the range of the touchpad anyway so interference is not an issue here. As of today, modifier key combos work too. So hitting Ctrl+S to save a document won't disable the touchpad (or any other modifiers + key combination). But once you are typing DWT activates and if you now type Shift+S to type the letter 'S' the touchpad remains disabled.

So in summary: what we've gained from switching to libinput is one external process less that causes wakeups and the ability to be a lot smarter about when we disable the touchpad. Coincidentally, libinput has similar code to avoid touchpad interference when the trackpoint is in use.

[1] that would be me
[2] also me
[3] uphill, both ways, snow, etc.
[4] nope. this one wasn't my fault
[5] Yes, syndaemon is effectively a keylogger, except it doesn't do any of the "logging" bit a keylogger would be expected to do to live up to its name
[6] This currently happens on some Dell laptops using hid-i2c. We get two devices, one named "DLL0704:01 06CB:76AE Touchpad" or similar and one "SynPS/2 Synaptics TouchPad". The latter one will never send events unless hid-i2c is disabled in the kernel

July 27, 2016

FINAL REMINDER! systemd.conf 2016 CfP Ends on Monday!

Please note that the systemd.conf 2016 Call for Participation ends on Monday, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are very interested in yours, too!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions, that may cover any systemd-related topic you'd like. We are both interested in submissions from the developer community as well as submissions from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

ALSO: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

July 21, 2016

Using modern gettext

gettext has seen quite some enhancements in recent years, after Daiki Ueno started maintaining it. It can now extract (and merge back) strings from diverse file formats, including many of the formats that are important for desktop applications. With gettext 0.19.8, there is really no need anymore to use intltool or GLib’s dated gettext glue (AM_GLIB_GNU_GETTEXT and glib-gettextize).

Since intltool still sticks around in quite a few projects, I thought that I should perhaps explain some of the dos and don’ts for how to get by with plain gettext. Javier Jardon has been tirelessly fighting a battle for using upstream gettext; maybe this will help him reach the finish line with that project.

Extracting strings

xgettext is the tool used to extract strings from sources into .pot files.

In addition to programming languages such as C, C++, Java, Scheme, etc, it recognizes the following files by their typical file extensions (and it is smart enough to disregard a .in extension):

    • Desktop files: .desktop
    • GSettings schemas: .gschema.xml
    • GtkBuilder ui files: .ui
    • Appdata files: .appdata.xml and .metainfo.xml

You can just add these files to POTFILES.in, without the extra type hints that intltool requires.

One important advantage of xgettext’s xml support, compared to intltool, is that you can install .in files that are valid; no more tag mutilation like <_description> required.

Merging translations

The trickier part is merging translations back into the various file formats. Sometimes that is not necessary, since the file has a reference to the gettext domain, and consumers know to use gettext at runtime: that is the case for GSettings schemas and GtkBuilder ui files, for example.

But in other cases, the translations need to be merged back into  the original file before installing it. In these cases, the original file from which the strings are extracted often has an extra .in extension. The tool that is doing this task is msgfmt.

Intltool installs autotools glue which can define make rules for many of these cases, such as @INTLTOOL_DESKTOP_RULE@. Gettext does not provide this kind of glue, but the msgfmt tool is versatile enough that you can write your own rules fairly easily, for example:

%.desktop: %.desktop.in
        msgfmt --desktop -d $(top_srcdir)/po \
               --template $< -o $@
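
The same approach works for the xml-based formats; for example, a rule for appdata files might look like this (assuming a gettext new enough to understand --xml):

%.appdata.xml: %.appdata.xml.in
        msgfmt --xml -d $(top_srcdir)/po \
               --template $< -o $@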

Extending gettext

Gettext can be extended to understand new xml formats. To do so, you install .its and .loc files. The syntax for these files is explained in detail in the gettext docs. Libraries are expected to install these files for ‘their’ formats (GLib and GTK+ already do, and PolicyKit will do the same soon).

If you don’t want to wait for your favorite format to come with built-in its support, you can also include the .its files with your application; gettext will look for such files in $XDG_DATA_DIRS/gettext/its/.

 

July 20, 2016

libinput is done

Don't panic. Of course it isn't. Stop typing that angry letter to the editor and read on. I just picked that title because it's clickbait and these days that's all that matters, right?

With the release of libinput 1.4 and its newest feature, tablet pad mode switching, we've now finished the TODO list we had when libinput was first conceived. Let's see what we have in libinput right now:

  • keyboard support (actually quite boring)
  • touchscreen support (actually quite boring too)
  • support for mice, including middle button emulation where needed
  • support for trackballs including the ability to use them rotated and to use button-based scrolling
  • touchpad support, most notably:
    • proper multitouch support on touchpads [1]
    • two-finger scrolling and edge scrolling
    • tapping, tap-to-drag and drag-lock (all configurable)
    • pinch and swipe gestures
    • built-in palm and thumb detection
    • smart disable-while-typing without the need for an external process like syndaemon
    • more predictable touchpad behaviours because everything is based on physical units [2]
    • a proper API to allow for kinetic scrolling on a per-widget basis
  • tracksticks work with middle button scrolling and communicate with the touchpad where needed
  • tablet support, most notably:
    • each tool is a separate entity with its own capabilities
    • the pad itself is a separate entity with its own capabilities and events
    • mode switching is exported by the libinput API and should work consistently across callers
  • a way to identify if multiple kernel devices belong to the same physical device (libinput device groups)
  • a reliable test suite
  • Documentation!
The side-effect of libinput is that we are also trying to fix the rest of the stack where appropriate. Mostly this has meant pushing stuff into systemd/udev so far, with the odd kernel fix as well. Specifically, the udev bits mean we
  • know the DPI density of a mouse
  • know whether a touchpad is internal or external
  • fix up incorrect axis ranges on absolute devices (mostly touchpads)
  • try to set the trackstick sensitivity to something sensible
  • know when the wheel click is less/more than the default 15 degrees
And of course, the whole point of libinput is that it can be used from any Wayland compositor and take away most of the effort of implementing an input stack. GNOME, KDE and enlightenment already use libinput, and so does Canonical's Mir. And some distributions use libinput as the default driver in X through xf86-input-libinput (Fedora 22 was the first to do this). So overall libinput is already quite a success.

The hard work doesn't stop of course, there are still plenty of areas where we need to be better. And of course, new features come as HW manufacturers bring out new hardware. I already have touch arbitration on my todo list. But it's nice to wave at this big milestone as we pass it on the way to the glorious future of perfect, bug-free input. At this point, I'd like to extend my thanks to all our contributors: Andreas Pokorny, Benjamin Tissoires, Caibin Chen, Carlos Garnacho, Carlos Olmedo Escobar, David Herrmann, Derek Foreman, Eric Engestrom, Friedrich Schöller, Gilles Dartiguelongue, Hans de Goede, Jackie Huang, Jan Alexander Steffens (heftig), Jan Engelhardt, Jason Gerecke, Jasper St. Pierre, Jon A. Cruz, Jonas Ådahl, JoonCheol Park, Kristian Høgsberg, Krzysztof A. Sobiecki, Marek Chalupa, Olivier Blin, Olivier Fourdan, Peter Frühberger, Peter Hutterer, Peter Korsgaard, Stephen Chandler Paul, Thomas Hindoe Paaboel Andersen, Tomi Leppänen, U. Artie Eoff, Velimir Lisec.

Finally: libinput was started by Jonas Ådahl in late 2013, so it's already over 2.5 years old. And the git log shows we're approaching 2000 commits and a simple LOCC says over 60000 lines of code. I would also like to point out that the vast majority of commits were done by Red Hat employees, I've been working on it pretty much full-time since 2014 [3]. libinput is another example of Red Hat putting money, time and effort into the less press-worthy plumbing layers that keep our systems running. [4]

[1] Ironically, that's also the biggest cause of bugs because touchpads are terrible. synaptics still only does single-finger with a bit of icing and on bad touchpads that often papers over hardware issues. We now do that in libinput for affected hardware too.
[2] The synaptics driver uses absolute numbers, mostly based on the axis ranges for Synaptics touchpads making them unpredictable or at least different on other touchpads.
[3] Coincidentally, if you see someone suggesting that input is easy and you can "just do $foo", their assumptions may not match reality
[4] No, Red Hat did not require me to add this. I can pretty much write what I want in this blog and these opinions are my own anyway and don't necessary reflect Red Hat yadi yadi ya. The fact that I felt I had to add this footnote to counteract whatever wild conspiracy comes up next is depressing enough.

July 19, 2016

radv: initial hacking on a vulkan driver for AMD VI GPUs
(email sent to mesa-devel list).

I was waiting for an open source driver to appear when I realised I should really just write one myself. Some talking with Bas later, and we decided to see where we could get.

This is the point at which we were willing to show it to others; it's not really a vulkan driver yet, so far it's a vulkan triangle demos driver.

It renders the tri and cube demos from the vulkan loader,
and the triangle demo from Sascha Willems demos
and the Vulkan CTS smoke tests (all 4 of them, one of which draws a triangle).

There is a lot of work to do, and it's at the stage where we are seeing if anyone else wants to join in at the start, before we make too many serious design decisions or take a path we really don't want to.

So far it's only been run on Tonga and Fiji chips I think. We are hoping to support the radeon kernel driver for SI/CIK at some point, but I think we need to get things a bit further on VI chips first.

The code is currently here:
https://github.com/airlied/mesa/tree/semi-interesting

There is a not-interesting branch which contains all the pre-history which might be useful for someone else bringing up a vulkan driver on other hardware.

The code is pretty much based on the Intel anv driver, with the winsys ported from the gallium driver,
and most of the state setup from there. Bas wrote the code to connect NIR<->LLVM IR so we could reuse it in the future for SPIR-V in GL if required. It also copies AMD addrlib over (this should be shared).

Also we don't do SPIR-V->LLVM direct. We use NIR as it has the best chance for inter shader stage optimisations (vertex/fragment combined) which neither SPIR-V nor LLVM handles for us (nir doesn't do it yet but it can).

If you want to submit bug reports, they will only be taken seriously if accompanied by working patches at this stage, and we've no plans to merge to master yet, but open to discussion on when we could do that and what would be required.

GUADEC Flatpak contest
I will be presenting a lightning talk during this year's GUADEC, and running a contest related to what I will be presenting.

Contest

To enter the contest, you will need to create a Flatpak for a piece of software that hasn't been flatpak'ed up to now (application, runtime or extension), hosted in a public repository.

You will have to send me an email about the location of that repository.

I will choose a winner amongst the participants, on the eve of the lightning talks, based on criteria including (but not limited to) the difficulty of packaging, the popularity of the software packaged, and its potential for redistribution.

You can find plenty of examples (and a list of already packaged applications and runtimes) on this Wiki page.

Prize

A piece of hardware that you can use to replicate my presentation (or to replicate my attempts at a presentation, depending ;). You will need to be present during my presentation at GUADEC to claim your prize.

Good luck to one and all!

July 18, 2016

REMINDER! systemd.conf 2016 CfP Ends in Two Weeks!

Please note that the systemd.conf 2016 Call for Participation ends in less than two weeks, on Aug. 1st! Please send in your talk proposal by then! We’ve already got a good number of excellent submissions, but we are interested in yours even more!

We are looking for talks on all facets of systemd: deployment, maintenance, administration, development. Regardless of whether you use it in the cloud, on embedded, on IoT, on the desktop, on mobile, in a container or on the server: we are interested in your submissions!

In addition to proposals for talks for the main conference, we are looking for proposals for workshop sessions held during our Workshop Day (the first day of the conference). The workshop format consists of a day of 2-3h training sessions that may cover any systemd-related topic you'd like. We are interested in submissions both from the developer community and from organizations making use of systemd! Introductory workshop sessions are particularly welcome, as the Workshop Day is intended to open up our conference to newcomers and people who aren't systemd gurus yet, but would like to become more fluent.

For further details on the submissions we are looking for and the CfP process, please consult the CfP page and submit your proposal using the provided form!

And keep in mind:

REMINDER: Please sign up for the conference soon! Only a limited number of tickets are available, hence make sure to secure yours quickly before they run out! (Last year we sold out.) Please sign up here for the conference!

AND OF COURSE: We are also looking for more sponsors for systemd.conf! If you are working on systemd-related projects, or make use of it in your company, please consider becoming a sponsor of systemd.conf 2016! Without our sponsors we couldn't organize systemd.conf 2016!

Thank you very much, and see you in Berlin!

July 15, 2016

Why synclient does not work anymore

More and more distros are switching to libinput by default. That's a good thing, but one side-effect is that the synclient tool does not work anymore [1]; it just complains that "Couldn't find synaptics properties. No synaptics driver loaded?"

What is synclient? A bit of history first. Many years ago the only way to configure input devices was through xorg.conf options, there was nothing that allowed for run-time configuration. The Xorg synaptics driver found a solution to that: the driver would initialize a shared memory segment that kept the configuration options and a little tool, synclient (synaptics client), would know about that segment. Calling synclient with options would write to that SHM segment and thus toggle the various options at runtime. Driver and synclient had to be of the same version to know the layout of the segment and it's about as secure as you expect it to be. In 2008 I added input device properties to the server (X Input Extension 1.5 and it's part of 2.0 as well of course). Rather than the SHM segment we now had a generic API to talk to the driver. The API is quite simple, you effectively have two keys (device ID and property number) and you can set any value(s). Properties literally support just about anything but drivers restrict what they allow on their properties and which value maps to what. For example, to enable left-finger tap-to-click in synaptics you need to set the 5th byte of the "Synaptics Tap Action" property to 1.
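To give you an idea of how small that API is, here's a rough C sketch of the tap-to-click example. It's illustrative only: the six other values in the array are placeholders, and a real client would fetch the current values with XIGetProperty() first rather than hard-coding them.

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/XInput2.h>

/* Set the 5th byte of "Synaptics Tap Action" to 1 (tap-to-click).
 * The other six values are placeholders; a real client would read
 * the current values with XIGetProperty() and change only one. */
static void
enable_tap_to_click (Display *dpy, int deviceid)
{
    Atom prop = XInternAtom (dpy, "Synaptics Tap Action", False);
    unsigned char values[7] = { 0, 0, 0, 0, 1, 0, 0 };

    XIChangeProperty (dpy, deviceid, prop, XA_INTEGER,
                      8 /* format */, PropModeReplace, values, 7);
    XFlush (dpy);
}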

xinput, a commandline tool and debug helper, has a generic API to change those properties so you can do things like xinput set-prop "device name" "property name" 1 [2]. It does a little bit under the hood but generally it's pretty stupid. You can run xinput set-prop and try to set a value that's out of range, or try to switch from int to float, or just generally do random things.

We were able to keep backwards compatibility in synclient, so where before it would use the SHM segment it would now use the property API, without the user interface changing (except the error messages are now standard Xlib errors complaining about BadValue, BadMatch or BadAccess). But synclient and xinput use the same API to talk to the server and the server can't tell the difference between the two.

Fast forward 8 years and now we have libinput, wrapped by the xf86-input-libinput driver. That driver does the same as synaptics: the config toggles are exported as properties and xinput can read and change them. Because really, you do the smart work by selecting the right property names and values and xinput just passes on the data. But synclient is broken now, simply because it requires the synaptics driver and won't work with anything else. It checks for a synaptics-specific property ("Synaptics Edges") and if that doesn't exist it complains with "Couldn't find synaptics properties. No synaptics driver loaded?". libinput doesn't initialise that property, it has its own set of properties. We did look into whether it's possible to have property-compatibility with synaptics in the libinput driver but it turned out to be a huge effort, flaky reliability at best (not all synaptics options map into libinput options and vice versa) and the benefit was quite limited. Because, as we've been saying since about 2009, your desktop environment should take over configuration of input devices; hand-written scripts are dodo-esque.

So if you must insist on shell scripts to configure your input devices, use xinput instead. synclient is like fsck.ext2: on that glorious day you switch to btrfs it won't work, because it was only ever designed with one purpose in mind.

[1] Neither does syndaemon, btw, but its functionality is built into libinput so that doesn't matter.
[2] xinput set-prop --type=int --format=32 "device name" "hey I have a banana" 1 2 3 4 5 6 and congratulations, you've just created a new property for all X clients to see. It doesn't do anything, but you could use those to attach info to devices. If anything was around to read that.

July 14, 2016

xinput resolves device names and property names

xinput is a commandline tool to change X device properties. Specifically, it's a generic interface to change X input driver configuration at run-time, used primarily in the absence of a desktop environment or just for testing things. But there's a feature of xinput that many don't appear to know: it resolves device and property names correctly. So plenty of times you see advice to run a command like this:


xinput set-prop 15 281 1
This is bad advice: it's almost impossible to figure out what this is supposed to do, it depends on the device ID never changing (spoiler: it will) and the property number never changing (spoiler: it will). Worst case, you may suddenly end up setting a different property on a different device and you won't even notice. Instead, just use the built-in name resolution features of xinput:

xinput set-prop "SynPS/2 Synaptics TouchPad" "libinput Tapping Enabled" 1
This command will work regardless of the device ID for the touchpad and regardless of the property number. Plus it's self-documenting. This has been possible for many many years, so please stop using the number-only approach.

July 11, 2016

libinput and graphics tablet mode support

In an earlier post, I explained how we added graphics tablet pad support to libinput. Read that article first, otherwise this article here will be quite confusing.

A lot of tablet pads have mode-switching capabilities. Specifically, they have a set of LEDs, and pressing one of the buttons cycles the LEDs. Software is expected to map the ring, strip or buttons to different functionality depending on the mode. A common configuration for a ring or strip would be to send scroll events in mode 1 but zoom in/out when in mode 2. On the Intuos Pro series tablets, that mode switch button is the one in the center of the ring. On the Cintiq 21UX2 there are two sets of buttons, one on the left and one on the right, each with its own mode toggle button. The Cintiq 24HD is even more special: it has three separate buttons on each side to switch to a mode directly (rather than just cycling through the modes).

In the upcoming libinput 1.4 we will have mode switching support, though modes themselves have no real effect within libinput; they are merely extra information to be used by the caller. The important terms here are "mode" and "mode group". A mode is a logical set of button, strip and ring functions, as interpreted by the compositor or the client, and how they are used is up to them as well. The Wacom control panels for OS X and Windows allow mode assignment only to the strip and rings, while the buttons remain in the same mode at all times. We assign a mode to each button so a caller may provide differing functionality on each button. But that's optional: having an OS X/Windows-style configuration is easy, just ignore the button modes.

A mode group is a physical set of buttons, strips and rings that belong together. On most tablets there is only one mode group but tablets like the Cintiq 21UX2 and the 24HD have two independently controlled mode groups - one left and one right. That's all there is to mode groups, modes are a function of mode groups and can thus be independently handled. Each button, ring or strip belongs to exactly one mode group. And finally, libinput provides information about which button will toggle modes or whether a specific event has toggled the mode. Documentation and a starting point for which functions to look at is available in the libinput documentation.
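As a rough sketch of how a caller might use this, here's the ring example from above in C. do_scroll() and do_zoom() are hypothetical compositor functions; the libinput calls are the ones the documentation points at.

#include <libinput.h>

/* Sketch: routing a ring event by its mode, as described above.
 * do_scroll() and do_zoom() are hypothetical compositor functions. */
static void
handle_pad_ring (struct libinput_event_tablet_pad *event)
{
    unsigned int mode = libinput_event_tablet_pad_get_mode (event);
    double pos = libinput_event_tablet_pad_get_ring_position (event);

    if (pos < 0.0)
        return; /* finger was lifted off the ring */

    if (mode == 0)
        do_scroll (pos); /* "mode 1" above: scrolling */
    else
        do_zoom (pos);   /* "mode 2" above: zoom in/out */
}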

Mode switching on Wacom tablets is actually software-controlled. The tablet relies on some daemon running to intercept button events and write to the right sysfs files to toggle the LEDs. In the past this was handled by e.g. a callout by gnome-settings-daemon. The first libinput draft implementation took over that functionality, so we only have one process to handle the events. But there are a few issues with that approach. First, we need write access to the sysfs file that exposes the LED. Second, running multiple libinput instances would result in conflicts during LED access. Third, the sysfs interface is decidedly nonstandard and quite quirky to handle. And fourth, the most recent device, the Express Key Remote, has hardware-controlled LEDs.

So instead we opted for a two-part solution: first, the non-standard sysfs interface will be deprecated in favour of a proper kernel LED interface (/sys/class/leds/...) with the same contents as other LEDs. And second, the kernel will take over mode switching using LED triggers that are set up to cover the most common case: hitting a mode toggle button changes the mode. Benjamin Tissoires is currently working on those patches. Until then, libinput's backend implementation will just pretend that each tablet only has one mode group with a single mode. This allows us to get the rest of the user stack in place and then, once the kernel patches are in a released kernel, switch over to the right backend.

July 10, 2016

How GNOME Software uses libflatpak

It seems people are interested in adding support for flatpaks into other software centers, so I thought it might be useful to explain how I did this in gnome-software. I’m lucky enough to have a plugin architecture that lets all the flatpak code be self-contained in one file, but that’s certainly not a requirement.

Flatpak generates AppStream metadata when you build desktop applications. This means it’s possible to use appstream-glib and a few tricks to just load all the enabled remotes into an existing system store. This makes searching the new applications using the (optionally stemmed) token cache trivial. Once per day gnome-software checks the age of the AppStream cache, and if required downloads a new copy using flatpak_installation_update_appstream_sync(). As if by magic, appstream-glib notices the file modification/creation and updates the internal AsStore with the new applications.

When listing the installed applications, a simple call to flatpak_installation_list_installed_refs() returns us the list we need, on which we can easily set other flatpak-specific data like the runtime. This is matched against the AppStream data, which gives us a localized and beautiful application to display in the listview.

At this point we also call flatpak_installation_list_installed_refs_for_update() and then do flatpak_installation_update() with the NO_DEPLOY flag set. This just downloads the data we need, and can be cancelled without anything bad happening. When populating the updates panel I can just call flatpak_installation_list_installed_refs() again to find installed applications that have downloaded updates ready to apply without network access.
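In rough code, that pre-download step looks something like this. This is a sketch with error handling elided; the real plugin does more bookkeeping around it.

/* A sketch of the NO_DEPLOY pre-download described above;
 * error handling is elided for brevity. */
g_autoptr(GPtrArray) updates =
        flatpak_installation_list_installed_refs_for_update (installation,
                                                             cancellable, &error);
for (guint i = 0; i < updates->len; i++) {
        FlatpakRef *ref = g_ptr_array_index (updates, i);
        g_autoptr(FlatpakInstalledRef) new_ref =
                flatpak_installation_update (installation,
                                             FLATPAK_UPDATE_FLAGS_NO_DEPLOY,
                                             flatpak_ref_get_kind (ref),
                                             flatpak_ref_get_name (ref),
                                             flatpak_ref_get_arch (ref),
                                             flatpak_ref_get_branch (ref),
                                             NULL, NULL, /* no progress callback */
                                             cancellable, &error);
}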

For the sources list I’m calling flatpak_installation_list_remotes() then ignoring any set as disabled or noenumerate. Most remotes have a name and title, and this makes the UI feature complete. When collecting information to show in the UI, like the size, we have the metadata already, but we also add the size of the runtime if it’s not already installed. This is the same idea as flatpak_installation_install(), where we also install any required runtime when installing the main application. There is a slight impedance mismatch between the flatpak many-installed-versions and the AppStream only-one-version model, but it seems to work well enough in the current code. Flatpak splits the deployment into a runtime containing common libraries that can be shared between apps (for instance, GNOME 3.20 or KDE5) and the application itself, so the software center always needs to install the runtime for the application to launch successfully; this is something that is not enforced by the CLI tool. Rather than installing everything for each app, we can also install other so-called extensions. These are typically non-essential, like the various translations and any debug information, but are not strictly limited to those things. libflatpak automatically keeps the extensions up to date when updating, so gnome-software doesn’t have to do anything special at all.

Updating single applications is trivial with flatpak_installation_update() and launching applications is just as easy with flatpak_installation_launch(), although we only support launching the newest installed version of an application at the moment. Reading local bundles works well with flatpak_bundle_ref_new(), although we do have to load the gzipped AppStream metadata and the icon ourselves. Reading a .flatpakrepo file is slightly more work, but the data is in keyfile format and trivial to parse with GKeyFile.
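For instance, pulling the basics out of a .flatpakrepo file is just a few GKeyFile calls. A sketch, with the filename as a placeholder and error handling elided:

/* Sketch: reading the interesting bits of a .flatpakrepo keyfile. */
g_autoptr(GKeyFile) kf = g_key_file_new ();
g_autofree gchar *title = NULL;
g_autofree gchar *url = NULL;

g_key_file_load_from_file (kf, "example.flatpakrepo", G_KEY_FILE_NONE, NULL);
title = g_key_file_get_string (kf, "Flatpak Repo", "Title", NULL);
url = g_key_file_get_string (kf, "Flatpak Repo", "Url", NULL);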

Overall I’ve found libflatpak to be surprisingly easy to work with, requiring none of the kludges of all the different package-based systems I’ve worked with while developing PackageKit. Full marks to Alex et al.

July 09, 2016

"I recieved a free or discounted product in return for an honest review"
My experiences with Amazon reviewing have been somewhat unusual. A review of a smart switch I wrote received enough attention that the vendor pulled the product from Amazon. At the time of writing, I'm ranked as around the 2750th best reviewer on Amazon despite having a total of 18 reviews. But the world of Amazon reviews is even stranger than that, and the past couple of weeks have given me some insight into it.

Amazon's success is fairly phenomenal. It's estimated that over 50 million people in the US pay $100 a year to get free shipping on Amazon purchases, and combined with Amazon's surprisingly customer-friendly service there's a lot of people with a very strong preference for choosing Amazon rather than any other retailer. If you're not on Amazon, you're hurting your sales.

And if you're an established brand, this works pretty well. Some people will search for your product directly and buy it, leaving reviews. Well reviewed products appear higher up in search results, so people searching for an item type rather than a brand will still see your product appear early in the search results, in turn driving sales. Some proportion of those customers will leave reviews, which helps keep your product high up in the results. As long as your products aren't utterly dreadful, you'll probably maintain that position.

But if you're a brand nobody's ever heard of, things are more difficult. People are unlikely to search for your product directly, so you're relying on turning up in the results for more generic terms. But if you're selling a more generic kind of item (say, a Bluetooth smart bulb) then there's probably a number of other brands nobody's ever heard of selling almost identical objects. If there's no reason for anybody to choose your product then you're probably not going to get any reviews and you're not going to move up the search rankings. Even if your product is better than the competition, a small number of sales means a tiny number of reviews. By the time that number's large enough to matter, you're probably onto a new product cycle.

In summary: if nobody's ever heard of you, you need reviews but you're probably not getting any.

The old way of doing this was to send review samples to journalists, but nobody's going to run a comprehensive review of 3000 different USB cables and even if they did almost nobody would read it before making a decision on Amazon. You need Amazon reviews, but you're not getting any. The obvious solution is to send review samples to people who will leave Amazon reviews. This is where things start getting more dubious.

Amazon run a program called Vine which is intended to solve this problem. Send samples to Amazon and they'll distribute them to a subset of trusted reviewers. These reviewers write a review as normal, and Amazon tag the review with a "Vine Voice" badge which indicates to readers that the reviewer received the product for free. But participation in Vine is apparently expensive, and so there's a proliferation of sites like Snagshout or AMZ Review Trader that use a different model. There's no requirement that you be an existing trusted reviewer and the product probably isn't free. You sign up, choose a product, receive a discount code and buy it from Amazon. You then have a couple of weeks to leave a review, and if you fail to do so you'll lose access to the service. This is completely acceptable under Amazon's rules, which state "If you receive a free or discounted product in exchange for your review, you must clearly and conspicuously disclose that fact". So far, so reasonable.

In reality it's worse than that, with several opportunities to game the system. AMZ Review Trader makes it clear to sellers that they can choose reviewers based on past reviews, giving customers an incentive to leave good reviews in order to keep receiving discounted products. Some customers take full advantage of this, leaving a giant number of 5 star reviews for products they clearly haven't tested and then (presumably) reselling them. What's surprising is that this kind of cynicism works both ways. Some sellers provide two listings for the same product, the second being significantly more expensive than the first. They then offer an attractive discount for the more expensive listing in return for a review, taking it down to approximately the same price as the original item. Once the reviews are in, they can remove the first listing and drop the price of the second to the original price point.

The end result is a bunch of reviews that are nominally honest but are tied to perverse incentives. In effect, the overall star rating tells you almost nothing - you still need to actually read the reviews to gain any insight into whether the customer actually used the product. And when you do write an honest review that the seller doesn't like, they may engage in heavy handed tactics in an attempt to make the review go away.

It's hard to avoid the conclusion that Amazon's review model is broken, but it's not obvious how to fix it. When search ranking is tied to reviews, companies have a strong incentive to do whatever it takes to obtain positive reviews. What we're left with for now is having to laboriously click through a number of products to see whether their rankings come from thoughtful and detailed reviews or are just a mass of 5 star one liners.


July 08, 2016

Portals: Using GTK+ in a Flatpak

The time is ripe for new ways to distribute and deploy desktop applications: between snappy, Flatpak, AppImage and others there are quite a few projects in this area.

Sandboxing

Most of these projects involve some notion of sandboxing: isolating the application from the rest of the system.

Snappy does this by setting environment variables like XDG_DATA_DIRS, PATH, etc., to tell apps where to find their ‘stuff’, and by using AppArmor to not let them access things they shouldn’t.

Flatpak takes a somewhat different approach: it uses bind mounts and namespaces to construct a separate view of the world for the app in which it can only see what it is supposed to access.

Regardless of which approach you take to sandboxing, desktop applications are not very useful without access to the rest of the system. So, clearly, we need to poke some holes in the walls of the sandbox, since we want apps to interact with the rest of the system.

The important thing to keep in mind is that we always want to give the user control over these interactions and in particular, control over the data that goes in and out of the sandbox.

Portals

The flatpak story for this has been portals. To quickly summarize: portals are high-level (in the sense that they talk about concepts that are relevant to users) APIs that let sandboxed applications request access to resources outside the sandbox. Portal calls will (almost) always involve user interaction (basically, a dialog).

We have talked about portals for quite a while, see for example this post by Allan from two years ago. Therefore, I am very happy to announce that we now have a first release of the portal infrastructure:

Note that the portal APIs themselves are desktop-neutral, and we have separated the implementation using GTK+ dialogs into its own module. We are also working on a qt/KDE implementation.

The APIs included in these releases cover a lot of basic things:

  • opening files
  • opening uris
  • printing
  • taking screenshots
  • notifications
  • inhibiting screen lock
  • network status
  • proxy information

Using portals in apps

The good news is that most of these will just work transparently in GTK+ applications, since GTK+ and GLib have suitable APIs that can be made to use the portals without any changes required from the sandboxed application. This support is already in place in the master branches and will be in the 3.22 releases. We are working on a similar level of support for qt/KDE.

In detail, for opening files, you can use GtkFileChooserNative (a file chooser implementation that will also use a native Windows filechooser if you are on that platform). GtkFileChooserButton will do this for you, unless you manually set its dialog to something else. Certain things will not work with the portal, such as previews, or adding extra custom widgets.
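Putting that into code, the file chooser case looks roughly like this. A sketch: parent_window and open_file() are placeholders for your application’s window and whatever it does with the result.

/* GtkFileChooserNative transparently talks to the portal when
 * sandboxed; open_file() is a hypothetical application function. */
GtkFileChooserNative *native =
        gtk_file_chooser_native_new ("Open File", parent_window,
                                     GTK_FILE_CHOOSER_ACTION_OPEN,
                                     "_Open", "_Cancel");
if (gtk_native_dialog_run (GTK_NATIVE_DIALOG (native)) == GTK_RESPONSE_ACCEPT) {
        g_autofree gchar *filename =
                gtk_file_chooser_get_filename (GTK_FILE_CHOOSER (native));
        open_file (filename);
}
g_object_unref (native);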

For printing, use GtkPrintOperation (again a long-standing GTK+ api that will use a native Windows print dialog if you are on that platform).

For opening uris, use gtk_show_uri or g_app_info_launch_default_for_uri. We also added a new function that works slightly better in this context, gtk_show_uri_on_window().

Testing

As I said, most of the portals will just be transparently used by sandboxed GTK+ applications (provided they use GTK+ from git), but I’ve also written a portal-test application to try all the available portals.

[screenshot: portal-test]

All of the tests use the regular GTK+ APIs, with the exception of the Screenshot button. Since we don’t have a screenshooting API in GTK+, that button makes a direct D-Bus call to the portal. Here is how it looks in action:

[screenshot: portal-test2]

What’s next?

For the first release, we’ve focused on portals for toolkit-level functionality. There is obviously a long list of other system integration points that will need to be covered. High up on our list for the near future are access to devices like webcams and microphones, and PulseAudio. If you are interested, the list of issues has some more information.

Update: I’ve been asked: why do I need a portal, when the file chooser in the libreoffice flatpak seems to work just fine?! Most of the current flatpaks are shipping with a relatively open sandbox configuration that allows the application full access to the host filesystem, or at least to your entire home directory. Portals enable applications to function in a constrained sandbox that does not have this.

July 07, 2016

Bluetooth LED bulbs
The best known smart bulb setups (such as the Philips Hue and the Belkin Wemo) are based on Zigbee, a low-energy, low-bandwidth protocol that operates on various unlicensed radio bands. The problem with Zigbee is that basically no home routers or mobile devices have a Zigbee radio, so to communicate with them you need an additional device (usually called a hub or bridge) that can speak Zigbee and also hook up to your existing home network. Requests are sent to the hub (either directly if you're on the same network, or via some external control server if you're on a different network) and it sends appropriate Zigbee commands to the bulbs.

But requiring an additional device adds some expense. People have attempted to solve this in a couple of ways. The first is building direct network connectivity into the bulbs, in the form of adding an 802.11 controller. Go through some sort of setup process[1] and the bulb joins your network and you can communicate with it happily. Unfortunately adding wifi costs more than adding Zigbee, both in terms of money and power - wifi bulbs consume noticeably more power when "off" than Zigbee ones.

There's a middle ground. There's a large number of bulbs available from Amazon advertising themselves as Bluetooth, which is true but slightly misleading. They're actually implementing Bluetooth Low Energy, which is part of the Bluetooth 4.0 spec. Implementing this requires both OS and hardware support, so older systems are unable to communicate. Android 4.3 devices tend to have all the necessary features, and modern desktop Linux is also fine as long as you have a Bluetooth 4.0 controller.

Bluetooth is intended as a low power communications protocol. Bluetooth Low Energy (or BLE) is even lower than that, running in a similar power range to Zigbee. Most semi-modern phones can speak it, so it seems like a pretty good choice. Obviously you lose the ability to access the device remotely, but given the track record on this sort of thing that's arguably a benefit. There's a couple of other downsides - the range is worse than Zigbee (but probably still acceptable for any reasonably sized house or apartment), and only one device can be connected to a given BLE server at any one time. That means that if you have the control app open while you're near a bulb, nobody else can control that bulb until you disconnect.

The quality of the bulbs varies a great deal. Some of them are pure RGB bulbs and incapable of producing a convincing white at a reasonable intensity[2]. Some have additional white LEDs but don't support running them at the same time as the colour LEDs, so you have the choice between colour or a fixed (and usually more intense) white. Some allow running the white LEDs at the same time as the RGB ones, which means you can vary the colour temperature of the "white" output.

But while the quality of the bulbs varies, the quality of the apps doesn't really. They're typically all dreadful, competing on features like changing bulb colour in time to music rather than on providing a pleasant user experience. And the whole "Only one person can control the lights at a time" thing doesn't really work so well if you actually live with anyone else. I was dissatisfied.

I'd met Mike Ryan at Kiwicon a couple of years back after watching him demonstrate hacking a BLE skateboard. He offered a couple of good hints for reverse engineering these devices, the first being that Android already does almost everything you need. Hidden in the developer settings is an option marked "Enable Bluetooth HCI snoop log". Turn that on and all Bluetooth traffic (including BLE) is dumped into /sdcard/btsnoop_hci.log. Enable it, start the app, make some changes, retrieve the file and check it out using Wireshark. Easy.

Conveniently, BLE is very straightforward when it comes to network protocol. The only thing you have is GATT, the Generic Attribute Protocol. Using this you can read and write multiple characteristics. Each packet is limited to a maximum of 20 bytes. Most implementations use a single characteristic for light control, so it's then just a matter of staring at the dumped packets until something jumps out at you. A pretty typical implementation is something like:

0x56,r,g,b,0x00,0xf0,0x00,0xaa

where r, g and b are each just a single byte representing the corresponding red, green or blue intensity. 0x56 presumably indicates a "Set the light to these values" command, 0xaa indicates end of command and 0xf0 indicates that it's a request to set the colour LEDs. Sending 0x0f instead results in the previous byte (0x00 in this example) being interpreted as the intensity of the white LEDs. Unfortunately the bulb I tested that speaks this protocol didn't allow you to drive the white LEDs at the same time as anything else - setting the selection byte to 0xff didn't result in both sets of intensities being interpreted at once. Boo.
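Building one of these packets is about as simple as it sounds. Here's an illustrative C sketch using the bytes from the dump above; the parameter layout beyond those observed bytes is my interpretation.

#include <stdint.h>

/* Build the 8-byte "set the light" command described above. Pass
 * 0x0f as the selector instead of 0xf0 and the w byte is interpreted
 * as the white LED intensity instead of the colour values. */
static void build_light_packet(uint8_t r, uint8_t g, uint8_t b,
                               uint8_t w, uint8_t selector,
                               uint8_t out[8])
{
    out[0] = 0x56;     /* "set the light to these values" */
    out[1] = r;
    out[2] = g;
    out[3] = b;
    out[4] = w;        /* white intensity, used with selector 0x0f */
    out[5] = selector; /* 0xf0 = colour LEDs, 0x0f = white LEDs */
    out[6] = 0x00;
    out[7] = 0xaa;     /* end of command */
}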

You can test this out fairly easily using the gatttool app. Run hcitool lescan to look for the device (remember that it won't show up if anything else is connected to it at the time), then do gatttool -b deviceid -I to get an interactive shell. Type connect to initiate a connection, and once connected send commands by doing char-write-cmd handle value using the handle obtained from your hci dump.

I did this successfully for various bulbs, but annoyingly hit a problem with one from Tikteck. The leading byte of each packet was clearly a counter, but the rest of the packet appeared to be garbage. For reasons best known to themselves, they've implemented application-level encryption on top of BLE. This was a shame, because they were easily the best of the bulbs I'd used - the white LEDs work in conjunction with the colour ones once you're sufficiently close to white, giving you good intensity and letting you modify the colour temperature. That gave me incentive, but figuring out the protocol took quite some time. Earlier this week, I finally cracked it. I've put a Python implementation on Github. The idea is to tie it into Ulfire running on a central machine with a Bluetooth controller, making it possible for me to control the lights from multiple different apps simultaneously and also integrating with my Echo.

I'd write something about the encryption, but I honestly don't know. Large parts of this make no sense to me whatsoever. I haven't even had any gin in the past two weeks. If anybody can explain how anything that's being done there makes any sense at all[3] that would be appreciated.

[1] typically via the bulb pretending to be an access point, but also these days through a terrifying hack involving spewing UDP multicast packets of varying lengths in order to broadcast the password to associated but unauthenticated devices and good god the future is terrifying

[2] For a given power input, blue LEDs produce more light than other colours. To get white with RGB LEDs you either need to have more red and green LEDs than blue ones (which costs more), or you need to reduce the intensity of the blue ones (which means your headline intensity is lower). Neither is appealing, so most of these bulbs will just give you a blue "white" if you ask for full red, green and blue

[3] Especially the bit where we calculate something from the username and password and then encrypt that using some random numbers as the key, then send 50% of the random numbers and 50% of the encrypted output to the device, because I can't even

crashtesting: now 92000 documents continuously tested

Last August we had a collection of approximately 76000 documents. This July we have over 92000 documents. The corpus is mostly gathered from our bugzilla and other projects' bugzillas via our get-bugzilla-attachments-by-mimetype script. For testing purposes we continuously import them all, and then export certain formats to multiple destination formats. For example, odts are re-exported to odt, doc and docx, and the results of which documents crashed (which includes asserts) are uploaded to the crashtest site.
 
I like to imagine that these are typically the type of mean and bitter documents that try to eat innocent office software alive.
 
The get-bugzilla-attachments-by-mimetype script only downloads attachments from its target bugzillas that have not already been downloaded. The last run to refresh the corpus took over two days to complete. Refreshing isn't fast or cheap, so it's fairly infrequent.
 
The regular crashtesting run to import and reexport the corpus is comparatively frequent, takes approximately 24 hours and typically gets run every two or three days on the latest 64bit Linux master in headless mode.

July 05, 2016

Flatpak and GNOME Software

I wanted to write a little about how Flatpak apps are treated differently to packages in GNOME Software. We’ve now got two plugins in master, one called flatpak-user and another called flatpak-system. They both share 99% of the same code, only differing in how they are initialised. As you might expect, the former does per-user installation and updating, and the latter does it per-system for all users. Per-user applications that are specific to just a single user account are an amazingly useful concept, as most developers have found using tools like jhbuild. We default to installing software for all users at the moment, but there is actually an org.gnome.software.install-bundles-system-wide dconf key that can be used to reverse this on specific systems.

We go to great lengths to interoperate with the flatpak command line tool, so if you install the nightly GTK3 build of GIMP per-user you can install the normal version system-wide and they both show in the installed and updates panel without conflicting. We’ve also got file notifications set up so GNOME Software shows the correct application state straight away if you add a remote or install a flatpak app on the command line. At the moment we show both packages and flatpaks in the search results, but when we suggest apps on the overview page we automatically prefer the flatpak version if both are available. In Ubuntu, snappy results are sorted above package results unconditionally, but I don’t know if this is a good thing to do for flatpaks upstream; comments welcome. I’m sure whatever defaults I choose will mortally offend someone.


GNOME Software also supports single-file flatpak bundles like gimp.flatpak – just double click and you’re good to install. These files are somewhat like a package in that all the required files are included and you can install without internet access. These bundles can also install a remote (i.e. a reference to a flatpak repository), which allows them to be kept up to date. Such per-application remotes are only used for the specific application and not the others potentially in the same tree (for the curious, this is called a “noenumerate” remote). We also support the more rarely seen dummy.flatpakrepo files too; these allow a user to install a remote which could contain a number of applications, and make it very easy to set up an add-on remote that allows you to browse a different set of apps than shipped, for instance the Endless-specific apps. Each of these files contains all the metadata we need in AppStream format, with translations, icons and all the things you expect from a modern software center. It’s a shame snappy decided not to use AppStream and AppData for application metadata, as this kind of extra data really makes the UI beautiful.


With the latest version of flatpak we also do a much better job of installing the additional extensions the application needs, for instance locales or debug data. Sharing the same code between the upstream command line tool and gnome-software means we always agree on what needs installing and updating. Just like the CLI, gnome-software can update flatpaks safely live (even when the application is running), although we do a little bit extra compared to the CLI and download the data we need for the update when the session is idle and on suitable unmetered network access. This means you can typically just click the ‘Update’ button in the updates panel for a near-instant live update. This is what people have wanted for years, and I’ve told each and every bug report that live updates using packages only work 99.99% of the time, exploding in a huge fireball 0.01% of the time. Once all desktop apps are packaged as flatpaks we will only need to reboot for atomic offline updates for core platform updates like a new glibc or kernel. That future is very nearly now.


June 21, 2016

I've bought some more awful IoT stuff
I bought some awful WiFi lightbulbs a few months ago. The short version: they introduced terrible vulnerabilities on your network, they violated the GPL and they were also just bad at being lightbulbs. Since then I've bought some other Internet of Things devices, and since people seem to have a bizarre level of fascination with figuring out just what kind of fractal of poor design choices these things frequently embody, I thought I'd oblige.

Today we're going to be talking about the KanKun SP3, a plug that's been around for a while. The idea here is pretty simple - there's lots of devices that you'd like to be able to turn on and off in a programmatic way, and rather than rewiring them the simplest thing to do is just to insert a control device in between the wall and the device, and now you can turn your foot bath on and off from your phone. Most vendors go further and also allow you to program timers and even provide some sort of remote tunneling protocol so you can turn off your lights from the comfort of somebody else's home.

The KanKun has all of these features and a bunch more, although when I say "features" I kind of mean the opposite. I plugged mine in and followed the install instructions. As is pretty typical, this took the form of the plug bringing up its own Wifi access point, the app on the phone connecting to it and sending configuration data, and the plug then using that data to join your network. Except it didn't work. I connected to the plug's network, gave it my SSID and password and waited. Nothing happened. No useful diagnostic data. Eventually I plugged my phone into my laptop and ran adb logcat, and the Android debug logs told me that the app was trying to modify a network that it hadn't created. Apparently this isn't permitted as of Android 6, but the app was handling this denial by just trying again. I deleted the network from the system settings, restarted the app, and this time the app created the network record and could modify it. It still didn't work, but that's because it let me give it a 5GHz network and it only has a 2.4GHz radio, so one reset later and I finally had it online.

The first thing I normally do to one of these things is run nmap with the -O argument, which gives you an indication of what OS it's running. I didn't really need to in this case, because if I just telnetted to port 22 I got a dropbear ssh banner. Googling turned up the root password ("p9z34c") and I was logged into a lightly hacked (and fairly obsolete) OpenWRT environment.

It turns out that there's a whole community of people playing with these plugs, and it's common for people to install CGI scripts on them so they can turn them on and off via an API. At first this sounds somewhat confusing, because if the phone app can control the plug then there clearly is some kind of API, right? Well ha yeah ok that's a great question and oh good lord do things start getting bad quickly at this point.

I'd grabbed the apk for the app and a copy of jadx, an incredibly useful piece of code that's surprisingly good at turning compiled Android apps into something resembling Java source. I dug through that for a while before figuring out that before packets were being sent, they were being handed off to some sort of encryption code. I couldn't find that in the app, but there was a native ARM library shipped with it. Running strings on that showed functions with names matching the calls in the Java code, so that made sense. There were also references to AES, which explained why when I ran tcpdump I only saw bizarre garbage packets.

But what was surprising was that most of these packets were substantially similar. There were a load that were identical other than a 16-byte chunk in the middle. That, plus the fact that every payload length was a multiple of 16 bytes, strongly indicated that AES was being used in ECB mode. In ECB mode each plaintext is split up into 16-byte chunks and each chunk is encrypted with the same key. The same plaintext will always result in the same encrypted output. This implied that the plaintexts were substantially similar and that the encryption key was static.
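This sort of thing is easy to check for mechanically. Here's a quick C sketch of the duplicate-block test implied above:

#include <stddef.h>
#include <string.h>

/* ECB tell-tale: identical 16-byte plaintext blocks produce identical
 * ciphertext blocks, so duplicate blocks in a capture are a strong hint. */
static int looks_like_ecb(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i + 16 <= len; i += 16)
        for (size_t j = i + 16; j + 16 <= len; j += 16)
            if (memcmp(buf + i, buf + j, 16) == 0)
                return 1;
    return 0;
}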

Some more digging showed that someone had figured out the encryption key last year, and that someone else had written some tools to control the plug without needing to modify it. The protocol is basically ascii and consists mostly of the MAC address of the target device, a password and a command. This is then encrypted and sent to the device's IP address. The device then sends a challenge packet containing a random number. The app has to decrypt this, obtain the random number, create a response, encrypt that and send it before the command takes effect. This avoids the most obvious weakness around using ECB - since the same plaintext always encrypts to the same ciphertext, you could just watch encrypted packets go past and replay them to get the same effect, even if you didn't have the encryption key. Using a random number in a challenge forces you to prove that you actually have the key.

At least, it would do if the numbers were actually random. It turns out that the plug is just calling rand(). Further, it turns out that it never calls srand(). This means that the plug will always generate the same sequence of challenges after a reboot, which means you can still carry out replay attacks if you can reboot the plug. Strong work.
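If that sounds implausible, it's trivially demonstrable: without an srand() call, rand() starts from the same default seed every time the process starts.

#include <stdio.h>
#include <stdlib.h>

/* Run this twice: both runs print the same "random" values, which is
 * exactly what the plug's challenge sequence does after every reboot. */
int main(void)
{
    for (int i = 0; i < 3; i++)
        printf("challenge %d: %d\n", i, rand());
    return 0;
}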

But there was still the question of how the remote control works, since the code on github only worked locally. tcpdumping the traffic from the server and trying to decrypt it in the same way as local packets worked fine, and showed that the only difference was that the packet started "wan" rather than "lan". The server decrypts the packet, looks at the MAC address, re-encrypts it and sends it over the tunnel to the plug that registered with that address.

That's not really a great deal of authentication. The protocol permits a password, but the app doesn't insist on it - some quick playing suggests that about 90% of these devices still use the default password. And the devices are all based on the same wifi module, so the MAC addresses are all in the same range. The process of sending status check packets to the server with every MAC address wouldn't take that long and would tell you how many of these devices are out there. If they're using the default password, that's enough to have full control over them.

There are some other failings. The github repo mentioned earlier includes a script that allows arbitrary command execution - the wifi configuration information is passed to the system() command, so leaving a semicolon in the middle of it will result in your own commands being executed. Thankfully this doesn't seem to be true of the daemon that's listening for the remote control packets, which seems to restrict its use of system() to data entirely under its control. But even if you change the default root password, anyone on your local network can get root on the plug. So that's a thing. It also downloads firmware updates over http and doesn't appear to check signatures on them, so there's the potential for MITM attacks on the plug itself. The remote control server is on AWS unless your timezone is GMT+8, in which case it's in China. Sorry, Western Australia.

It's running Linux and includes Busybox and dnsmasq, so plenty of GPLed code. I emailed the manufacturer asking for a copy and got told that they wouldn't give it to me, which is unsurprising but still disappointing.

The use of AES is still somewhat confusing, given the relatively small amount of security it provides. One thing I've wondered is whether it's not actually intended to provide security at all. The remote servers need to accept connections from anywhere and funnel decent amounts of traffic around from phones to switches. If that weren't restricted in any way, competitors would be able to use existing servers rather than setting up their own. Using AES at least provides a minor obstacle that might encourage them to set up their own server.

Overall: the hardware seems fine, the software is shoddy and the security is terrible. If you have one of these, set a strong password. There's no rate-limiting on the server, so a weak password will be broken pretty quickly. It's also infringing my copyright, so I'd recommend against it on that point alone.

AAA game, indie game, card-board-box
Early bird gets eaten by the Nyarlathotep
 
The more adventurous of you can use those (designed as embeddable) Lua scripts to transform your DRM-free GOG.com downloads into Flatpaks.

The long-term goal would obviously be for this not to be needed, and for online games stores to ship ".flatpak" files, with metadata so we know what things are in GNOME Software, which automatically picks up the right voice/subtitle language, and presents its extra music and documents in the respective GNOME applications.
 
But in the meanwhile, and for the sake of the games already out there, there's flatpak-games. Note that lua-archive is still fiddly.
 
Support for a few more Humble Bundle formats (some are already supported), for grab-all RPMs and Debs, and for those old Loki games is also planned.
 
It's late here, I'll be off to do some testing I think :)

PS: Even though I have enough programs that would fail to create bundles in my personal collection to accept "game donations", I'm still looking for original copies of Loki games. Drop me a message if you can spare one!

June 14, 2016

Dispatches from the GTK+ hackfest

A quick update from the GTK+ hackfest. I don’t really want to talk about the versioning discussion, except for two points:

First, I want to apologize to Allison for encouraging her to post about this – I really didn’t anticipate the amount of uninformed, unreasonable and hateful reactions that we received.

Second, I want to stress that we are just at the beginning of this discussion; we will not make any decisions until after GUADEC. Everybody who has opinions on this topic should feel free to give us feedback. We are of course particularly interested in the feedback from parties who will be directly affected by changes in GTK+ versioning, e.g. gstreamer and other dependent libraries.

Today’s morning discussion was all about portals. Portals are high-level D-Bus APIs that will allow sandboxed applications to request access to outside resources, such as files or pictures, or just to show URIs or launch other applications. We have early implementations of some of these now, but after the valuable feedback in today’s discussion, we will likely make some changes to them.

An expectation for portals is that there will be a user interaction before an operation is carried out, to keep users in control and let them cancel requests. The portal UIs will be provided by the desktop session.

We want to have a number of portals implemented during the summer, starting with the most important ones, like the file chooser, ‘open a uri’, application launcher, proxy support, and a few others. The notes from this discussion can be found here.

In the afternoon, the discussion moved to developer documentation: how to improve it, how to let readers provide feedback and suggestions, and how to integrate them into gnome-builder.

We also discussed ways to make GTK+ better for responsive designs.

June 08, 2016

spellchecking calendar/calender
A short 2009 article about Microsoft's group with responsibility for spellchecking that mentions the calendar/calender masking problem. Sometimes you probably do want correctly spelled words to be flagged.

June 07, 2016

Modularity and the desktop

There has been much talk about modularity recently. Fedora even has a working group on this topic. Modularity is such a generic term that it is a bit hard to figure out what this is all about, but the wiki page gives some hints: …base module… …docker image… …reduced dependencies… …tooling…

A very high-level reading would be that this is about defining tools and processes to deliver software in modules, which can be larger units than packages, and e.g. come in the form of container images.

Another reading would be that this is an effort to come to grips with the changing role of distributions in a world where containers are the dominant way to run and deploy applications.

Modularity on the desktop

In this post, I want to look at how this modularity effort looks from the desktop perspective, and what we are doing in this area.

Our main goal for a while has been to make it easier to get desktop applications from application developers to users.

In the traditional Linux distribution world, this is a total nightmare: Once you’ve written your application, you need to either learn how to create an Ubuntu .deb, an Arch .pkg, a Fedora .rpm, to name just a few, and then follow lengthy processes to get your packages accepted into these distributions.

And once you’ve done all that, you will get bug reports that your application is broken with one or another version of one of your dependencies - if not right away, then a few months down the road, when the distributions move on to the latest and greatest releases. Keeping those packages working requires the constant attention of a package maintainer.

To improve on this sad state of affairs, we need to make applications much less dependent on the OS they run on, and on the libraries that happen to come with the OS.

At this point, you might say: Aha, you want to bundle dependencies! Yes, but read on. Bundling is a bad word in the traditional distribution world. It implies duplication, since multiple applications may bundle the same library. And having multiple copies of the same library (possibly different versions, too), makes it harder to ensure that bug and security fixes get applied to all the copies.  Not to mention that all the duplication consumes bandwidth when you have to download it all.

Bundling everything with the application certainly has its drawbacks. But there are several things we can do to preserve most of the benefits of bundling, while avoiding the worst of the problems.

One takeaway from the modularity discussions mentioned earlier is that going to larger units than individual packages (i.e. modules) has advantages: we thin out the dependency graph, and we get a large piece of functionality that can be tested, installed and updated as a unit.

And we can use a smart storage and transport mechanism such as OSTree to limit the impact that duplication has on bandwidth.

Desktop applications commonly depend on similar sets of libraries: the GTK+ stack is a good example, kdelibs is another.  Treating these common library stacks as modules makes a lot of sense.  We decided to call these modules runtimes. Applications can declare a single dependency on the runtime they need, and get a well-defined set of APIs and libraries in return.

What if you need a library that is not in the runtime? In that case, you can still bundle it with your application. You will be responsible for the libraries that you bundle, but in return your users get exactly the version that you’ve tested your app with.

By decoupling the runtimes from the OS and by making them available in the same way as the applications, we make it possible to have applications that can run on different distributions, regardless of the library versions that are included in the OS. Of course, there are limits to what we can achieve: the applications and runtimes still have requirements (e.g. on Linux kernel features, or on session services that are expected to be present). But building an application that can run on Fedora, Ubuntu, Arch, RHEL and other modern distributions is entirely possible.

But what about security updates?  To make this system work, we need to offer well-maintained runtimes that application authors can rely on, and provide updates for them just as we do now with distribution packages.

But that is not enough. We also need to isolate applications from the rest of the system (and your data!) at runtime, in order to limit the damage that they can do. Historically, the Linux desktop has been really bad about this: X lets any client snoop all input, and applications are generally free to read whatever they find in your home directory. Part of our desktop modularity story is to use container technologies and e.g. Wayland to confine (or sandbox) applications at runtime.

Modularity now!

[image: Bubblewrap cat by dancing_stupidity]

We have been working towards this goal for quite a while, with the Wayland port and gnome-software as a non-package-centric installer.

What I’ve outlined above is more or less the architecture of the Flatpak system (formerly known as xdg-app).  It is not 100% fleshed out yet, but enough pieces are in place now that you can try it out and e.g. install a nightly build of libreoffice that works on both Fedora and Ubuntu.
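For application authors, the runtime-plus-bundled-libraries split described above shows up directly in the build manifest. A minimal, purely illustrative example (all names are placeholders):

{
    "app-id": "org.example.App",
    "runtime": "org.gnome.Platform",
    "runtime-version": "3.20",
    "sdk": "org.gnome.Sdk",
    "command": "example-app",
    "modules": [
        {
            "name": "libfoo",
            "sources": [
                { "type": "git", "url": "https://example.com/libfoo.git" }
            ]
        }
    ]
}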

Head over to www.flatpak.org to learn more.

Be wary of heroes
Inspiring change is difficult. Fighting the status quo typically means being able to communicate so effectively that powerful opponents can't win merely by outspending you. People need to read your work or hear you speak and leave with enough conviction that they in turn can convince others. You need charisma. You need to be smart. And you need to be able to tailor your message depending on the audience, even down to telling an individual exactly what they need to hear to take your side. Not many people have all these qualities, but those who do are powerful and you want them on your side.

But the skills that allow you to convince people that they shouldn't listen to a politician's arguments are the same skills that allow you to convince people that they shouldn't listen to someone you abused. The ability that allows you to argue that someone should change their mind about whether a given behaviour is of social benefit is the same ability that allows you to argue that someone should change their mind about whether they should sleep with you. The visibility that gives you the power to force people to take you seriously is the same visibility that makes people afraid to publicly criticise you.

We need these people, but we also need to be aware that their talents can be used to hurt as well as to help. We need to hold them to higher standards of scrutiny. We need to listen to stories about their behaviour, even if we don't want to believe them. And when there are reasons to believe those stories, we need to act on them. That means people need to feel safe in coming forward with their experiences, which means that nobody should have the power to damage them in reprisal. If you're not careful, allowing charismatic individuals to become the public face of your organisation gives them that power.

There's no reason to believe that someone is bad merely because they're charismatic, but this kind of role allows a charismatic abuser both a great deal of cover and a great deal of opportunity. Sometimes people are just too good to be true. Pretending otherwise doesn't benefit anybody but the abusers.

May 25, 2016

Blog backlog, Post 3, DisplayLink-based USB3 graphics support for Fedora
Last year, after DisplayLink released the first version of the supporting tools for their USB3 chipsets, I tried it out on my Dell S2340T.

As I wanted a clean way to test new versions, I took Eric Nothen's RPMs and updated them as newer releases came out, automating the creation of 32- and 64-bit x86 versions.

The RPM contains three parts: evdi, a GPLv2 kernel module that creates a virtual display; the LGPL library to access it; and a proprietary service which comes with "firmware" files.

Eric's initial RPMs used the precompiled libevdi.so and the proprietary bits, compiling only the kernel module with dkms when needed. I changed this to compile the library from the upstream repository, using as few pre-compiled binaries as possible.

This package supports quite a few OEM devices, but it does not work correctly with Wayland, so you'll need to disable Wayland support in /etc/gdm/custom.conf if you want the display to work at the login screen and to avoid having to restart the displaylink.service systemd service after logging in.
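
Concretely, that means setting one key in the [daemon] section of /etc/gdm/custom.conf:

  # /etc/gdm/custom.conf
  [daemon]
  # use the Xorg-based login screen; the DisplayLink output
  # is not available under Wayland
  WaylandEnable=false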


 [Image: Plugged in via DisplayPort and USB (but I can only see one at a time)]


The sources for the RPM are on GitHub. Simply clone the repository and run make in it to create 32-bit and 64-bit RPMs. The proprietary parts are redistributable, so if somebody wants to host and maintain those RPMs, I'd be glad to pass this on.
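
In other words (the URL below is a placeholder for the repository linked above):

  # clone the packaging repository (URL is illustrative)
  git clone https://github.com/example/displaylink-rpm.git
  cd displaylink-rpm
  # build the 32-bit and 64-bit RPMs
  make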

May 24, 2016

LVFS Technical White Paper

I spent a good chunk of today writing a technical whitepaper titled Introducing the Linux Vendor Firmware Service. I'd really appreciate any comments, whether from people who have followed its progress from the start or from people who don't know anything about it at all.

Typos or more general comments are all welcome, and once I've got something a bit more polished I'll be sending it to some important suits at a few well-known companies. Thanks for any help!

May 23, 2016

External Plugins in GNOME Software (6)

This is my last post about the gnome-software plugin structure. If you want more, join the mailing list and ask a question. If you’re not sure how something works then I’ve done a poor job on the docs, and I’m happy to explain as much as required.

GNOME Software used to provide a per-process plugin cache, automatically de-duplicating applications and trying to be smarter than the plugins themselves. This involved merging applications created by different plugins and really didn't work very well. For 3.20 and later we moved to a per-plugin cache, which allows the plugin to control getting and adding applications to the cache and invalidating it when that makes sense. This seems to work a lot better and is an order of magnitude less complicated. Plugins can trivially be ported to use the cache with something like this:

 
   /* look up the GsApp in the per-plugin cache, creating and
    * registering it only on a cache miss */
   id = gs_plugin_flatpak_build_id (inst, xref);
-  app = gs_app_new (id);
+  app = gs_plugin_cache_lookup (plugin, id);
+  if (app == NULL) {
+     app = gs_app_new (id);
+     gs_plugin_cache_add (plugin, id, app);
+  }

Using the cache has two main benefits for plugins. The first is that we avoid creating duplicate GsApp objects for the same logical thing. This means we can query the installed list, start installing an application, then query it again before the install has finished. The GsApp returned from the second add_installed() request will be the same GObject, and thus all the signals connecting up to the UI will still be correct. This means we don’t have to care about migrating the UI widgets as the object changes and things like progress bars just magically work.

The other benefit is more obvious. If we know the application state from a previous request we don't have to query a daemon or do another blocking library call to get it. This does of course imply that the plugin is properly invalidating the cache using gs_plugin_cache_invalidate(), which it should do whenever a change is detected. Whether a plugin uses the cache for this reason is up to the plugin, but if it does, it is up to the plugin to make sure the cache doesn't get out of sync.
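
As a sketch of what that can look like in a plugin (the GFileMonitor setup here is hypothetical; the point is just to call gs_plugin_cache_invalidate() from whatever change notification your backend provides):

  /* hypothetical callback: the plugin watches some backend state
   * file with a GFileMonitor; any change means the cached GsApp
   * objects may no longer match reality */
  static void
  gs_plugin_backend_changed_cb (GFileMonitor *monitor,
                                GFile *file,
                                GFile *other_file,
                                GFileMonitorEvent event_type,
                                gpointer user_data)
  {
    GsPlugin *plugin = GS_PLUGIN (user_data);
    /* drop all cached GsApps for this plugin; the next request
     * will re-query the backend and re-populate the cache */
    gs_plugin_cache_invalidate (plugin);
  }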

And one last thing: if you're thinking of building an out-of-tree plugin for production use, ask yourself if it actually belongs upstream. Upstream plugins get ported as the API evolves, and I'm already happily carrying Ubuntu- and Fedora-specific plugins that either self-disable at runtime or are protected by an --enable-foo configure argument.