Fedora desktop Planet

xinput list shows a "xwayland-pointer" device but not my real devices and what to do about it

Posted by Peter Hutterer on May 23, 2017 12:56 AM

TLDR: If you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging/configuration with xinput will not work.

For many years, xinput has been a useful tool to debug configuration issues (it's not a configuration UI, btw). It works by listing the various devices detected by the X server. A typical output from xinput list under X could look like this:


:: whot@jelly:~> xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=22 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=23 [slave pointer (2)]
⎜ ↳ ELAN Touchscreen id=20 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
    ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
    ↳ Power Button id=6 [slave keyboard (3)]
    ↳ Video Bus id=7 [slave keyboard (3)]
    ↳ Lid Switch id=8 [slave keyboard (3)]
    ↳ Sleep Button id=9 [slave keyboard (3)]
    ↳ ThinkPad Extra Buttons id=24 [slave keyboard (3)]
Alas, xinput is scheduled to go the way of the dodo. More and more systems are running a Wayland session instead of an X session, and xinput just doesn't work there. Here's an example output from xinput list under a Wayland session:

$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ xwayland-pointer:13 id=6 [slave pointer (2)]
⎜ ↳ xwayland-relative-pointer:13 id=7 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
    ↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
    ↳ xwayland-keyboard:13 id=8 [slave keyboard (3)]
As you can see, none of the physical devices are available; the only ones visible are the virtual devices created by XWayland. On a Wayland session, the X server doesn't have access to the physical devices. Instead, it talks via the Wayland protocol to the compositor.

As the architecture diagram in the Wayland documentation shows, devices are known to the Wayland compositor (1), but not to the X server. The Wayland protocol doesn't expose physical devices, it merely provides a 'pointer' device, a 'keyboard' device and, where available, touch and tablet tool/pad devices (2). XWayland wraps these into virtual devices and provides them via the X protocol (3), but they don't represent the physical devices.

This usually doesn't matter, but when it comes to debugging or configuring devices with xinput we run into a few issues. First, configuration via xinput usually means changing driver-specific properties, but in the XWayland case there is no driver involved - it's all handled by libinput inside the compositor. Second, debugging via xinput only shows what the Wayland protocol sends to XWayland and what XWayland then passes on to the client. For low-level issues with devices, this is all but useless.

The takeaway here is that if you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging with xinput will not work. If you're trying to configure a device, use the compositor's configuration system (e.g. gsettings). If you are debugging a device, use libinput-debug-events. Or compare the behaviour between the Wayland session and the X session to narrow down where the failure point is.
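For example, on a GNOME session the rough equivalent of the old xinput workflow would be something like this (the event node and the settings key are illustrative only, they vary by device and compositor):

$ sudo libinput-debug-events --device /dev/input/event7
$ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true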

Updating Logitech Hardware on Linux

Posted by Richard Hughes on May 22, 2017 08:41 PM

Just over a year ago Bastille Security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low-level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

  • Pair new devices with the receiver without user prompting
  • Inject keystrokes, covering various scenarios
  • Inject raw HID commands

This gave an attacker with $15 of hardware the ability to basically take over remote PCs within wireless range, which could be up to 50m away. This makes sitting in a café quite a dangerous thing to do when any affected hardware is plugged in, which for the Unifying dongle is quite likely, as it's explicitly designed to be left in an otherwise-empty USB socket. The main manufacturer of these devices is Logitech, but the hardware is also supplied to other OEMs such as Amazon, Microsoft, Lenovo and Dell, where it is re-badged or renamed. I don't think anybody knows the real total, but by my estimation there must be tens of millions of affected-and-unpatched devices in use every day.

Shortly after this announcement, Logitech prepared an update which mitigated some of these problems, and then a few weeks later prepared another update that worked around and fixed the various issues exploited by the malicious firmware. Officially, Linux isn't an OS supported by Logitech, so to apply the update you had to boot Windows and download and manually deploy the firmware update. For people running Linux exclusively, like a lot of Red Hat's customers, the only choice was to stop using the Unifying products or to try to find a Windows computer that could be borrowed for the update. Some dongles are plugged in behind racks of computers and forgotten, or even hot-glued into place and unremovable.

The MouseJack team provided a firmware blob that could be deployed onto the dongle itself, and didn’t need extra hardware for programming. Given the cat was now “out of the bag” on how to flash random firmware to this proprietary hardware I asked Logitech if they would provide some official documentation so I could flash the new secure firmware onto the hardware using fwupd. After a few weeks of back-and-forth communication, Logitech released to me a pile of documentation on how to control the bootloader on the various different types of Unifying receiver, and the other peripherals that were affected by the security issues. They even sent me some of the affected hardware, and gave me access to the engineering team that was dealing with this issue.

It took a couple of weeks, but I rewrote the previously reverse-engineered plugin in fwupd using the new documentation, so that it now updates the hardware exactly according to the official specification: the packet log matches the Windows update tool 100%, byte by byte. Magic numbers out, #defines in. FIXMEs out, detailed comments in. Using the documentation also means we can report sensible and useful error messages. There were other nuances that were missed in the RE'd plugin (for example, making sure the specified firmware was valid for the hardware revision), and with the blessing of Logitech I merged the branch to master. I then persuaded Logitech to upload the firmware somewhere public, rather than it having to be extracted from the .exe files of the Windows update, and opened a pull request to add the .metainfo.xml files which allow us to build a .cab package for the Linux Vendor Firmware Service. Finally, I created a secure account for Logitech which allowed them to upload the firmware into a special testing branch.

This is where you come in. If you would like to test this, you first need a version of fwupd that is able to talk to the hardware: fwupd-0.9.2-2.fc26 or newer. On Fedora you can get this from Koji.

Then you need to change the DownloadURI in /etc/fwupd.conf to the testing channel. The URI is in a comment in the config file, so no need to list it here. Then reboot, or restart fwupd. After that you can either launch GNOME Software and click Install, or run fwupdmgr refresh && fwupdmgr update on the command line. Soon we'll be able to update more kinds of Logitech hardware this way.
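Put together, the testing steps look something like this (the systemd unit name is an assumption on my part, a reboot works just as well):

$ sudoedit /etc/fwupd.conf      # point DownloadURI at the testing channel
$ sudo systemctl restart fwupd
$ fwupdmgr refresh
$ fwupdmgr update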

If this worked, or if you had any problems, please leave a comment on this blog or send me an email. Thanks should go to Red Hat for letting me work on this for so long, and even more thanks to Logitech for making it possible.

Fractional scaling goes east

Posted by Matthias Clasen on May 19, 2017 06:34 PM

When we introduced HiDPI support in GNOME a few years ago, we took the simplest possible approach that was feasible to implement with the infrastructure we had available at the time.

Some of the limitations:

  • You either get 1:1 or 2:1 scaling, nothing in between
  • The cut-off point is somewhat arbitrarily chosen, and you don't get a say in it
  • In multi-monitor systems, all monitors share the same scale

Each of these limitations had technical reasons. For example, supporting different scales per monitor is hard as long as you are only using a single, big framebuffer for all of them. And allowing scale factors such as 1.5 leads to difficult questions about how to deal with windows that have a size like 640.5×480.5 pixels.

Over the years, we’ve removed the technical obstacles one-by-one, e.g. introduced per-monitor framebuffers. One of the last obstacles was the display configuration API that mutter exposes to the control-center display panel, which was closely modeled on XRANDR, and not suitable for per-monitor and non-integer scales. In the last cycle, we introduced a new, more suitable monitor configuration API, and the necessary support for it has just landed in the display panel.

With this, all of the hurdles have been cleared away, and we are finally ready to get serious about fractional scaling!

Yes, a hackfest!

Jonas and Marco happen to both be in Taipei in early June, so what better to do than to get together and spend some days hacking on fractional scaling support:

https://wiki.gnome.org/Hackfests/FractionalScaling2017

If you are a compositor developer (or plan on becoming one), or just generally interested in helping with this work, and are in the area, please check out the date and location by following the link. And, yes, this is a bit last-minute, but we still wanted to give others a chance to participate.

Recently released applications in GNOME Software

Posted by Richard Hughes on May 16, 2017 01:28 PM

By popular request, there is now a list of recently updated applications in the gnome-software overview page.

Upstream applications have to do two things to be featured here:

  1. Have an upstream release within the last 2 months (will be reduced as the number of apps increases)
  2. Have upstream release notes in the AppData file

Quite a few applications from Xfce, GNOME and KDE have already been including <release> tags, and this visibility should hopefully encourage other projects to do the same. As a small reminder, the release notes should be short and easily understandable by end users. You can see lots of examples in the GNOME Software AppData file. Any questions, please just email me or leave comments here. Thanks!
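For reference, a minimal <release> entry might look something like this (a hypothetical version and description; see the AppStream specification for the full format):

<releases>
  <release version="1.2.0" date="2017-05-16">
    <description>
      <p>This release fixes printing on A4 paper and updates translations.</p>
    </description>
  </release>
</releases>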

Intel AMT on wireless networks

Posted by Matthew Garrett on May 09, 2017 08:18 PM
More details about Intel's AMT vulnerability have been released - it's about the worst-case scenario, in that it's a total authentication bypass that appears to exist independent of whether AMT is being used in Small Business or Enterprise mode (more background in my previous post here). One thing I claimed was that even though this was pretty bad it probably wasn't super bad, since Shodan indicated that only a few thousand machines were on the public internet and accessible via AMT. Most deployments were probably behind corporate firewalls, which meant that it was plausibly a vector for spreading within a company but probably wasn't a likely initial vector.

I've since done some more playing and come to the conclusion that it's rather worse than that. AMT actually supports being accessed over wireless networks. Enabling this is a separate option - if you simply provision AMT it won't be accessible over wireless by default, you need to perform additional configuration (although this is as simple as logging into the web UI and turning on the option). Once enabled, there are two cases:
  1. The system is not running an operating system, or the operating system has not taken control of the wireless hardware. In this case AMT will attempt to join any network that it's been explicitly told about. Note that in default configuration, joining a wireless network from the OS is not sufficient for AMT to know about it - there needs to be explicit synchronisation of the network credentials to AMT. Intel provide a wireless manager that does this, but the stock behaviour in Windows (even after you've installed the AMT support drivers) is not to do this.
  2. The system is running an operating system that has taken control of the wireless hardware. In this state, AMT is no longer able to drive the wireless hardware directly and counts on OS support to pass packets on. Under Linux, Intel's wireless drivers do not appear to implement this feature. Under Windows, they do. This does not require any application level support, and uninstalling LMS will not disable this functionality. This also appears to happen at the driver level, which means it bypasses the Windows firewall.
Case 2 is the scary one. If you have a laptop that supports AMT, and if AMT has been provisioned, and if AMT has had wireless support turned on, and if you're running Windows, then connecting your laptop to a public wireless network means that AMT is accessible to anyone else on that network[1]. If it hasn't received a firmware update, they'll be able to do so without needing any valid credentials.

If you're a corporate IT department, and if you have AMT enabled over wifi, turn it off. Now.

[1] Assuming that the network doesn't block client to client traffic, of course


3000 Reviews on the ODRS

Posted by Richard Hughes on May 08, 2017 09:02 AM

The Open Desktop Ratings service is a simple Flask web service that various software centers use to retrieve and submit application reviews. Today it processed the 3000th review, and I thought I should mark this occasion here. I wanted to give a huge thanks to all the people who have submitted reviews; you have made life easier for people unfamiliar with installing software. There are reviews in over a hundred different languages and over 600 different applications have been reviewed.

Over 4000 people have clicked the “was this useful to you” buttons in the reviews, which affects how the reviews are ordered for a particular system. Without people clicking those buttons we don’t really know how useful a review is. Since we started this project, 37 reviews have been reported for abuse, of which 15 have been deleted for things like swearing and racism.

Here are some interesting graphs, the first showing the number of requests we’re handling per month. As you can see, we’re handling nearly a million requests a month.

The second shows the number of people contributing reviews. At about 350 per month this is a tiny fraction compared to the people requesting reviews, but this is to be expected.

The third shows where reviews come from; the notable absence is Ubuntu, but they use their own review system rather than the ODRS. Recently Debian has been growing the fastest, I assume because they at last ship a gnome-software package new enough to support user reviews, but reviews are still coming in fastest from Fedora users. Maybe Fedora users are the kindest in the open source community? Maybe we just shipped the gnome-software package first? :)

One notable thing missing from the ODRS is a community of people moderating reviews; at the moment it’s just me deciding which reviews are indeed abuse, and also fixing up common spelling errors in the submitted text. If this is something you would like to help with, please let me know and I can spend a bit of time adding a user type somewhere in-between benevolent dictator (me) and unauthenticated. Ideas welcome.

Recipes growing team

Posted by Matthias Clasen on May 05, 2017 06:31 PM

With the big push towards 1.0 now over, development in GNOME recipes has moved to a more relaxed pace. But that doesn’t mean that nothing is happening! In fact, our team is growing: we will have two interns joining us this cycle, Ekta and Paxana.

While we are waiting for Ekta and Paxana to start working on the big projects for this cycle (sharing and unit conversion),  a number of smaller improvements have landed and will hopefully appear in a development release soon.

More recipes

We were somewhat successful in getting recipe contributions from the GNOME community.

Thanks to everybody who has contributed – keep it coming!

One consequence of this success is that we have too much data to ship with the application – the tarball for 1.0 was bigger than 100MB. To avoid this problem growing even further, the current development release downloads all recipe and image data at runtime, when needed. I’m very interested in feedback about how well that works.

More cuisines

Another consequence is that we now have so many cuisines represented that they don’t fit on one page anymore.

To address this, I’ve added an expander to show more cuisines.

Another improvement around cuisines is that we now offer all the cuisines that we know about to the cuisine combo box when you are editing recipes. A small step towards a user interface that adapts to your use of the application.

Inline editing

One of the findings of our testing session with Jakub and Tuomas at devconf was that creating a recipe was too fiddly, in particular the popover-heavy editing of the ingredients list.

To address this, we are moving to an inline editing approach for the ingredients list. To make this easier, I first refactored the ingredients list to be a separate widget which is now shared between the edit page and the details page (in a read-only mode).

Ekta is helping me with this.

Row reordering

Another outcome of our testing session was that we need to let the user reorder the ingredients list, which was not possible back in January. For 1.0, we added buttons to move a row up or down, but that was more of a stop-gap solution.

What we really want is to reorder rows by drag-and-drop, so I spent a bit of time recently figuring out drag-and-drop support for list boxes.

Temperature conversion

Last, but not least, we also added some preliminary support for unit conversion.

For now, we can display temperatures in Celsius or Fahrenheit. Currently, this gets determined by a setting, but as the next step, we are going to pick this up from the locale.

Paxana is working on this, as a warm-up for her internship project.

how much of the conformance test suite does radv pass now?

Posted by Dave Airlie on May 04, 2017 04:22 AM
Test run totals:
Passed: 109293/150992 (72.4%)
Failed: 0/150992 (0.0%)
Not supported: 41697/150992 (27.6%)
Warnings: 2/150992 (0.0%)

This is effectively a pass. The "Not supported" stuff isn't missing features, as uneducated people are quick to spout; it's mostly stuff the hardware doesn't support or that is pointless to expose on the hardware (lots of image formats).

This is the results from the Vulkan CTS 1.0.2 branch, against mesa master with one patch (a workaround for some InternalErrors that CTS throws up).

Do not call the driver conformant, as that would be against the Khronos rules since we haven't paid or filed for approval, but the driver does now effectively pass the latest conformance test suite. I'll post an update if that changes.

Thanks again to everyone involved.

Intel's remote AMT vulnerability

Posted by Matthew Garrett on May 01, 2017 10:52 PM
Intel just announced a vulnerability in their Active Management Technology stack. Here's what we know so far.

Background

Intel chipsets for some years have included a Management Engine, a small microprocessor that runs independently of the main CPU and operating system. Various pieces of software run on the ME, ranging from code to handle media DRM to an implementation of a TPM. AMT is another piece of software running on the ME, albeit one that takes advantage of a wide range of ME features.

Active Management Technology

AMT is intended to provide IT departments with a means to manage client systems. When AMT is enabled, any packets sent to the machine's wired network port on port 16992 or 16993 will be redirected to the ME and passed on to AMT - the OS never sees these packets. AMT provides a web UI that allows you to do things like reboot a machine, provide remote install media or even (if the OS is configured appropriately) get a remote console. Access to AMT requires a password - the implication of this vulnerability is that that password can be bypassed.

Remote management

AMT has two types of remote console: emulated serial and full graphical. The emulated serial console requires only that the operating system run a console on that serial port, while the graphical environment requires that the OS set a compatible video mode but is otherwise OS-independent[2]. However, an attacker who enables emulated serial support may be able to use that to configure grub to enable a serial console. Remote graphical console seems to be problematic under Linux, but some people claim to have it working, in which case an attacker would be able to interact with your graphical console as if you were physically present. Yes, this is terrifying.

Remote media

AMT supports providing an ISO remotely. In older versions of AMT (before 11.0) this was in the form of an emulated IDE controller. In 11.0 and later, this takes the form of an emulated USB device. The nice thing about the latter is that any image provided that way will probably be automounted if there's a logged in user, which probably means it's possible to use a malformed filesystem to get arbitrary code execution in the kernel. Fun!

The other part of the remote media is that systems will happily boot off it. An attacker can reboot a system into their own OS and examine drive contents at their leisure. This doesn't let them bypass disk encryption in a straightforward way[1], so you should probably enable that.

How bad is this

That depends. Unless you've explicitly enabled AMT at any point, you're probably fine. The drivers that allow local users to provision the system would require administrative rights to install, so as long as you don't have them installed then the only local users who can do anything are the ones who are admins anyway. If you do have it enabled, though…

How do I know if I have it enabled?

Yeah this is way more annoying than it should be. First of all, does your system even support AMT? AMT requires a few things:

1) A supported CPU
2) A supported chipset
3) Supported network hardware
4) The ME firmware to contain the AMT firmware

Merely having a "vPro" CPU and chipset isn't sufficient - your system vendor also needs to have licensed the AMT code. Under Linux, if lspci doesn't show a communication controller with "MEI" or "HECI" in the description, AMT isn't running and you're safe. If it does show an MEI controller, that still doesn't mean you're vulnerable - AMT may still not be provisioned. If you reboot you should see a brief firmware splash screen mentioning the ME. Hitting ctrl+p at this point should get you into a menu which should let you disable AMT.
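A quick way to do that first check from a running Linux system is a one-liner along these lines (no matching output is good news):

$ lspci | grep -i -e mei -e heci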

How about over Wifi?

Turning on AMT doesn't automatically turn it on for wifi. AMT will also only connect itself to networks it's been explicitly told about. Where things get more confusing is that once the OS is running, responsibility for wifi is switched from the ME to the OS and it forwards packets to AMT. I haven't been able to find good documentation on whether having AMT enabled for wifi results in the OS forwarding packets to AMT on all wifi networks or only ones that are explicitly configured.

What do we not know?

We have zero information about the vulnerability, other than that it allows unauthenticated access to AMT. One big thing that's not clear at the moment is whether this affects all AMT setups, setups that are in Small Business Mode, or setups that are in Enterprise Mode. If the latter, the impact on individual end-users will be basically zero - Enterprise Mode involves a bunch of effort to configure and nobody's doing that for their home systems. If it affects all systems, or just systems in Small Business Mode, things are likely to be worse.

Update: we now know that the vulnerability exists in all configurations.

What should I do?

Make sure AMT is disabled. If it's your own computer, you should then have nothing else to worry about. If you're a Windows admin with untrusted users, you should also disable or uninstall LMS by following these instructions.

Does this mean every Intel system built since 2008 can be taken over by hackers?

No. Most Intel systems don't ship with AMT. Most Intel systems with AMT don't have it turned on.

Does this allow persistent compromise of the system?

Not in any novel way. An attacker could disable Secure Boot and install a backdoored bootloader, just as they could with physical access.

But isn't the ME a giant backdoor with arbitrary access to RAM?

Yes, but there's no indication that this vulnerability allows execution of arbitrary code on the ME - it looks like it's just (ha ha) an authentication bypass for AMT.

Is this a big deal anyway?

Yes. Fixing this requires a system firmware update in order to provide new ME firmware (including an updated copy of the AMT code). Many of the affected machines are no longer receiving firmware updates from their manufacturers, and so will probably never get a fix. Anyone who ever enables AMT on one of these devices will be vulnerable. That's ignoring the fact that firmware updates are rarely flagged as security critical (they don't generally come via Windows update), so even when updates are made available, users probably won't know about them or install them.

Avoiding this kind of thing in future

Users ought to have full control over what's running on their systems, including the ME. If a vendor is no longer providing updates then it should at least be possible for a sufficiently desperate user to pay someone else to do a firmware build with the appropriate fixes. Leaving firmware updates at the whims of hardware manufacturers who will only support systems for a fraction of their useful lifespan is inevitably going to end badly.

How certain are you about any of this?

Not hugely - the quality of public documentation on AMT isn't wonderful, and while I've spent some time playing with it (and related technologies) I'm not an expert. If anything above seems inaccurate, let me know and I'll fix it.

[1] Eh well. They could reboot into their own OS, modify your initramfs (because that's not signed even if you're using UEFI Secure Boot) such that it writes a copy of your disk passphrase to /boot before unlocking it, wait for you to type in your passphrase, reboot again and gain access. Sealing the encryption key to the TPM would avoid this.

[2] Updated after this comment - I thought I'd fixed this before publishing but left that claim in by accident.

(Updated to add the section on wifi)

(Updated to typo replace LSM with LMS)

(Updated to indicate that the vulnerability affects all configurations)


Looking at the Netgear Arlo home IP camera

Posted by Matthew Garrett on April 30, 2017 05:09 AM
Another in the series of looking at the security of IoT type objects. This time I've gone for the Arlo network-connected cameras produced by Netgear, specifically the stock Arlo base system with a single camera. The base station is based on a Broadcom 5358 SoC with an 802.11n radio, along with a single Broadcom gigabit ethernet interface. Other than it only having a single ethernet port, this looks pretty much like a standard Netgear router. There's a convenient unpopulated header on the board that turns out to be a serial console, so getting a shell is only a few minutes' work.

Normal setup is straightforward. You plug the base station into a router, wait for all the lights to come on and then you visit arlo.netgear.com and follow the setup instructions - by this point the base station has connected to Netgear's cloud service and you're just associating it to your account. Security here is straightforward: you need to be coming from the same IP address as the Arlo. For most home users with NAT this works fine. I sat frustrated as it repeatedly failed to find any devices, before finally moving everything behind a backup router (my main network isn't NATted) for initial setup. Once you and the Arlo are on the same IP address, the site shows you the base station's serial number for confirmation and then you attach it to your account. Next step is adding cameras. Each base station is broadcasting an 802.11 network on the 2.4GHz spectrum. You connect a camera by pressing the sync button on the base station and then the sync button on the camera. The camera associates with the base station via WPS and now you're up and running.

This is the point where I get bored and stop following instructions, but if you're using a desktop browser (rather than using the mobile app) you appear to need Flash in order to actually see any of the camera footage. Bleah.

But back to the device itself. The first thing I traced was the initial device association. What I found was that once the device is associated with an account, it can't be attached to another account. This is good - I can't simply request that devices be rebound to my account from someone else's. Further, while the serial number is displayed to the user to disambiguate between devices, it doesn't seem to be what's used internally. Tracing the logon traffic from the base station shows it sending a long random device ID along with an authentication token. If you perform a factory reset, these values are regenerated. The device to account mapping seems to be based on this random device ID, which means that once the device is reset and bound to another account there's no way for the initial account owner to regain access (other than resetting it again and binding it back to their account). This is far better than many devices I've looked at.

Performing a factory reset also changes the WPA PSK for the camera network. Newsky Security discovered that doing so originally reset it to 12345678, which is, uh, suboptimal? That's been fixed in newer firmware, along with their discovery that the original random password choice was not terribly random.

All communication from the base station to the cloud seems to be over SSL, and everything validates certificates properly. This also seems to be true for client communication with the cloud service - camera footage is streamed back over port 443 as well.

Most of the functionality of the base station is provided by two daemons, xagent and vzdaemon. xagent appears to be responsible for registering the device with the cloud service, while vzdaemon handles the camera side of things (including motion detection). All of this is running as root, so in the event of any kind of vulnerability the entire platform is owned. For such a single purpose device this isn't really a big deal (the only sensitive data it has is the camera feed - if someone has access to that then root doesn't really buy them anything else). They're statically linked and stripped so I couldn't be bothered spending any significant amount of time digging into them. In any case, they don't expose any remotely accessible ports and only connect to services with verified SSL certificates. They're probably not a big risk.

Other than the dependence on Flash, there's nothing immediately concerning here. What is a little worrying is a family of daemons running on the device and listening on various high-numbered UDP ports. These appear to be provided by Broadcom and to be a standard part of all their router platforms - they're intended for handling various bits of wireless authentication. It's not clear why they're listening on 0.0.0.0 rather than 127.0.0.1, and it's not obvious whether they're vulnerable (they mostly appear to receive packets from the driver itself, process them and then stick packets back into the kernel so who knows what's actually going on), but since you can't set one of these devices up in the first place without it being behind a NAT gateway it's unlikely to be of real concern to most users. On the other hand, the same daemons seem to be present on several Broadcom-based router platforms where they may end up being visible to the outside world. That's probably investigation for another day, though.

Overall: pretty solid, frustrating to set up if your network doesn't match their expectations, wouldn't have grave concerns over having it on an appropriately firewalled network.

(Edited to replace a mistaken reference to WDS with WPS)


LibreOffice in The Matrix [m]

Posted by Eike Rathke on April 25, 2017 08:45 PM
If you use the Riot app (or any other) to connect to Matrix and communicate, there's now a LibreOffice room that is bridged with the #libreoffice IRC channel on freenode.net. IRC channels have been bridged to Matrix for some time, but so far you had to type
/join #freenode_#libreoffice:matrix.org
in your Matrix mobile app, or know in the browser app that the bridge exists and select it from the list. Now you can just search the available Matrix rooms for LibreOffice and join. This should be a convenient way to join a chat with other LibreOffice users for people who otherwise don't use IRC, or don't want to install an IRC app just for this on their mobile, smartphone or tablet.

The #libreoffice IRC channel and thus the LibreOffice matrix room is dedicated to user questions around all LibreOffice applications. Join and enjoy, get help and help others.

Reverse engineering ComputerHardwareIds.exe with winedbg

Posted by Richard Hughes on April 25, 2017 07:49 PM

In an ideal world vendors could use the same GUID value for hardware matching in Windows and Linux firmware. When installing firmware and drivers in Windows, vendors can always use some generated HardwareID GUIDs that match useful things like the BIOS vendor and the product SKU. It would make sense to use the same scheme as Microsoft. There are a few issues in an otherwise simple plan.

The first, solved with a simple kernel patch I wrote (awaiting review by Jean Delvare), exposes a few more SMBIOS fields into /sys/class/dmi/id that are required for the GUID calculation.

The second problem is a little more tricky. We don’t actually know how Microsoft joins the strings, what encoding is used, or more importantly the secret namespace UUID used to seed the GUID. The only thing we have got is the closed source ComputerHardwareIds.exe program in the Windows DDK. This, luckily, runs in Wine although Wine isn’t able to get the system firmware data itself. This can be worked around, and actually makes testing easier.

So, some research. All we know from the MSDN page is that “Each hardware ID string is converted into a GUID by using the SHA-1 hashing algorithm”, which actually tells us quite a bit. Generating a GUID from a SHA-1 hash means this has to be a type 5 UUID.

The reference code for a type-5 UUID is helpfully available in the IETF RFC document so it’s quite quick to get started with research. From a few minutes of searching online, the most likely symbols the program will be using are the BCrypt* set of functions. From the RFC code, we call the checksum generation update function with first the encoded namespace (aha!) and then the encoded joined string (ahaha!). For Win32 programs, BCryptHashData is the function we want to trace.

So, to check:

wine /home/hughsie/ComputerHardwareIds.exe /mfg "To be filled by O.E.M."

…matches the reference HardwareID-14 output from Microsoft. So, on to debugging: using +relay shows all the calling values and return values from each Win32 exported symbol:

WINEDEBUG=+relay winedbg --gdb ~/ComputerHardwareIds.exe
Wine-gdb> b BCryptHashData
Wine-gdb> r ~/ComputerHardwareIds.exe /mfg "To be filled by O.E.M." /family "To be filled by O.E.M."
005b:Call bcrypt.BCryptHashData(0011bab8,0033fcf4,00000010,00000000) ret=0100699d
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> 

Great, so this is the secret namespace. The first parameter is the context, the second is the data address, the third is the length (0x10 is the size of a UUID, just right for a namespace) and the fourth is the flags — so let’s print out the data so we can see what it is:

Wine-gdb> x/16xb 0x0033fcf4
0x33fcf4:	0x70	0xff	0xd8	0x12	0x4c	0x7f	0x4c	0x7d
0x33fcfc:	0x00	0x00	0x00	0x00	0x00	0x00	0x00	0x00

Using either the uuid module in Python, or uuid_unparse in libuuid, we can format the namespace as 70ffd812-4c7f-4c7d-0000-000000000000 — now this doesn’t look like a randomly generated UUID to me! On to the next thing, the encoding and joining policy:

Wine-gdb> c
005f:Call bcrypt.BCryptHashData(0011bb90,00341458,0000005a,00000000) ret=010069b3
Breakpoint 1, 0x7ffd85f8 in BCryptHashData () from /lib/wine/bcrypt.dll.so
Wine-gdb> x/90xb 0x00341458
0x341458:	0x54	0x00	0x6f	0x00	0x20	0x00	0x62	0x00
0x341460:	0x65	0x00	0x20	0x00	0x66	0x00	0x69	0x00
0x341468:	0x6c	0x00	0x6c	0x00	0x65	0x00	0x64	0x00
0x341470:	0x20	0x00	0x62	0x00	0x79	0x00	0x20	0x00
0x341478:	0x4f	0x00	0x2e	0x00	0x45	0x00	0x2e	0x00
0x341480:	0x4d	0x00	0x2e	0x00	0x26	0x00	0x54	0x00
0x341488:	0x6f	0x00	0x20	0x00	0x62	0x00	0x65	0x00
0x341490:	0x20	0x00	0x66	0x00	0x69	0x00	0x6c	0x00
0x341498:	0x6c	0x00	0x65	0x00	0x64	0x00	0x20	0x00
0x3414a0:	0x62	0x00	0x79	0x00	0x20	0x00	0x4f	0x00
0x3414a8:	0x2e	0x00	0x45	0x00	0x2e	0x00	0x4d	0x00
0x3414b0:	0x2e	0x00
Wine-gdb> q

So there we go. The encoding looks like UTF-16-LE (as expected, much of the Windows API is this way) and the joining character seems to be &.
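Putting the pieces together, here is a rough Python sketch of what the tool appears to be doing, assuming the traced namespace, the UTF-16-LE encoding and the & join character are the whole story (note that Microsoft GUIDs are traditionally mixed-endian, so the final formatting may need byte-swapping to match the tool’s output exactly):

import hashlib
import uuid

# The (apparently secret) namespace traced from BCryptHashData above
NAMESPACE = uuid.UUID('70ffd812-4c7f-4c7d-0000-000000000000')

def hardware_id(*fields):
    # Join the SMBIOS strings with '&' and encode as UTF-16-LE,
    # matching the memory dump above
    name = '&'.join(fields).encode('utf-16-le')
    # Type-5 UUID per RFC 4122: SHA-1 over namespace bytes + name
    digest = hashlib.sha1(NAMESPACE.bytes + name).digest()
    raw = bytearray(digest[:16])
    raw[6] = (raw[6] & 0x0f) | 0x50  # set the version field to 5
    raw[8] = (raw[8] & 0x3f) | 0x80  # set the RFC 4122 variant bits
    return uuid.UUID(bytes=bytes(raw))

print(hardware_id('To be filled by O.E.M.', 'To be filled by O.E.M.'))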

I’ve written some code in fwupd so that this happens:

$ fwupdmgr hwids
Computer Information
--------------------
BiosVendor: LENOVO
BiosVersion: GJET75WW (2.25 )
Manufacturer: LENOVO
Family: ThinkPad T440s
ProductName: 20ARS19C0C
ProductSku: LENOVO_MT_20AR_BU_Think_FM_ThinkPad T440s
EnclosureKind: 10
BaseboardManufacturer: LENOVO
BaseboardProduct: 20ARS19C0C

Hardware IDs
------------
{c4159f74-3d2c-526f-b6d1-fe24a2fbc881}   <- Manufacturer + Family + ProductName + ProductSku + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{ff66cb74-5f5d-5669-875a-8a8f97be22c1}   <- Manufacturer + Family + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{2e4dad4e-27a0-5de0-8e92-f395fc3fa5ba}   <- Manufacturer + ProductName + BiosVendor + BiosVersion + BiosMajorRelease + BiosMinorRelease
{3faec92a-3ae3-5744-be88-495e90a7d541}   <- Manufacturer + Family + ProductName + ProductSku + BaseboardManufacturer + BaseboardProduct
{660ccba8-1b78-5a33-80e6-9fb8354ee873}   <- Manufacturer + Family + ProductName + ProductSku
{8dc9b7c5-f5d5-5850-9ab3-bd6f0549d814}   <- Manufacturer + Family + ProductName
{178cd22d-ad9f-562d-ae0a-34009822cdbe}   <- Manufacturer + ProductSku + BaseboardManufacturer + BaseboardProduct
{da1da9b6-62f5-5f22-8aaa-14db7eeda2a4}   <- Manufacturer + ProductSku
{059eb22d-6dc7-59af-abd3-94bbe017f67c}   <- Manufacturer + ProductName + BaseboardManufacturer + BaseboardProduct
{0cf8618d-9eff-537c-9f35-46861406eb9c}   <- Manufacturer + ProductName
{f4275c1f-6130-5191-845c-3426247eb6a1}   <- Manufacturer + Family + BaseboardManufacturer + BaseboardProduct
{db73af4c-4612-50f7-b8a7-787cf4871847}   <- Manufacturer + Family
{5e820764-888e-529d-a6f9-dfd12bacb160}   <- Manufacturer + EnclosureKind
{f8e1de5f-b68c-5f52-9d1a-f1ba52f1f773}   <- Manufacturer + BaseboardManufacturer + BaseboardProduct
{6de5d951-d755-576b-bd09-c5cf66b27234}   <- Manufacturer

Which basically matches the output of ComputerHardwareIds.exe on the same hardware. If the kernel patch gets into the next release I’ll merge the fwupd branch to master and allow vendors to start using the Microsoft HardwareID GUID values.

Meson considerations

Posted by Matthias Clasen on April 20, 2017 08:38 PM

A post with my GNOME release team hat on…

Meson is new and cool

A number of GNOME modules are switching to meson for 3.26. I myself was an early adopter for this: recipes has had meson build support since the beginning of the year, and after the 1.0 release, I’ve dropped autotools support on the master branch.

autotools are of course very familiar to most of us, and we know how to get most things done there. But it often isn’t pretty, and using meson feels like a breath of fresh air. Others have been praising meson for its simplicity, ease of use and speed, so I am not going to repeat that here.

Supported tools

jhbuild is our traditional build support tool, and it has had well-working meson support for a while now. GNOME Builder and flatpak-builder also both support meson, and we include meson in the GNOME SDK for 3.24.

So, for developers, meson support is more or less there, and working well.

Transition woes

So, things are pretty awesome all around: we have a new build system, it is shiny and fast and supported. Sounds too good to be true. What’s the catch?

One thing that meson does not do is building traditional ‘make dist’ style tarballs. The premise is that you can just build your software from a git tag or from a snapshot produced by git-archive.

While that is true, and maybe a direction we want to be going in for the future, there are plenty of build systems out there that expect you to provide a tarball or similar archive. That is true for Fedora’s Koji, and it is also true for the form in which we currently produce GNOME releases.

A GNOME release is essentially defined by a jhbuild module file (several of them, in fact) which refers to release tarballs for all of our modules, including checksums and sizes.  For core GNOME modules, these tarballs are generally put in place using a tool called ftpadmin. As I’ve recently found out, ftpadmin is a little picky. It expects the content in the tarball to be in a directory that’s named in module-version style and will error out if that is not the case.

Thankfully, git archive is up to the task. Here is what I did to produce a recent gnome-recipes release:

git tag -m 1.2.0 1.2.0
git archive --prefix gnome-recipes-1.2.0/ \
            -o gnome-recipes-1.2.0.tar.xz \
            1.2.0

Some unsolved problems remain. For example, we have not decided on how to handle library documentation in the new meson world. The way this works with autotools is that the tarballs include generated docs, which get extracted and post-processed by some scripts before they end up on developer.gnome.org. But git archive snapshots contain no generated documentation…  So far, no library that we host documentation for has made the jump to meson-only builds, so we still have some time to come up with a different solution.

Overall, I am really excited that we are embracing meson!

Update: My discussion of archives failed to consider git submodules. git-archive does not handle those, so my recommendation will not produce a working snapshot if you use submodules. See nautilus’ make_release.sh script for how to handle that.

Mailing list for fwupd and the LVFS

Posted by Richard Hughes on April 13, 2017 08:56 AM

I’ve created a mailing list for fwupd and LVFS discussions. If you’re interested in firmware updating on Linux, or want to know what’s happening on the Linux Vendor Firmware Service you probably want to join. There are a few interesting things I’ll post in a few days.

Disabling SSL validation in binary apps

Posted by Matthew Garrett on April 11, 2017 10:27 PM
Reverse engineering protocols is a great deal easier when they're not encrypted. Thankfully most apps I've dealt with have been doing something convenient like using AES with a key embedded in the app, but others use remote protocols over HTTPS and that makes things much less straightforward. MITMProxy will solve this, as long as you're able to get the app to trust its certificate, but if there's a built-in pinned certificate that's going to be a pain. So, given an app written in C running on an embedded device, and without an easy way to inject new certificates into that device, what do you do?

First: The app is probably using libcurl, because it's free, works and is under a license that allows you to link it into proprietary apps. This is also bad news, because libcurl defaults to having sensible security settings. In the worst case we've got a statically linked binary with all the symbols stripped out, so we're left with the problem of (a) finding the relevant code and (b) replacing it with modified code. Fortunately, this is much less difficult than you might imagine.

First, let's find where curl sets up its defaults. Curl_init_userdefined() in curl/lib/url.c has the following code:
set->ssl.primary.verifypeer = TRUE;
set->ssl.primary.verifyhost = TRUE;
#ifdef USE_TLS_SRP
set->ssl.authtype = CURL_TLSAUTH_NONE;
#endif
set->ssh_auth_types = CURLSSH_AUTH_DEFAULT; /* defaults to any auth type */
set->general_ssl.sessionid = TRUE; /* session ID caching enabled by default */
set->proxy_ssl = set->ssl;

set->new_file_perms = 0644; /* Default permissions */
set->new_directory_perms = 0755; /* Default permissions */

TRUE is defined as 1, so we want to change the code that currently sets verifypeer and verifyhost to 1 to instead set them to 0. How to find it? Look further down - new_file_perms is set to 0644 and new_directory_perms is set to 0755. The leading 0 indicates octal, so these correspond to decimal 420 and 493. Passing the file to objdump -d (assuming a build of objdump that supports this architecture) will give us a disassembled version of the code, so time to fix our problems with grep:
objdump -d target | grep --after=20 ,420 | grep ,493

This gives us the disassembly of target, searches for any occurrence of ",420" (indicating that 420 is being used as an argument in an instruction), prints the following 20 lines and then searches for a reference to 493. It spits out a single hit:
43e864: 240301ed li v1,493
Which is promising. Looking at the surrounding code gives:
43e820: 24030001 li v1,1
43e824: a0430138 sb v1,312(v0)
43e828: 8fc20018 lw v0,24(s8)
43e82c: 24030001 li v1,1
43e830: a0430139 sb v1,313(v0)
43e834: 8fc20018 lw v0,24(s8)
43e838: ac400170 sw zero,368(v0)
43e83c: 8fc20018 lw v0,24(s8)
43e840: 2403ffff li v1,-1
43e844: ac4301dc sw v1,476(v0)
43e848: 8fc20018 lw v0,24(s8)
43e84c: 24030001 li v1,1
43e850: a0430164 sb v1,356(v0)
43e854: 8fc20018 lw v0,24(s8)
43e858: 240301a4 li v1,420
43e85c: ac4301e4 sw v1,484(v0)
43e860: 8fc20018 lw v0,24(s8)
43e864: 240301ed li v1,493
43e868: ac4301e8 sw v1,488(v0)

Towards the end we can see 493 being loaded into v1, and v1 then being copied into an offset from v0. This looks like a structure member being set to 493, which is what we expected. Above that we see the same thing being done to 420. Further up we have some more stuff being set, including a -1 - that corresponds to CURLSSH_AUTH_DEFAULT, so we seem to be in the right place. There's a zero above that, which corresponds to CURL_TLSAUTH_NONE. That means that the two 1 operations above the -1 are the code we want, and simply changing 43e820 and 43e82c to 24030000 instead of 24030001 means that our targets will be set to 0 (ie, FALSE) rather than 1 (ie, TRUE). Copy the modified binary back to the device, run it and now it happily talks to MITMProxy. Huge success.
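For what it's worth, the patch itself can be scripted; here is a hypothetical Python sketch, assuming a big-endian MIPS binary in which the file offsets happen to match the addresses objdump prints (if they don't, translate the virtual addresses through the ELF program headers first):

import struct

# Instruction words to patch: li v1,1 (24030001) becomes li v1,0 (24030000)
PATCHES = {0x43e820: 0x24030000,
           0x43e82c: 0x24030000}

with open('target', 'r+b') as f:
    for offset, insn in PATCHES.items():
        f.seek(offset)
        f.write(struct.pack('>I', insn))  # big-endian 32-bit word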

(If the app calls Curl_setopt() to reconfigure the state of these values, you'll need to stub those out as well - thankfully, recent versions of curl include a convenient string "CURLOPT_SSL_VERIFYHOST no longer supports 1 as value!" in this function, so if the code in question is using semi-recent curl it's easy to find. Then it's just a matter of looking for the constants that CURLOPT_SSL_VERIFYHOST and CURLOPT_SSL_VERIFYPEER are set to, following the jumps and hacking the code to always set them to 0 regardless of the argument)


A quick look at the Ikea Trådfri lighting platform

Posted by Matthew Garrett on April 09, 2017 12:16 AM
Ikea recently launched their Trådfri smart lighting platform in the US. The idea of Ikea plus internet security together at last seems like a pretty terrible one, but having taken a look it's surprisingly competent. Hardware-wise, the device is pretty minimal - it seems to be based on the Cypress[1] WICED IoT platform, with 100MBit ethernet and a Silicon Labs Zigbee chipset. It's running the Express Logic ThreadX RTOS, has no running services on any TCP ports and appears to listen on just two UDP ports. As IoT devices go, it's pleasingly minimal.

The first of those ports seems to be a COAP server running with DTLS and a pre-shared key that's printed on the bottom of the device. When you start the app for the first time it prompts you to scan a QR code that's just a machine-readable version of that key. The Android app has code for using the insecure COAP port rather than the encrypted one, but the device doesn't respond to queries there, so it's presumably disabled in release builds. It's also local only, with no cloud support. You can program timers, but they run on the device. The only other service it seems to run is an mdns responder, which responds to the _coap._udp.local query to allow for discovery.
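If you want to poke at the discovery side yourself, the mdns responder should be visible with something like this (assuming avahi-utils is installed):

$ avahi-browse -r _coap._udp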

From a security perspective, this is pretty close to ideal. Having no remote APIs means that security is limited to what's exposed locally. The local traffic is all encrypted. You can only authenticate with the device if you have physical access to read the (decently long) key off the bottom. I haven't checked whether the DTLS server is actually well-implemented, but it doesn't seem to respond unless you authenticate first which probably covers off a lot of potential risks. The SoC has wireless support, but it seems to be disabled - there's no antenna on board and no mechanism for configuring it.

However, there's one minor issue. On boot the device grabs the current time from pool.ntp.org (fine) but also hits http://fw.ota.homesmart.ikea.net/feed/version_info.json . That file contains a bunch of links to firmware updates, all of which are also downloaded over http (and not https). The firmware images themselves appear to be signed, but downloading untrusted objects and then parsing them isn't ideal. Realistically, this is only a problem if someone already has enough control over your network to mess with your DNS, and being wired-only makes this pretty unlikely. I'd be surprised if it's ever used as a real avenue of attack.

Overall: as far as design goes, this is one of the most secure IoT-style devices I've looked at. I haven't examined the COAP stack in detail to figure out whether it has any exploitable bugs, but the attack surface is pretty much as minimal as it could be while still retaining any functionality at all. I'm impressed.

[1] Formerly Broadcom


inputfd - a protocol for direct access to input devices in wayland

Posted by Peter Hutterer on April 01, 2017 06:12 AM

This is a higher-level explanation of the inputfd protocol RFC I sent to the list on March 31. Note that this is a first draft; the protocol may never see the light of day, or may be significantly altered before it lands.

First, what is it? inputfd is a protocol for a Wayland compositor to pass a file descriptor (fd) for an input device directly to the client. The client can then read events off this fd and process them, without any additional buffering in between. In the ideal case, the compositor doesn't care (or even notice) that events are flowing between the kernel and the client. Because the compositor sets up the fd, the client does not need any special privileges.

Why is this needed? There are a few input devices that will not be handled by libinput. libinput is the stack for those devices that have a direct interaction with the desktop - mice, keyboards, touchpads, ... But plenty of devices do not want or require desktop interactions. Joysticks/gamepads are the prime example here, but 3D mice or VR input devices also come to mind. These devices don't control the cursor on the desktop, so the compositor doesn't care about the device's events. Note that the current draft only caters for joysticks; 3D mice etc. are TBD.

Why not handle these devices in libinput? Joysticks in particular are so varied that a library like libinput would have to increase the API surface area massively to cater for every possibility. And in the end it is likely that libinput would merely buffer events and pass them on almost as-is anyway. From a maintenance POV - I don't want to deal with that. And from a technical POV - there's little point in having libinput in between anyway. Furthermore, it's already the case that gaming devices are opened directly by the application rather than going through X. So just connecting clients directly with the device has advantages.

What does the compositor do? The compositor has two jobs: device filtering and focus management. In the current draft, a client can say "give me all gaming devices" but it's the compositor that decides which device is classified as such (see below for details). Depending on seat assignment or other policies, a client may only see a fraction of the devices currently connected.

Focus management is relatively trivial: the compositor decides that a client has focus now (e.g. by clicking into the window) and hands that client an fd. On focus out (e.g. alt-tab) the fd is revoked and the client cannot read any more events. Rinse, wash, repeat as the focus changes. Note that the protocol does not define how the focus is decided, that task is up to the compositor. Having focus management in the compositor gives us a simple benefit: the client can't hog the device or read events when it's out of focus. This makes it easy to have multiple clients running that need access to the same device without messing up state in a backgrounded client. The security benefit is minimal, few people enter their password with a joystick. But it's still a warm fuzzy feeling to know that a client cannot read events when it's not supposed to.

What devices are applicable devices? This is one of the TBDs of the protocol. The compositor needs to know whether a device is a gaming device or not, but the default udev tag ID_INPUT_JOYSTICK is too crude and provides false positives (e.g. some Wacom tablets have that tag set). The conversation is currently going towards having a database that can define this better, but this point is far from decided.

There are a couple of other points that are still being discussed. If you have knowledge in game (framework) development, do join the discussion, we need input. Personally I am far from a game developer, so I cannot fathom all the weird corner cases we may have to deal with here. Flying blind while designing a protocol is fun, but not so much when you then have to maintain it for the next 10 years. So some navigational input would be helpful.

And just to make sure you join the right discussion, I've disabled comments here :)

GTK+ happenings

Posted by Matthias Clasen on March 31, 2017 08:10 PM

This will be a longer post summarizing some of the discussions and decisions at the recent GTK+ hackfest in London.

I’ve just released GTK+ 3.90, as a milestone on our way towards GTK+ 4. This post should explain not just what is or isn’t in that release, but also where we are on the journey towards GTK+ 4, and what changes are still in the pipeline.

The reason for doing a 3.90 release now and not just another 3.89.x milestone is that we want to encourage experimental ports of some GTK+ 3 applications at this point, to gather feedback.

If you want to do this for your application, you should be aware that we are not promising more than six months of API stability yet. There are still a number of unfinished transitions and fundamental changes (such as dropping subwindows, or moving container APIs to GtkWidget) that can and will affect application APIs before GTK+ 4.

I am willing to do a few 3.90.x snapshots if there is interest, but you should also be prepared to keep your application building against GTK+ 3, since 3.90 may not be widely available in distributions.

Hackfest happenings

We had 3 productive days in London last week. The hackfest was a bit different from previous GTK+ get-togethers, since we had a bigger and more diverse audience, which meant we had less down-in-the-details discussion about GTK+ internals.

Instead, some of us spent time discussing portals, Flatpak, build services and other things – but I will focus on the GTK+ parts in this post.

The GTK+ discussion was sometimes a bit higher-level, with topics like graphics pipelines and how we use them in GSK, or constraints and how they can fit into the GTK+ layout machinery.

I’ve personally enjoyed this change of pace quite a bit, and hope we can repeat this style of event in the future.

The details: GSK

We spent a while reviewing how far we have come with rebasing the GTK+ rendering onto GSK (and thus, GL / Vulkan). I won’t go into excessive details here, but we have both a Vulkan and a GL renderer for GSK.

The Vulkan renderer has shader implementations for many of the performance-relevant elements of CSS that make up GTK+ widgets. One major omission is that we still don’t handle text natively and fall back to cairo rendering and texture upload for it. We decided to look at cogl-pango and lift the necessary texture atlas and pango renderer from there, as far as possible.

The GL renderer is much less complete, since Benjamin has focused on implementing things on the Vulkan side at first. The plan here is to finish the Vulkan side first and then port things over to GL. One difference between the backends that we discussed at some length is that while changing pipelines is cheap with Vulkan, that may not be the case on the GL side, so we may need an optimizing pass over the render node tree to minimize state changes.

The details: constraints

Emmanuele gave a demo of Emeus, which is his implementation of constraints-based layout for GTK+. This demo left me feeling quite positive – everybody seems to be moving in the direction of constraints-based layout.

Emeus also supports Apple’s shorthand notation for constraints, so it should be familiar to many people who don’t have experience with GTK+’s nested boxes model. If we can integrate this in time for GTK+ 4, it could make a real difference for how GTK+ applications are designed and implemented.

In the discussion after the demo, we touched on questions of how we could possibly reimplement existing GTK+ containers in terms of constraints, and how we can take even more advantage of constraints by making Emeus composable.

Of course, there are questions of how far this approach will scale – could you put all of gtk-widget-factory into a single system of constraints and solve it while resizing the window? We will find out…

The details: input

One of the reasons why we are moving to GL-based rendering is that we want to enable more pervasive animations in many places. One example that we've discussed is animating the showing and hiding of list or grid elements in a filtered list box or flow box. Doing this smoothly at 60 frames per second requires that we can move and transform elements without causing full relayouts, which in turn means that we need to separate positions from sizes, which is only practical in a world without subwindows.

Therefore, one of the milestones on our journey towards GTK+ 4 is to move event handling away from using subwindows for routing input events to their target. Once we have taken this step, it will also be possible to take transformations into account.

Carlos just posted a branch that is taking the first steps in this direction.

The details: children, nodes and gadgets

I’ve already talked about Emeus and constraints, so I won’t repeat that here. But there are some other changes in the way GTK+ widgets work that are worth mentioning here.

We are moving away from GtkContainer as the parent class for containers. Instead, we now allow any widget to have children. The APIs for this are more DOM-like than GtkContainer: gtk_widget_get_first_child, gtk_widget_get_last_child, gtk_widget_get_next_sibling, gtk_widget_get_prev_sibling.
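
For example, iterating over a widget's children becomes a plain linked-list walk. A minimal sketch using the new API (for_each_child is a made-up helper, and details may still shift before GTK+ 4):

#include <gtk/gtk.h>

/* Walk all direct children of a widget with the DOM-like API. */
static void
for_each_child (GtkWidget *widget)
{
  for (GtkWidget *child = gtk_widget_get_first_child (widget);
       child != NULL;
       child = gtk_widget_get_next_sibling (child))
    {
      /* do something with each child, e.g. request a redraw */
      gtk_widget_queue_draw (child);
    }
}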

In GTK+ 3.20 and afterwards, we introduced gadgets as an internal construct that allowed us to clean up our widgets' adherence to the CSS box model. This was always meant as a stop-gap solution for pre-existing widgets that were just too complicated to fit the CSS box model well. Now that any widget can have children, we have begun to replace some of these gadgets with regular widgets.

For example, GtkSwitch now has three child widgets: the slider and two labels.

Summary

We’ve had a successful hackfest and GTK+ 4 development is on track, with 3.90 as a visible milestone. Try it out!

Announcing the Shim review process

Posted by Matthew Garrett on March 21, 2017 08:29 PM
Shim has been hugely successful, to the point of being used by the majority of significant Linux distributions and many other third party products (even, apparently, Solaris). The aim was to ensure that it would remain possible to install free operating systems on UEFI Secure Boot platforms while still allowing machine owners to replace their bootloaders and kernels, and it's achieved this goal.

However, a legitimate criticism has been that there's very little transparency in Microsoft's signing process. Some people have waited for significant periods of time before receiving a response. A large part of this is simply that demand has been greater than expected, and Microsoft aren't in the best position to review code that they didn't write in the first place.

To that end, we're adopting a new model. A mailing list has been created at shim-review@lists.freedesktop.org, and members of this list will review submissions and provide a recommendation to Microsoft on whether these should be signed or not. The current set of expectations around binaries to be signed is documented here, and the current process here - it is expected that this will evolve slightly as we get used to the process, and we'll provide a more formal set of documentation once things have settled down.

This is a new initiative and one that will probably take a little while to get working smoothly, but we hope it'll make it much easier to get signed releases of Shim out without compromising security in the process.

comment count unavailable comments

Buying a Utah teapot

Posted by Matthew Garrett on March 20, 2017 08:45 PM
The Utah teapot was one of the early 3D reference objects. It's canonically a Melitta but hasn't been part of their range in a long time, so I'd been watching Ebay in the hope of one turning up. Until last week, when I discovered that a company called Friesland had apparently bought a chunk of Melitta's range some years ago and sell the original teapot[1]. I've just ordered one, and am utterly unreasonably excited about this.

Update: Friesland have apparently always produced the Utah teapot, but were part of the Melitta group for some time - they didn't buy the range from Melitta.

[1] They have them in 0.35, 0.85 and 1.4 litre sizes. I believe (based on the measurements here) that the 1.4 litre one matches the Utah teapot.

comment count unavailable comments

how close to conformant is radv?

Posted by Dave Airlie on March 20, 2017 07:26 AM
I spent some time staring into the results of the VK-GL-CTS test suite on radv, which contains the Vulkan 1.0 conformance tests.

In order to be conformant you have to pass all the tests on the mustpass list for the Vulkan version you want to conform to, from the branch of the test suite for that version.

The latest CTS tests for 1.0 is the vulkan-cts-1.0.2 branch, and the mustpass list is in external/vulkancts/mustpass/1.0.2/vk-default.txt

Using some WIP radv patches in my github radv-wip-conform branch and the 1.0.2 test suite, today's results are on my Tonga GPU:

Test run totals:
Passed: 82551/150950 (54.7%)
Failed: 0/150950 (0.0%)
Not supported: 68397/150950 (45.3%)
Warnings: 2/150950 (0.0%)

That is pretty conformant (in fact it would pass as-is). However I need to clean up the patches in the branch and maybe figure out how to do some bits properly without hacks (particularly some semaphore wait tweaks), but that is most of the work done.

Thanks again to Bas and all other radv contributors.

Recipes 1.0

Posted by Matthias Clasen on March 17, 2017 02:31 PM

Recipes 1.0 is here, in time for GNOME 3.24 next week. You can get it here:

https://download.gnome.org/sources/gnome-recipes/1.0/

A flatpak is available here:

https://matthiasclasen.github.io/recipes-releases/gnome-recipes.flatpakref

and can be installed with

flatpak install https://matthiasclasen.github.io/recipes-releases/gnome-recipes.flatpakref

Thanks to everybody who helped us to reach this point by contributing recipes, sending patches, translations or bug reports!

Documentation

Recipes looks pretty good in GNOME Software already, but one thing is missing: the lack of documentation costs us a perfect rating. Thankfully, Paul Cutler has shown up and started to fill this gap, so we can get the last icon turned blue with the next release.

Since one of the goals of Recipes is to be an exemplary Flatpak app, I took this opportunity to investigate how we can handle documentation for sandboxed applications.

One option is to just put all the docs on the web and launch a web browser, but that feels a bit like cheating. Another option is to export all the documentation files from the sandbox and launch the host help browser on it. But this would require us to recursively export an entire directory full of possibly malicious content – so far, we’ve been careful to only export individual, known files like the desktop file or the app icon.

Therefore, we decided that we should instead ship a help browser in the GNOME runtime and launch it inside the sandbox. This turns out to work reasonably well, and will be used by more GNOME apps in the near future.

Interns

Apart from this ongoing work on documentation, a number of bug fixes and small improvements have found their way into the 1.0 release. For example, you can now find recipes by searching for the chef by name. And we ask for confirmation if you are about to close the window with unsaved changes.

Some of these changes were contributed by prospective Outreachy interns.

Roadmap

I have mentioned it before; you can find some information about our future plans for recipes here:

https://wiki.gnome.org/Apps/Recipes/Development

Your help is more than welcome!

A simple house-moving tip: use tape to mark empty cupboards

Posted by Peter Hutterer on March 17, 2017 02:00 AM

When you've emptied a cupboard, put masking tape across it, ideally in a colour that's highly visible. This way you immediately know which ones are finished and which ones still need attention. You won't keep opening the cupboard a million times to check, and after the move it takes merely seconds to undo.

The Internet of Microphones

Posted by Matthew Garrett on March 08, 2017 01:30 AM
So the CIA has tools to snoop on you via your TV and your Echo is testifying in a murder case and yet people are still buying connected devices with microphones in and why are they doing that the world is on fire surely this is terrible?

You're right that the world is terrible, but this isn't really a contributing factor to it. There are a few reasons why. The first is that there's really not any indication that the CIA and MI5 ever turned this into an actual deployable exploit. The development reports[1] describe a project that still didn't know what would happen to their exploit over firmware updates and a "fake off" mode that left a lit LED which wouldn't be there if the TV were actually off, so there's a potential for failed updates and people noticing that there's something wrong. It's certainly possible that development continued and it was turned into a polished and usable exploit, but it really just comes across as a bunch of nerds wanting to show off a neat demo.

But let's say it did get to the stage of being deployable - there's still not a great deal to worry about. No remote infection mechanism is described, so they'd need to do it locally. If someone is in a position to reflash your TV without you noticing, they're also in a position to, uh, just leave an internet connected microphone of their own. So how would they infect you remotely? TVs don't actually consume a huge amount of untrusted content from arbitrary sources[2], so that's much harder than it sounds and probably not worth it because:

YOU ARE CARRYING AN INTERNET CONNECTED MICROPHONE THAT CONSUMES VAST QUANTITIES OF UNTRUSTED CONTENT FROM ARBITRARY SOURCES

Seriously your phone is like eleven billion times easier to infect than your TV is and you carry it everywhere. If the CIA want to spy on you, they'll do it via your phone. If you're paranoid enough to take the battery out of your phone before certain conversations, don't have those conversations in front of a TV with a microphone in it. But, uh, it's actually worse than that.

These days audio hardware usually consists of a very generic codec containing a bunch of digital→analogue converters, some analogue→digital converters and a bunch of io pins that can basically be wired up in arbitrary ways. Hardcoding the roles of these pins makes board layout more annoying and some people want more inputs than outputs and some people vice versa, so it's not uncommon for it to be possible to reconfigure an input as an output or vice versa. From software.

Anyone who's ever plugged a microphone into a speaker jack probably knows where I'm going with this. An attacker can "turn off" your TV, reconfigure the internal speaker output as an input and listen to you on your "microphoneless" TV. Have a nice day, and stop telling people that putting glue in their laptop microphone is any use unless you're telling them to disconnect the internal speakers as well.

If you're in a situation where you have to worry about an intelligence agency monitoring you, your TV is the least of your concerns - any device with speakers is just as bad. So what about Alexa? The summary here is, again, it's probably easier and more practical to just break your phone - it's probably near you whenever you're using an Echo anyway, and they also get to record you the rest of the time. The Echo platform is very restricted in terms of where it gets data[3], so it'd be incredibly hard to compromise without Amazon's cooperation. Amazon's not going to give their cooperation unless someone turns up with a warrant, and then we're back to you already being screwed enough that you should have got rid of all your electronics way earlier in this process. There are reasons to be worried about always listening devices, but intelligence agencies monitoring you shouldn't generally be one of them.

tl;dr: The CIA probably isn't listening to you through your TV, and if they are then you're almost certainly going to have a bad time anyway.

[1] Which I have obviously not read
[2] I look forward to the first person demonstrating code execution through malformed MPEG over terrestrial broadcast TV
[3] You'd need a vulnerability in its compressed audio codecs, and you'd need to convince the target to install a skill that played content from your servers

comment count unavailable comments

A journey, with recipes

Posted by Matthias Clasen on March 06, 2017 04:13 AM

This month, we will release GNOME 3.24, and alongside, GNOME recipes will have its 1.0 release.

It has been quite a journey from just an idea at GUADEC last August to the application that we have now.

To the finish line

Since my last update, we have focused on completing our 1.0 goals. The last thing that we needed was to get enough contributed recipes to replace all the test data with actual content. And we've made it. Thanks to the cooks in the GNOME community, we have plenty of great recipes now:

A few small features have still made their way in, despite the focus on polish. One thing I’ve played with is making appstream data useful inside the application itself, for showing ‘What’s New’ style information:

The timer support in cooking mode has seen quite a bit of improvement and polish.

I’ve also picked up my earlier experiment again, and built Recipes on OS X. This time, I got far enough to produce a dmg image:

A useful by-product of this effort was a number of bug fixes for the GTK+ Quartz backend, such as working fullscreen and window functions.

What next?

1.0 will not be the end of the road for Recipes. There’s a number of exciting features on the post-1.0 roadmap.

For the next leg of the journey, we will welcome some Outreachy interns who will help us to add useful functionality, from unit conversion to shopping list export.

Better Resolution of Kerberos Credential Caches

Posted by Nathaniel McCallum on March 03, 2017 03:45 PM

DevConf is a great time of year. Lots of developers gather in one place and we get to discuss integration issues between projects that may not have a direct relationship. One of those issues this year was the desktop integration of Kerberos authentication.

GNOME Online Accounts has supported the creation of Kerberos accounts since nearly the beginning, thanks to the effort of Debarshi Ray. However, we were made aware of an issue this year that had not come up before. Namely, in a variety of cases GSSAPI would not be able to complete authentication for non-default TGTs.

Roughly, this meant that if you logged into Kerberos using two different accounts GSSAPI would only be able to complete authentication using your default credential cache - meaning the last account you logged into. Users could work around this problem by using kswitch to change their default credential cache. However, since authentication transparently failed, there was no indication to the user that this could work. So the user experience was particularly poor.

This difficulty became even more noticeable after the Fedora deployment of Kerberos by Patrick Uiterwijk. Many Fedora developers also use Kerberos for other realms, so the pain was spreading.

I am happy to say that we have discovered a cure for this malady!

Matt Rogers worked with upstream to merge this patch which causes GSSAPI to do the RightThing™. Robbie Harwood landed the patch in Fedora (rawhide, 26, 25). So we believe this issue to be resolved.

If you’re a Fedora 25 user, please help us test the fix! There is a pending update for krb5 on Bodhi. The easy way to reproduce this issue is as follows:

  1. Log in with the Kerberos account you want to use for the test.
  2. Log in with another Kerberos account.
  3. Confirm that the second account is default with klist.
  4. Attempt to log in to a service using the first credential and GSSAPI. The easiest way to do this is probably to go to a Kerberos-protected website using your browser (assuming it is properly configured for GSSAPI).
  5. Before the patch, automatic login should fail. Afterwards, it shouldn’t.

Enjoy!

radv + steamvr

Posted by Dave Airlie on February 27, 2017 07:42 PM
If anyone wants to run SteamVR on top of radv, the code is all public now.

https://github.com/airlied/mesa/tree/radv-wip-steamvr

The external memory code will be going upstream to master once I clean it up a bit, the semaphore hack is waiting on kernel changes, and the NIR shader hack is waiting on a new SteamVR build that removes the bad use of SPIR-V.

I've run Serious SAM TFE in VR mode on this branch.

The Fantasyland Code of Professionalism is an abuser's fantasy

Posted by Matthew Garrett on February 27, 2017 01:40 AM
The Fantasyland Institute of Learning is the organisation behind Lambdaconf, a functional programming conference perhaps best known for standing behind a racist they had invited as a speaker. The fallout of that has resulted in them trying to band together events in order to reduce disruption caused by sponsors or speakers declining to be associated with conferences that think inviting racists is more important than the comfort of non-racists, which is weird in all sorts of ways but not what I'm talking about here because they've also written a "Code of Professionalism" which is like a Code of Conduct except it protects abusers rather than minorities and no really it is genuinely as bad as it sounds.

The first thing you need to know is that the document uses its own jargon. Important here are the concepts of active and inactive participation - active participation is anything that you do within the community covered by a specific instance of the Code, inactive participation is anything that happens anywhere ever (ie, active participation is a subset of inactive participation). The restrictions based around active participation are broadly those that you'd expect in a very weak code of conduct - it's basically "Don't be mean", but with some quirks. The most significant is that there's a "Don't moralise" provision, which as written means saying "I think people who support slavery are bad" in a community setting is a violation of the code, but the description of discrimination means saying "I volunteer to mentor anybody from a minority background" could also result in any community member not from a minority background complaining that you've discriminated against them. It's just not very good.

Inactive participation is where things go badly wrong. If you engage in community or professional sabotage, or if you shame a member based on their behaviour inside the community, that's a violation. Community sabotage isn't defined and so basically allows a community to throw out whoever they want to. Professional sabotage means doing anything that can hurt a member's professional career. Shaming is saying anything negative about a member to a non-member if that information was obtained from within the community.

So, what does that mean? Here are some things that you are forbidden from doing:
  • If a member says something racist at a conference, you are not permitted to tell anyone who is not a community member that this happened (shaming)
  • If a member tries to assault you, you are not allowed to tell the police (shaming)
  • If a member gives a horribly racist speech at another conference, you are not allowed to suggest that they shouldn't be allowed to speak at your event (professional sabotage)
  • If a member of your community reports a violation and no action is taken, you are not allowed to warn other people outside the community that this is considered acceptable behaviour (community sabotage)

Now, clearly, some of these are unintentional - I don't think the authors of this policy would want to defend the idea that you can't report something to the police, and I'm sure they'd be willing to modify the document to permit this. But it's indicative of the mindset behind it. This policy has been written to protect people who are accused of doing something bad, not to protect people who have something bad done to them.

There are other examples of this. For instance, violations are not publicised unless the verdict is that they deserve banishment. If a member harasses another member but is merely given a warning, the victim is still not permitted to tell anyone else that this happened. The perpetrator is then free to repeat their behaviour in other communities, and the victim has to choose between either staying silent or warning them and risk being banished from the community for shaming.

If you're an abuser then this is perfect. You're in a position where your victims have to choose between their career (which will be harmed if they're unable to function in the community) and preventing the same thing from happening to others. Many will choose the former, which gives you far more freedom to continue abusing others. Which means that communities adopting the Fantasyland code will be more attractive to abusers, and become disproportionately populated by them.

I don't believe this is the intent, but it's an inevitable consequence of the priorities inherent in this code. No matter how many corner cases are cleaned up, if a code prevents you from saying bad things about people or communities it prevents people from being able to make informed choices about whether that community and its members are people they wish to associate with. When there are greater consequences to saying someone's racist than them being racist, you're fucking up badly.

comment count unavailable comments

Bluetooth in Fedora

Posted by Nathaniel McCallum on February 16, 2017 08:53 PM

So… Bluetooth. It’s everywhere now. Well, everywhere except Fedora. Fedora does, of course, support Bluetooth. But even the most common workflows are somewhat spotty. We should improve this.

To this end, I’ve enlisted the help of Don Zickus, kernel developer extraordinaire, and Adam Williamson, the inimitable Fedora QA guru. The plan is to create a set of user tests for the most common Bluetooth tasks. This plan has several goals.

First, we’d like to know when stuff is broken. For example, the recent breakage in linux-firmware. Catching this stuff early is a huge plus.

Second, we’d like to get high quality bug reports. When things do break, vague bug reports often cause things to sit in limbo for a while. Making sure we have all the debugging information up front can make reports actionable.

Third, we’d (eventually) like to block a new Fedora release if major functionality is broken. We’re obviously not ready for this step yet. But once the majority of workflows work on the hardware we care about, we need to ensure that we don’t ship a Fedora release with broken code.

To this end we are targeting three workflows which cover the most common cases:

  • Keyboards
  • Headsets
  • Mice

For more information, or to help develop the user testing, see the Fedora QA bug. Here’s to a better future!

Recipes by mail

Posted by Matthias Clasen on February 12, 2017 08:58 AM

Since I last wrote about GNOME recipes, we’ve mainly focused on completing our feature set for 3.24.

Today’s 0.12.0 release brings us very close to covering all the goals we set ourselves when we started working on recipes: we have a fullscreen cooking mode

and recipes and shopping lists can be shared by email

Since the Share button has replaced the Export button, contributing your recipes is now easier too: Just choose the “Contribute” option and send the recipe to the new recipes-list@gnome.org mailing list.

While working on this, it suddenly dawned on me why I may have seen some recipe contributions in bugzilla that were missing attachments: bugzilla has a limit on the size of attachments it allows, and recipes with photos may hit this limit.

So, if you’ve tried to contribute a recipe via bugzilla and ran into this problem, please send your recipe to recipes-list@gnome.org instead.

libinput knows about internal and external touchpads

Posted by Peter Hutterer on February 10, 2017 12:27 AM

libinput has a couple of features that 'automagically' work on touchpads, such as disable-while-typing, the lid-switch-triggered disabling of touchpads, and disabling the touchpad when an external mouse is plugged in [1]. But not all of these features make sense on all touchpads. For example, an Apple Magic Trackpad doesn't need disable-while-typing because unless you have a creative arrangement of input devices [2], the touchpad won't be where your palm is likely to hit it. Likewise, a Logitech T650 connected over a unifying receiver shouldn't get disabled when the laptop lid closes.

For this to work, libinput has some code to figure out whether a touchpad is internal or external. Initially we had some code to detect this but eventually moved this to the ID_INPUT_TOUCHPAD_INTEGRATION property now set by udev's hwdb (systemd 231 and later). Having it in the hwdb makes it quite trivial to override locally where the current rules are insufficient (and until the hwdb is fixed, thanks for filing a bug). We still have the fallback code though in case the tag is missing. On a sufficiently modern distribution, udevadm info /sys/class/input/event4 for your touchpad device node should show something like ID_INPUT_TOUCHPAD_INTEGRATION=internal.

So for any feature that libinput adds for touchpads, we only enable it where it makes sense. That's why your external touchpad doesn't trigger disable-while-typing or the lid switch.

[1] ok, I admit, this is something we should've left to the client, but now we have the feature.
[2] yes, I'm sure there's at least one person out there that uses the touchpad upside down in front of the keyboard and is now angry that libinput doesn't allow arbitrary rotation of the device combined with configurable dwt. I think of you every night I cry myself to sleep.

New fwupd release, and why you should buy a Dell

Posted by Richard Hughes on February 08, 2017 01:14 PM

This morning I released the first new release of fwupd on the 0.8.x branch. This has a number of interesting fixes, but more importantly adds the following new features:

  • Adds support for Intel Thunderbolt devices
  • Adds support for some Logitech Unifying devices
  • Adds support for Synaptics MST cascaded hubs
  • Adds support for the Altus-Metrum ChaosKey device
  • Adds Dell-specific functionality to allow other plugins to turn on TBT/GPIO

Mario Limonciello from Dell has worked really hard on this release, and I can say with conviction: If you want to support a hardware company that cares about Linux — buy a Dell. They seem to be driving the importance of Linux support into their partners and suppliers. I wish other vendors would do the same.

Open Desktop Review System : One Year Review

Posted by Richard Hughes on February 06, 2017 10:01 AM

This weekend we had the 2,000th review submitted to the ODRS review system. Every month we’re getting an additional ~300 reviews and about 500,000 requests for reviews from the system. The reviews that have been contributed are in 94 languages, and from 1387 different users.

Most reviews have come from Fedora (which installs GNOME Software as part of the default workstation) but other distros like Debian and Arch are catching up all the time. I’d still welcome KDE software center clients like Discover and Apper using the ODRS although we do have quite a lot of KDE software reviews submitted using GNOME Software.

Out of ~2000 reviews just 23 have been marked as inappropriate, of which I agreed with 7 (inappropriate is supposed to be swearing or abuse, not just being unhelpful) and those 7 were deleted. The mean time between a review being posted that is actually abuse and it being marked as such (or me noticing it in the admin panel) is just over 8 hours, which is certainly good enough. In the last few months 5523 people have clicked the “upvote” button on a review, and 1474 people clicked the “downvote” button on a review. Although that’s less voting than I hoped for, it’s certainly enough to give good quality sorting of reviews to end users in most locales. If you have a couple of hours on your hands, gnome-software --mode=moderate is a great way to upvote/downvote a lot of reviews in your locale.

So, onward to 3,000 reviews. Many thanks to those who submitted reviews already — you’re helping new users who don’t know what software they should install.

libinput and lid switch events

Posted by Peter Hutterer on February 01, 2017 04:51 AM

I merged a patchset from James Ye today to add support for switch events to libinput, specifically: lid switch events. This feature is scheduled for libinput 1.7.

First, what are switches and how are they different from keys? A key's state is transient with a neutral state of "key is up". The state itself is expected to change frequently. Switches don't always have a defined logical neutral state and the state changes only infrequently. This requires different handling in applications and thus libinput exposes a new interface (and capability) for switches.

The interface itself is trivial. A switch event has two properties, the switch type (e.g. "lid") and the switch state (on/off). See the libinput-debug-events source code for simple code that prints the state and type.
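
For illustration, handling a lid switch event from an application looks roughly like this (a sketch against the libinput 1.7 API; handle_switch_event is a made-up helper and error handling is omitted):

#include <stdio.h>
#include <libinput.h>

static void
handle_switch_event(struct libinput_event *ev)
{
        /* only valid for LIBINPUT_EVENT_SWITCH_TOGGLE events */
        struct libinput_event_switch *sw =
                libinput_event_get_switch_event(ev);

        if (libinput_event_switch_get_switch(sw) != LIBINPUT_SWITCH_LID)
                return;

        printf("lid switch is %s\n",
               libinput_event_switch_get_switch_state(sw) ==
                       LIBINPUT_SWITCH_STATE_ON ? "on (closed)" : "off (open)");
}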

In libinput, we generally try to restrict ourselves to the cases we know how to handle. So in the first iteration, we'll support a single switch event: the lid switch. This is the toggle that changes when you close the lid on your laptop.

But libinput uses this internally too: touchpads are disabled automatically whenever the lid is closed. Indeed, this functionality was the main motivation for this patchset. On a number of devices, we get ghost touches when the lid is closed. Even though the touchpad is unreachable by the user, interference with the screen still causes events, moving the pointer in unexpected ways and generally being a nuisance. Some trackpoints suffer from the same issue. But now that libinput knows about the lid switch it can transparently disable the touchpad whenever the lid is closed and thus discard the events.

Lid switches on some devices are unreliable. There are some devices where the lid is permanently closed and other devices where the lid can be closed, but we'll never see the open event. So we change behaviour based on a few factors. After all, no-one likes a dysfunctional touchpad because the lid switch is broken (if you do, seek help). For one, whenever we detect keyboard events while in logically closed state we'll assume that the lid is open after all and adjust state accordingly. Unless the lid switch is reliable, we don't sync the initial state. That's annoying for those who start libinput in closed mode, but it filters out all devices that set the lid switch to "on" and then never change again. On the Surface 3 devices we go even further: we know those devices need a bit of hand-holding. So whenever we detect activity on the keyboard, we also write the EV_SW/SW_LID state to the device node, thus updating the kernel to be correct again (and thus help everyone else who may be listening).

The exact behaviours will likely change slightly over time as we have to deal with corner-cases one-by-one. But meanwhile, it's even easier for compositors to listen to switch events and users don't have to deal with ghost touches anymore. Many thanks to James Ye for implementing this.

Presentation woes

Posted by Matthias Clasen on January 30, 2017 04:18 PM

My flatpak presentation at devconf.cz ran into some technical difficulties when the beamer system failed halfway through. I couldn’t show the second half of my slides, and had to improvise a bit. If you were in the room, I hope it wasn’t too incomprehensible.

You can see what you missed here:

And here is a quick recording of the demo I would have given at the end of my talk:

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1754-1" preload="metadata" width="474"><source src="https://mclasen.fedorapeople.org/devconf-flatpak.webm?_=1" type="video/webm">https://mclasen.fedorapeople.org/devconf-flatpak.webm</video>

How libinput opens device nodes

Posted by Peter Hutterer on January 30, 2017 12:00 AM

In order to read events and modify devices, libinput needs a file descriptor to the /dev/input/event node. But those files are only accessible by the root user. If libinput were to open these directly, we would force any process that uses libinput to have sufficient privileges to open those files. But these days everyone tries to reduce a process's privileges wherever possible, so libinput simply delegates opening and closing the file descriptors to the caller.

The functions to create a libinput context take a parameter of type struct libinput_interface. This is a non-opaque struct with two function pointers: "open_restricted" and "close_restricted". Whenever libinput needs to open or close a file, it calls the respective function. For open_restricted() libinput expects the caller to return an fd with the given flags.
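
A minimal sketch of such an interface, close to what the libinput documentation suggests (this is the simple direct open()/close() variant described in the next paragraph, so the process needs permission to open the device nodes):

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <libinput.h>

static int
open_restricted(const char *path, int flags, void *user_data)
{
        int fd = open(path, flags);

        /* libinput expects the fd, or a negative errno on failure */
        return fd < 0 ? -errno : fd;
}

static void
close_restricted(int fd, void *user_data)
{
        close(fd);
}

static const struct libinput_interface interface = {
        .open_restricted = open_restricted,
        .close_restricted = close_restricted,
};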

In the simplest case, a caller can merely call open() and close(). This is what the debugging tools do (and the test suite). But obviously this means you have to run those as root. The main wayland compositors (weston, mutter, kwin, ...) instead forward the request to systemd-logind. That then opens the event node and returns the fd which is passed to libinput. And voila, the compositors don't need to run as root, libinput doesn't have to know how the fd is opened and everybody wins. Plus, logind will mute the fd on VT-switch, so we can't leak keyboard events.

In the X.org case it's a combination of the two. When the server runs with systemd-logind enabled, it will open the fd before the driver initialises the device. During the init stage, libinput asks the xf86-input-libinput driver to open the device node. The driver forwards the request to the server which simply returns the already-open fd. When the server runs without systemd-logind, the server opens the file normally with a standard open() call.

So in summary: you can easily run libinput without systemd-logind but you'll have to figure out how to get the required privileges to open device nodes. For anything more than a test or debugging program, I recommend using systemd-logind.

libinput and wheel tilt events

Posted by Peter Hutterer on January 27, 2017 03:27 AM

We're in the middle of the 1.7 development cycle and one of the features merged already is support for "wheel tilt", i.e. support for devices that don't have a separate horizontal wheel but instead rely on a tilt motion for horizontal events. Now, the way this is handled in the kernel is that the events are sent via REL_WHEEL (or REL_DIAL) so we don't actually need special code in libinput to handle tilt. But libinput tries to make sense of input devices so the upper layers have a reliable base to build on - and that's why we need tilt-wheels to be handled.

For 'pointer axis' events (i.e. scroll events) libinput provides scroll sources. These specify how the scroll event was generated, allowing a caller to handle things accordingly. A finger-based scroll for example can trigger kinetic scrolling while a mouse wheel would not usually do so. The value for a pointer axis is also dependent on the scroll source - for continuous/finger based scrolling the value is in pixels. For a mouse wheel, the value is in degrees. This obviously doesn't work for a tilt event because degrees don't make sense in this context. So the new axis source is just that, an indicator that the event was caused by a wheel tilt rather than a rotation. Its value matches the default wheel rotation (i.e. 15 degrees) just to make using it easier.
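
A sketch of how a caller can branch on the axis source when processing a scroll event (names as in the libinput API; handle_scroll is a made-up helper):

#include <libinput.h>

static void
handle_scroll(struct libinput_event_pointer *pev)
{
        switch (libinput_event_pointer_get_axis_source(pev)) {
        case LIBINPUT_POINTER_AXIS_SOURCE_WHEEL:
        case LIBINPUT_POINTER_AXIS_SOURCE_WHEEL_TILT:
                /* value is in degrees; tilt reports the default
                 * 15 degrees per event, no kinetic scrolling */
                break;
        case LIBINPUT_POINTER_AXIS_SOURCE_FINGER:
                /* value is in pixels, may trigger kinetic scrolling */
                break;
        default:
                break;
        }
}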

Of course, a device won't tell us whether it provides a proper wheel or just tilt. So we need a hwdb property and I've added that to systemd's repo. To make this work, set the MOUSE_WHEEL_TILT_HORIZONTAL and/or MOUSE_WHEEL_TILT_VERTICAL property on your hardware and you're off. Yay.

Patches for the wayland protocol have been merged as well, so this is/will be available to wayland clients.

Android permissions and hypocrisy

Posted by Matthew Garrett on January 23, 2017 07:58 AM
I wrote a piece a few days ago about how the Meitu app asked for a bunch of permissions in ways that might concern people, but which were not actually any worse than many other apps. The fact that Android makes it so easy for apps to obtain data that's personally identifiable is of concern, but in the absence of another stable device identifier this is the sort of thing that capitalism is inherently going to end up making use of. Fundamentally, this is Google's problem to fix.

Around the same time, Kaspersky, the Russian anti-virus company, wrote a blog post that warned people about this specific app. It was framed somewhat misleadingly - "reading, deleting and modifying the data in your phone's memory" would probably be interpreted by most people as something other than "the ability to modify data on your phone's external storage", although it ends with some reasonable advice that users should ask why an app requires some permissions.

So, to that end, here are the permissions that Kaspersky request on Android:
  • android.permission.READ_CONTACTS
  • android.permission.WRITE_CONTACTS
  • android.permission.READ_SMS
  • android.permission.WRITE_SMS
  • android.permission.READ_PHONE_STATE
  • android.permission.CALL_PHONE
  • android.permission.SEND_SMS
  • android.permission.RECEIVE_SMS
  • android.permission.RECEIVE_BOOT_COMPLETED
  • android.permission.WAKE_LOCK
  • android.permission.WRITE_EXTERNAL_STORAGE
  • android.permission.SUBSCRIBED_FEEDS_READ
  • android.permission.READ_SYNC_SETTINGS
  • android.permission.WRITE_SYNC_SETTINGS
  • android.permission.WRITE_SETTINGS
  • android.permission.INTERNET
  • android.permission.ACCESS_COARSE_LOCATION
  • android.permission.ACCESS_FINE_LOCATION
  • android.permission.READ_CALL_LOG
  • android.permission.WRITE_CALL_LOG
  • android.permission.RECORD_AUDIO
  • android.permission.SET_PREFERRED_APPLICATIONS
  • android.permission.WRITE_APN_SETTINGS
  • android.permission.READ_CALENDAR
  • android.permission.WRITE_CALENDAR
  • android.permission.KILL_BACKGROUND_PROCESSES
  • android.permission.RESTART_PACKAGES
  • android.permission.MANAGE_ACCOUNTS
  • android.permission.GET_ACCOUNTS
  • android.permission.MODIFY_PHONE_STATE
  • android.permission.CHANGE_NETWORK_STATE
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_LOCATION_EXTRA_COMMANDS
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.CHANGE_WIFI_STATE
  • android.permission.VIBRATE
  • android.permission.READ_LOGS
  • android.permission.GET_TASKS
  • android.permission.EXPAND_STATUS_BAR
  • com.android.browser.permission.READ_HISTORY_BOOKMARKS
  • com.android.browser.permission.WRITE_HISTORY_BOOKMARKS
  • android.permission.CAMERA
  • com.android.vending.BILLING
  • android.permission.SYSTEM_ALERT_WINDOW
  • android.permission.BATTERY_STATS
  • android.permission.MODIFY_AUDIO_SETTINGS
  • com.kms.free.permission.C2D_MESSAGE
  • com.google.android.c2dm.permission.RECEIVE

Every single permission that Kaspersky mention Meitu having? They require it as well. And a lot more. Why does Kaspersky want the ability to record audio? Why does it want to be able to send SMSes? Why does it want to read my contacts? Why does it need my fine-grained location? Why is it able to modify my settings?

There's no reason to assume that they're being malicious here. The reason these permissions exist at all is that there are legitimate reasons to use them, and Kaspersky may well have good reason to request them. But they don't explain that, and they do literally everything that their blog post criticises (including explicitly requesting the phone's IMEI). Why should we trust a Russian company more than a Chinese one?

The moral here isn't that Kaspersky are evil or that Meitu are virtuous. It's that talking about application permissions is difficult and we don't have the language to explain to users what our apps are doing and why they're doing it, and Google are still falling far short of where they should be in terms of making this transparent to users. But the other moral is that you shouldn't complain about the permissions an app requires when you're asking for even more of them because it just makes you look stupid and bad at your job.

comment count unavailable comments

Debugging a Flatpak application

Posted by Matthias Clasen on January 20, 2017 04:45 PM

Since I’ve been asking people to try the recipes app with Flatpak, I can’t complain too much if I get bug reports back. But how does one create a useful bug report when something goes wrong in a Flatpak sandbox? Some of the stacktraces I’ve seen have not been very useful, since they are lacking symbols.

This post is a quick attempt to spread some basics about Flatpak debugging.

Normally, you run your Flatpak app like this:

flatpak run org.gnome.Recipes

Well, that’s not quite true; the “normal” way to launch the Flatpak is just the same as launching a non-Flatpak app: click on the icon, or hit the Super key, type recipes, hit Enter. But let’s assume you’re launching flatpak from the commandline.

What happens behind the scenes here is that flatpak finds the metadata for org.gnome.Recipes, determines which runtime it needs, sets up the sandbox by mounting the app in /app and the runtime in /usr, does some more sandboxy stuff, and eventually launches the app.

First problem for bug reporting: we want to run the app under gdb to get a stacktrace when it crashes.  Here is how you do that:

flatpak run --command=sh org.gnome.Recipes

Running this command, you’ll end up with a shell prompt “inside” the recipes sandbox. This is great, because we can now launch our app under gdb (note that the application gets installed in the /app prefix):

$ gdb /app/bin/recipes

Except… this fails because there is no gdb. Remember that we are inside the sandbox, so we can only run what is either shipped with the app in /app/bin or with the runtime in /usr/bin. And gdb is in neither.

Thankfully, for each runtime, there is a corresponding sdk, which is just like the runtime, except it includes the stuff you need to develop and debug: headers, compilers, debuggers and other useful tools. And flatpak has a handy commandline option to use the sdk instead of the regular runtime:

flatpak run --devel --command=sh org.gnome.Recipes

The --devel option tells flatpak to use the sdk instead of the runtime and do some other things that make debugging in the sandbox work.

Now for the last trick: I was complaining about stacktraces without symbols at the beginning. In rpm-based distributions, the debug symbols are split off into debuginfo packages. Flatpak does something similar and splits all the debug information of runtimes and apps into separate “runtime extensions”, which by convention have .Debug appended to their name. So the debug info for org.gnome.Recipes is in the org.gnome.Recipes.Debug extension.

When you use the --devel option, flatpak automatically includes the Debug extensions for the application and runtime, if they are available. So, for the most useful stacktraces, make sure that you have the Debug extensions for the apps and runtimes in question installed.

Hope this helps!

Most of this information was taken from the Flatpak wiki.

Android apps, IMEIs and privacy

Posted by Matthew Garrett on January 19, 2017 11:36 PM
There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask for you to grant the permission again and refuse to do so if you don't.

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, but Meitu isn't alone here - there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do anything to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.

comment count unavailable comments

Recipes for you and me

Posted by Matthias Clasen on January 18, 2017 08:31 PM

Since I’ve last written about recipes, we’ve started to figure out what we can achieve in time for GNOME 3.24, with an eye towards delivering a useful application. The result is this plan, which should be doable.

But: your help is needed. We need more recipe contributions from the GNOME community to have a well-populated initial experience. Everybody who contributes a recipe before 3.24 will get a little thank-you from us, so don’t delay…

The 0.8.0 release that I’ve just created already contains the first steps of this plan. One thing we decided is that we don’t have the time and resources to make the ingredients view useful by March, so the Ingredients tab is gone for now.

At the same time, there’s a new feature here, and that is the blue tile leading to the shopping list view:

The design for this page is still a bit up in the air, so you should expect this to change in the next releases. I decided to merge it already anyway, since I am impatient, and this view already provides useful functionality. You can print the shopping list:

Beyond this, I’ve spent some time on polishing and fixing bugs. One thing that I’ve discovered to my embarrassment earlier this week is that exporting recipes from the flatpak did not actually work. I had only ever tested this with an un-sandboxed local build.

Sorry to everyone who tried to export their recipe and was left wondering why it didn’t work!

We’ve now fixed all the bugs that were involved here, both in recipes and in the file chooser portal and in the portal infrastructure itself, and exporting recipes works fine with the current flatpak, which, as always, you can install from here:

https://alexlarsson.github.io/test-releases/gnome-recipes.flatpakref

One related issue that became apparent during this bug hunt is that things work less than perfectly if the portals are not present on the host system. Until that becomes less likely, I’ve added a bit of code to make the failure less mysterious, and give you some idea how to fix it:

I think recipes is proving its value as a test bed and early adopter for flatpak and portals. At this point, it is using the file chooser portal, the account information portal, the print portal, the notification portal, the session inhibit portal, and it would also use the sharing portal, if we had that already.

I shouldn’t close this post without mentioning that you will have a chance to hear a bit from Elvin about the genesis of this application in the Fosdem design devroom. See you there!

The definitive guide to synclient

Posted by Peter Hutterer on January 03, 2017 05:45 AM

This post describes the synclient tool, part of the xf86-input-synaptics package. It does not describe the various options; that's what the synclient(1) and synaptics(4) man pages are for. This post describes what synclient is, where it came from and how it works on a high level. Think of it as an anti-bus-factor post.

Maintenance status

The most important thing first: synclient is part of the synaptics X.Org driver which is in maintenance mode, and superseded by libinput and the xf86-input-libinput driver. In general, you should not be using synaptics anymore anyway, switch to libinput instead (and report bugs where the behaviour is not correct). It is unlikely that significant additional features will be added to synclient or synaptics and bugfixes are rare too.

The interface

synclient's interface is extremely simple: it's a list of key/value pairs that would all be set at the same time. For example, the following command sets two options, TapButton1 and TapButton2:


synclient TapButton1=1 TapButton2=2

The -l switch lists the current values in one big list:

$ synclient -l
Parameter settings:
LeftEdge = 1310
RightEdge = 4826
TopEdge = 2220
BottomEdge = 4636
FingerLow = 25
FingerHigh = 30
MaxTapTime = 180
...

The commandline interface is effectively a mapping of the various xorg.conf options. As said above, look at the synaptics(4) man page for details on each option.

History

A decade ago, the X server had no capabilities to change driver settings at runtime. Changing a device's configuration required rewriting an xorg.conf file and restarting the server. To avoid this, the synaptics X.Org touchpad driver exposed a shared memory (SHM) segment. Anyone with knowledge of the memory layout (an internal struct) and permission to write to that segment could change driver options at runtime. This is how synclient came to be, it was the tool that knew that memory layout. A synclient command would thus set the correct bits in the SHM segment and the driver would use the newly updated options. For obvious reasons, synclient and synaptics had to be the same version to work.

An aside on Atoms: Atoms are 32-bit unsigned integers created for each property name at runtime. They represent a unique string (the property name) and can be created by applications too. Property-name-to-Atom mappings are global. Once any driver initialises a property by its name (e.g. "Synaptics Tap Actions"), that property and the corresponding Atom will exist globally until the server resets. Atoms unknown to a driver are simply ignored.

8 or so years ago, the X server got support for input device properties, a generic key/value store attached to each input device. The keys are the properties, identified by an "Atom" (see the aside above). The values are driver-specific. All drivers make use of this now; being able to change a property at runtime is the result of changing a property that the driver knows of.

synclient was converted to use properties instead of the SHM segment and eventually the SHM support was removed from both synclient and the driver itself. The backend to synclient is thus identical to the one used by the xinput tool or tools used by other drivers (e.g. the xsetwacom tool). synclient's killer feature was that it was the only tool that knew how to configure the driver; these days it's merely a commandline-argument-to-property mapping tool. xinput, GNOME, KDE, they all do the same thing in the backend.

How synclient works

The driver has properties of a specific name, format and value range. For example, the "Synaptics Tap Action" property contains 7 8-bit values, each representing a button mapping for a specific tap action. If you change the fifth value of that property, you change the button mapping for a single-finger tap. Another property "Synaptics Off" is a single 8-bit value with an allowed range of 0, 1 or 2. The properties are described in the synaptics(4) man page. There is no functional difference between this synclient command:


synclient SynapticsOff=1

and this xinput command:

xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Off" 1

Both set the same property with the same calls. synclient uses XI 1.x's XChangeDeviceProperty() and xinput uses XI 2.x's XIChangeProperty() if available, but that doesn't really matter. They both fetch the property, overwrite the respective value and send it back to the server.
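
For illustration, that fetch/overwrite/send-back cycle with the XI 2.x calls could look roughly like this (a sketch with error handling omitted; set_synaptics_off is a made-up helper):

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/XInput2.h>

static void
set_synaptics_off(Display *dpy, int deviceid)
{
        Atom prop = XInternAtom(dpy, "Synaptics Off", True);
        Atom type;
        int format;
        unsigned long nitems, bytes_after;
        unsigned char *data;

        /* fetch the current property value... */
        XIGetProperty(dpy, deviceid, prop, 0, 1, False, AnyPropertyType,
                      &type, &format, &nitems, &bytes_after, &data);

        /* ...overwrite it... */
        data[0] = 1;

        /* ...and send it back to the server */
        XIChangeProperty(dpy, deviceid, prop, type, format,
                         XIPropModeReplace, data, (int) nitems);
        XFree(data);
        XFlush(dpy);
}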

Pitfalls and quirks

synclient is a simple tool. If multiple touchpads are present it will simply pick the first one. This is a common issue for users with an i2c touchpad and will be even more common once the RMI4/SMBus support is in a released kernel. In both cases, the kernel creates the i2c/SMBus device and an additional PS/2 touchpad device that never sends events. So if synclient picks that device, all the settings are changed on a device that doesn't actually send events. This depends on the order the devices were added to the X server and can vary between reboots. You can work around that by disabling or ignoring the PS/2 device.

synclient is a one-shot tool; it does not monitor devices. If a device is added at runtime, the user must run synclient again to change its settings. If a device is disabled and re-enabled (VT switch, suspend/resume, ...), the user must run synclient again. This is a major reason we recommend against using synclient: the desktop environment should take care of this. synclient also conflicts with the desktop environment in that it isn't aware when something else changes a setting. If synclient runs before the DE's init scripts (e.g. through xinitrc), its settings may be overwritten by the DE; if it runs later, it overwrites the DE's settings.

synclient exclusively supports synaptics driver properties. It cannot change any other driver's properties, and it cannot change the properties created by the X server on each device. That's another reason we recommend against it: you have to mix multiple tools to configure all devices instead of using e.g. the xinput tool for all property changes. Or, as above, letting the desktop environment take care of it.

The interface of synclient is, IMO, not significantly more obvious than setting the input properties directly. One has to look up what TapButton1 does anyway, so looking up how to set the property with the more generic xinput is the same amount of effort. And a wrong value won't give the user anything more useful than the equivalent of "this didn't work".

TL;DR

If you're TL;DR'ing an article labelled "the definitive guide to" you're kinda missing the point...

GTK+ Happenings

Posted by Matthias Clasen on January 02, 2017 12:39 AM

I said that I would post regular updates on what is happening in GTK+ 4 land. This was a while ago, so an update is overdue.

So, what's new?

Cleanup

Deprecation cleanup has continued, and is mostly done at this point. We have the beginning of a porting guide that mentions some of the required changes for early adopters who want to stick their toes into the GTK+ 4 waters. Sadly, I haven’t gotten the GTK+ 4 docs up on the website yet, so no link…

Among the things that have been dropped as part of our ongoing cleanup is the pixel cache, which should no longer be needed. This is nice since the pixel cache was causing problems, in particular in connection with transparency and component alpha (in font rendering).

Not really a cleanup, but we also got rid of the split into multiple shared objects (libgtk, libgdk, libgsk). Now, we just install a single libgtk, which also provides the gdk and gsk APIs. This has some small performance benefits, but mainly, it makes it easier for us to have private APIs that cross the gtk/gdk boundary.

Widget APIs

Some of the core APIs that are important when you are creating your own widgets have been changed around a bit:

  • The five different virtual functions that are used for size requisition have been replaced by a single new vfunc, measure(). This uses the same approach that we are already using for gadgets, where it has worked well.
  • The draw() virtual function that lets widgets render themselves onto a cairo surface has been replaced by the new snapshot() vfunc, which lets widgets create render nodes. This is essentially the change from direct to indirect rendering. Most widgets and gadgets have been ported over to this new way of doing things.

These changes are only important to you if you create your own widgets.
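
To give a feel for the new API, here is a hedged sketch of a measure() implementation. The vfunc signature follows its shape in the development tree and may still change; the sizes are made up for illustration. This lives inside a widget class implementation (with <gtk/gtk.h> included) and gets hooked up in class_init via widget_class->measure = my_widget_measure;.

static void
my_widget_measure (GtkWidget      *widget,
                   GtkOrientation  orientation,
                   int             for_size,
                   int            *minimum,
                   int            *natural,
                   int            *minimum_baseline,
                   int            *natural_baseline)
{
  /* one vfunc answers both axes, instead of five separate vfuncs */
  if (orientation == GTK_ORIENTATION_HORIZONTAL)
    {
      *minimum = 100;  /* made-up sizes for illustration */
      *natural = 200;
    }
  else
    {
      *minimum = 40;
      *natural = 60;
    }

  /* this widget has no baseline to report */
  *minimum_baseline = *natural_baseline = -1;
}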

Window APIs

GdkWindow has gained a few new constructors to replace the old libX11-style gdk_window_new.  Their names should indicate what they are good for:

  • gdk_window_new_toplevel
  • gdk_window_new_popup
  • gdk_window_new_temp
  • gdk_window_new_child
  • gdk_window_new_input
  • gdk_wayland_window_new_subsurface
  • gdk_x11_window_foreign_new_for_display

The last two are worth mentioning as examples where we move backend-specific functionality to backend APIs.

In the medium term, we are moving towards a world with only toplevel windows. As a first step towards this, we no longer support native child windows, and gdk_window_reparent() is gone. This allowed us to considerably simplify the GdkWindow code.

Renderers

When we initially merged GSK, it had a GL renderer and a software fallback (using cairo). Since then, Benjamin has created a Vulkan renderer. The renderer can be selected using the GSK_RENDERER environment variable.

So, for example, this is how to run gtk4-demo with the cairo renderer and the X11 backend:

GSK_RENDERER=cairo GDK_BACKEND=x11 gtk4-demo

After the GSK merge, we struggled a bit to come up with a working approach to converting all our widget and CSS rendering to render nodes. With the introduction of the snapshot() vfunc, we’ve been able to make progress on this front. As part of this effort, Benjamin changed the GSK API around a bit. There are now a bunch of special-purpose render node subclasses that let us effectively translate the CSS rendering, e.g.

  • gsk_linear_gradient_node_new
  • gsk_texture_node_new
  • gsk_color_node_new
  • gsk_border_node_new
  • gsk_transform_node_new

…and so on. More node types will be created as we discover the need for them.
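
As a hedged sketch of what these constructors look like in use (the gsk_color_node_new() signature follows what the API settled on around this time and may differ slightly in the current tree; the size is made up):

#include <gtk/gtk.h>

static GskRenderNode *
solid_red_square (void)
{
  GdkRGBA red = { 1.0, 0.0, 0.0, 1.0 };
  graphene_rect_t bounds = GRAPHENE_RECT_INIT (0, 0, 100, 100);

  /* instead of drawing with cairo, we describe what should be drawn */
  return gsk_color_node_new (&red, &bounds);
}

The renderer (GL, Vulkan or cairo) then decides how to actually paint the node, which is the whole point of the move to indirect rendering.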

New fun

As an example of new functionality that would be very hard to support adequately in GTK+ 3, Benjamin recently added gsk_color_matrix_node_new and used it to implement the CSS filter spec, which is good for a few screenshots:

[Video: https://blogs.gnome.org/mclasen/files/2017/01/color-filter2.webm]

Since this is all done on the GPU (unless you are using the software renderer), applying one of these filters does not affect performance much, as can be seen in this screencast of the fishbowl demo:

[Video: https://blogs.gnome.org/mclasen/files/2017/01/filter-perf1.webm]

Expect to see more uses of these new capabilities in GTK+ as things progress. Fun times ahead!

Last chance for ColorHug(1) users to get upgraded

Posted by Richard Hughes on December 28, 2016 05:08 PM

For the early adopters of the original ColorHug I've been offering a service where I send out all the newer parts so they can retrofit their device to the latest design. This included an updated LiveCD, the large velcro elasticated strap and the custom-cut foam pad that replaced the old foam feet. In the last two years I've sent out over 300 free upgrades, but this has slowed to a dribble recently, as later ColorHug1s and all ColorHug2s had all the improvements and extra bits included by default. I'm going to stop this offer soon as I need to make things simpler so I can introduce a new thing (+? :) next year. If you still need a HugStrap and gasket, please fill in the form before the 4th of January. Thanks, and Merry Christmas to all.

On recipes, one more time

Posted by Matthias Clasen on December 26, 2016 06:36 PM

I’m still not quite done with this project. And since it is vacation time, I had some time to spend on it, leading to a release with some improvements that I’d like to present briefly.

One thing I noticed missing right away when I started to transcribe one of my mother's recipes was a segmented ingredients list. What I mean by that is the typical cake recipe that will say "For the dough…" "For the frosting…"

So I had to add support for this before I could continue with the recipe. The result looks like this:

Another weak point that became apparent was editing the ingredients on the edit page.  Initially, the ingredients list was just a plain text field. The previous release changed this to a list view, but the editing support consisted just of a popover with plain entries to add a new row.

This turned out to be hard to get right, and I had to go back to the designers (thanks, Jakub and Elvin) to get some ideas.  I am reasonably happy with the end result. The popover now provides suggestions for both ingredients and units, while still allowing you to enter free-form text. And the same popover is now also available to edit existing ingredients:

Just in time for the Christmas release, I was reminded that we now have a nice and simple solution for spell-checking in GTK+ applications, with Sébastien Wilmet's gspell library. So I quickly added spell-checking to the text fields in Recipes:

Lastly, not really a new feature or due to my efforts, but Recipes looks really good in a dark theme as well.

Looking back at the goals that are listed on the design page for this application,  we are almost there:

  • Find delicious food recipes to cook from all over the world
  • Assist people with dietary restrictions
  • Allow defining ingredient constraints
  • Print recipes so I can pin them on my fridge
  • Share recipes with my friends using e-mail

The one thing that is not covered yet  is sharing recipes by email. For that, we need work on the Flatpak side, to create a sharing portal that lets applications send email.

And for the first goal we really need your support – if you have been thinking about writing up one of your favorite recipes, the holiday season is the perfect opportunity to cook it again, take some pictures of the result and contribute your recipe!


radv and doom - kinda

Posted by Dave Airlie on December 23, 2016 07:26 AM
Yesterday Valve gave me a copy of DOOM for Christmas (not really for Christmas), and I got the wine bits in place from Fedora, then I spent today trying to get DOOM to render on radv.



Thanks to ParkerR on #radeon for taking the picture from his machine, I'm too lazy.

So it runs, kinda. It hangs the GPU a fair bit, and it misrenders some colors in some scenes, but you can see most of it. I'm not sure if I'll get back to this before next year (I'll try), but I'm pretty happy to have gotten it this far in a day, though I'm sure the next few things will be much more difficult to debug.

The branch is here:
https://github.com/airlied/mesa/commits/radv-wip-doom-wine

xf86-input-synaptics is not a Synaptics, Inc. driver

Posted by Peter Hutterer on December 19, 2016 10:47 PM

This is a common source of confusion: the legacy X.Org driver for touchpads is called xf86-input-synaptics but it is not a driver written by Synaptics, Inc. (the company).

The repository goes back to 2002, and for the first couple of years Peter Osterlund was the sole contributor. Back then it was called "synaptics" and really was a "synaptics device" driver, i.e. it handled PS/2 protocol requests to initialise Synaptics, Inc. touchpads. Evdev support was added in 2003, punting the initialisation work to the kernel instead. This was the groundwork for a generic touchpad driver. In 2008 the driver was renamed to xf86-input-synaptics and relicensed from GPL to MIT to take it under the X.Org umbrella. I've been involved with it since 2008 and have been the official maintainer since 2011.

For many years now, the driver has been a generic touchpad driver that handles any device that the Linux kernel can handle. In fact, most bugs attributed to the synaptics driver not finding the touchpad are caused by the kernel not initialising the touchpad correctly. The synaptics driver reads the same evdev events that are also handled by libinput and the xf86-input-evdev driver, any differences in behaviour are driver-specific and not related to the hardware. The driver handles devices from Synaptics, Inc., ALPS, Elantech, Cypress, Apple and even some Wacom touch tablets. We don't care about what touchpad it is as long as the evdev events are sane.
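
To make "reads the same evdev events" concrete, here is a hedged sketch that dumps the raw kernel events all of these consumers see. The device node is an assumption (check /proc/bus/input/devices for yours), and you need read permission on it.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    /* assumed node; the touchpad's actual eventN varies per machine */
    int fd = open("/dev/input/event7", O_RDONLY);
    struct input_event ev;

    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        /* e.g. type EV_ABS with code ABS_X is an absolute x position */
        printf("type %u code %u value %d\n", ev.type, ev.code, ev.value);
    }
    close(fd);
    return 0;
}

If the events already look wrong at this level, the bug is in the kernel, not in synaptics, evdev or libinput.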

Synaptics, Inc.'s developers are active in kernel development to help get new touchpads up and running. Once the kernel handles them, the xorg drivers and libinput will handle them too. I can't remember any significant contribution by Synaptics, Inc. to the X.org synaptics driver, so they are simply neither to credit nor to blame for the current state of the driver. The top 10 contributors since August 2008 when the first renamed version of xf86-input-synaptics was released are:


8 Simon Thum
10 Hans de Goede
10 Magnus Kessler
13 Alexandr Shadchin
15 Christoph Brill
18 Daniel Stone
18 Henrik Rydberg
39 Gaetan Nadon
50 Chase Douglas
396 Peter Hutterer
There's a long tail of other contributors but the top ten illustrate that it wasn't Synaptics, Inc. that wrote the driver. Any complaints about Synaptics, Inc. not maintaining/writing/fixing the driver are missing the point, because this driver was never a Synaptics, Inc. driver. That's not a criticism of Synaptics, Inc. btw, that's just how things are. We should have renamed the driver to just xf86-input-touchpad back in 2008 but that ship has sailed now. And synaptics is about to be superseded by libinput anyway, so it's simply not worth the effort now.

The other reason I included the commit count in the above: I'm also the main author of libinput. So "the synaptics developers" and "the libinput developers" are effectively the same person, i.e. me. Keep that in mind when you read random comments on the interwebs, it makes it easier to identify people just talking out of their behind.

libinput touchpad pointer acceleration analysis

Posted by Peter Hutterer on December 19, 2016 09:36 PM

A long-standing criticism of libinput is its touchpad acceleration code, oscillating somewhere between "terrible", "this is bad and you should feel bad" and "I can't complain because I keep missing the bloody send button". I finally found the time and some more laptops to sit down and figure out what's going on.

I recorded touch sequences of the following movements:

  • super-slow: a very slow movement as you would do when pixel-precision is required. I recorded this by effectively slowly rolling my finger. This is an unusual but sometimes required interaction.
  • slow: a slow movement as you would do when you need to hit a target several pixels across from a short distance away, e.g. the Firefox tab close button
  • medium: a medium-speed movement though probably closer to the slow side. This would be similar to the movement when you move 5cm across the screen.
  • medium-fast: a medium-to-fast speed movement. This would be similar to the movement when you move 5cm across the screen onto a large target, e.g. when moving between icons in the file manager.
  • fast: a fast movement. This would be similar to the movement when you move between windows some distance apart.
  • flick: a flick movement. This would be similar to the movement when you move to a corner of the screen.
Note that all these are by definition subjective and somewhat dependent on the hardware. Either way, I tried to get something of a reasonable subset.

Next, I ran this through a libinput 1.5.3 augmented with printfs in the pointer acceleration code and a script to post-process that output. Unfortunately, libinput's pointer acceleration internally uses units equivalent to a 1000dpi mouse and that's not something easy to understand. Either way, the numbers themselves don't matter too much for analysis right now and I've now switched everything to mm/s anyway.

A note ahead: the analysis relies on libinput recording an evemu replay. That relies on uinput and event timestamps are subject to a little bit of drift across recordings. Some differences in the before/after of the same recording can likely be blamed on that.

The graph I'll present for each recording is relatively simple: it shows the velocity and the matching factor. The x axis is simply the events in sequence; the y axes are the factor and the velocity (note: two different scales in one graph). The graph also colours in the bits that see some type of acceleration: green means "maximum factor applied", yellow means "decelerated", and the purple "adaptive" means per-velocity acceleration is applied. Anything that remains white is used as-is (aside from the constant deceleration).

Interesting numbers for the factor are 0.4 and 0.8. We have a constant factor of 0.4 on touchpads, i.e. a factor of 0.4 means "no acceleration applied"; 0.8 is the maximum factor. The maximum factor is twice as big as the normal factor, so the pointer moves twice as fast. Anything below 0.4 means we decelerate the pointer, i.e. the pointer moves slower than the finger.
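
A quick arithmetic sketch of what those numbers mean for the pointer (not libinput's actual code; it only illustrates the relative speeds described above, and the finger delta is made up):

#include <stdio.h>

int main(void)
{
    const double baseline = 0.4;            /* constant touchpad factor */
    const double finger_delta_mm = 2.0;     /* assumed finger movement */
    const double factors[] = { 0.2, 0.4, 0.8 }; /* decelerated, as-is, max */

    for (int i = 0; i < 3; i++) {
        /* relative to the baseline: 0.4 -> 1x, 0.8 -> 2x, 0.2 -> 0.5x */
        double pointer_delta = finger_delta_mm * (factors[i] / baseline);
        printf("factor %.1f -> pointer moves %.1f mm\n",
               factors[i], pointer_delta);
    }
    return 0;
}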

The super-slow movement shows that the factor is, aside from the beginning, always below 0.4, i.e. the sequence sees deceleration applied. The takeaway here is that acceleration appears to be doing the right thing: slow motion is decelerated, and while there may or may not be some tweaking to do, there is no smoking gun.


Super slow motion is decelerated.

The slow movement shows that the factor is almost always 0.4, aside from a few extremely slow events. This indicates that for the slow speed, the pointer movement maps exactly to the finger movement save for our constant deceleration. As above, there is no indicator that we're doing something seriously wrong.


Slow motion is largely used as-is with a few decelerations.

The medium movement gets interesting. If we look at the factor applied, it changes wildly with the velocity across the whole range between 0.4 and the maximum 0.8. There is a short spike at the beginning where it maxes out but the rest is accelerated on-demand, i.e. different finger speeds will produce different acceleration. This shows the crux of what a lot of users have been complaining about - what is a fairly slow motion still results in an accelerated pointer. And because the acceleration changes with the speed the pointer behaviour is unpredictable.


In medium-speed motion acceleration changes with the speed and even maxes out.

The medium-fast movement shows almost the whole movement maxing out on the maximum acceleration factor, i.e. the pointer moves at twice the speed of the finger. This is a problem because this is roughly the speed you'd use to hit a "mentally preselected" target, i.e. you know exactly where the pointer should end up and you're just intuitively moving it there. If the pointer moves twice as fast, you're going to overshoot, and indeed that's what I observed during the touchpad tap analysis user study.


Medium-fast motion easily maxes out on acceleration.

The fast movement shows basically the same thing, almost the whole sequence maxes out on the acceleration factor so the pointer will move twice as far as intuitively guessed.


Fast motion maxes out acceleration.

So does the flick movement, but in that case we want it to go as far as possible. Note that the speeds between fast and flick are virtually identical here; I'm not sure if that's me being equally fast or the touchpad not quite picking up on the short motion.


Flick motion also maxes out acceleration.

Either way, the takeaway is simple: we accelerate too soon, and there's a fairly narrow window where we have adaptive acceleration, so it's very easy to top out. The simplest fix to get most touchpad movements working well is to increase the current threshold at which acceleration applies. Beyond that it's a bit harder to quantify, but a good idea seems to be to stretch out the acceleration function so that the factor changes at a slower rate as the velocity increases, and to raise the maximum factor so we don't top out and keep going as the finger goes faster. This would be the intuitive expectation since it resembles physics (more or less).
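
As a hedged sketch of the shape of that change (not libinput's actual profile, and every constant here is an assumption): acceleration starts above a higher velocity threshold, and the factor then grows gently with speed instead of jumping to a hard cap.

#include <stdio.h>

static double accel_factor(double v_mm_s)
{
    const double baseline  = 0.4;   /* "pointer matches finger" */
    const double threshold = 70.0;  /* assumed: acceleration starts here */
    const double slope     = 0.004; /* assumed: gentler ramp */

    if (v_mm_s <= threshold)
        return baseline;
    /* keep growing with speed instead of topping out at a fixed maximum */
    return baseline + slope * (v_mm_s - threshold);
}

int main(void)
{
    for (double v = 0.0; v <= 400.0; v += 50.0)
        printf("%6.1f mm/s -> factor %.2f\n", v, accel_factor(v));
    return 0;
}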

There's a set of patches on the list now that does exactly that. So let's see what the result of this is. Note ahead: I also switched everything to mm/s, which causes some numbers to shift slightly.

The super-slow motion is largely unchanged though the velocity scale changes quite a bit. Part of that is that the new code has a different unit which, on my T440s, isn't exactly 1000dpi. So the numbers shift and the result of that is that deceleration applies a bit more often than before.


Super-slow motion largely remains the same.

The slow motions are largely unchanged but more deceleration is now applied. Tbh, I'm not sure if that's an artefact of the evemu replay, the new accel code or the result of the not-quite-1000dpi of my touchpad.


Slow motion largely remains the same.

The medium motion is the first interesting one because that's where we had the first observable issues. In the new code, the motion is almost entirely unaccelerated, i.e. the pointer will move as the finger does. Success!


Medium-speed motion now matches the finger speed.

The same is true of the medium-fast motion. In the recording the first few events were past the new thresholds so some acceleration is applied, the rest of the motion matches finger motion.


Medium-fast motion now matches the finger speed except at the beginning where some acceleration was applied.

The fast and flick motion are largely identical in having the acceleration factor applied to almost the whole motion but the big change is that the factor now goes up to 2.3 for the fast motion and 2.5 for the flick motion, i.e. both movements would go a lot faster than before. In the graphics below you still see the blue area marked as "previously max acceleration factor" though it does not actually max out in either recording now.


Fast motion increases acceleration as speed increases.

Flick motion increases acceleration as speed increases.

In summary, what this means is that the new code accelerates later but when it does accelerate, it goes faster. I tested this on a T440s, a T450p and an Asus VivoBook with an Elantech touchpad (which is almost unusable with current libinput). They don't quite feel the same yet and I'm not happy with the actual acceleration, but for 90% of 'normal' movements the touchpad now behaves very well. So at least we go from "this is terrible" to "this needs tweaking". I'll go check if there's any champagne left.

Another look at GNOME recipes

Posted by Matthias Clasen on December 19, 2016 11:53 AM

It has been a few weeks since I’ve first talked about this new app that I’ve started to work on, GNOME recipes.

Since then, a few things have changed. We have a new details page, which makes better use of the available space with a 2 column layout.

Among the improved details here is a more elaborate ingredients list. Also new is the image viewer, which lets you cycle through the available photos for the recipe without getting in the way too much.

[Video: https://blogs.gnome.org/mclasen/files/2016/12/image-switcher.webm]

We also use a 2 column layout when editing a recipe now.

Most importantly, as you can see in these screenshots, we have received some contributed recipes. Thanks to everybody who has sent us one! If you haven’t yet, please do. You may win a prize, if we can work out the logistics :-)

If you want to give recipes a try, the sources are here: https://git.gnome.org/browse/recipes/ and here is a recent Flatpak.

Update: With the just-released flatpak 0.8.0, installing the Flatpak from the .flatpakref file I linked above is as simple as this:

$ flatpak install --from https://raw.githubusercontent.com/alexlarsson/test-releases/master/gnome-recipes.flatpakref
read flatpak info from GTK_USE_PORTAL: network: 1 portal: 0
This application depends on runtimes from:
 http://sdk.gnome.org/repo/
Configure this as new remote 'gnome-1' [y/n]: y
Installing: org.gnome.Recipes/x86_64/master
Updating: org.gnome.Platform/x86_64/3.22 from gnome
No updates.
Updating: org.gnome.Platform.Locale/x86_64/3.22 from gnome
No updates.
Installing: org.gnome.Recipes/x86_64/master from org.gnome.Recipes-origin

1 delta parts, 5 loose fetched; 20053 KiB transferred in 8 seconds 
Installing: org.gnome.Recipes.Locale/x86_64/master from org.gnome.Recipes-origin

5 metadata, 1 content objects fetched; 1 KiB transferred in 0 seconds

Making your own retro keyboard

Posted by Bastien Nocera on December 15, 2016 04:48 PM
We're about a week before Christmas, and I'm going to explain how I created a retro keyboard as a gift to my father, who introduced me to computers when he brought home a Thomson TO7, all the way back in 1985.

The original idea was to use a Thomson computer as the case for a smaller computer, such as a CHIP or Raspberry Pi, but software update support would have been difficult, the use limited to the builtin programs, and it would have required a separate screen. So I restricted myself to only making a keyboard. It was a big enough task, as we'll see.

How do keyboards work?

Loads of switches, that's how. I'll point you to Michał Trybus' blog post « How to make a keyboard - the matrix » for details on how this works. You'll just need to remember that most of the keyboards present in those older computers have no support for xKRO, and that the micro-controller we'll be using already has the necessary pull-up resistors built in.
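
If you want the gist without the full article, here is a hedged sketch of the scanning technique it describes: drive one column low at a time and read which rows went low. The port assignments (columns on PORTB, rows on PORTD) and the 8x8 matrix size are assumptions for illustration, not the MO5's actual wiring, and a real firmware (like the one we'll use below) also debounces across scans.

#define F_CPU 16000000UL  /* the Teensy 2.0 runs at 16 MHz */
#include <avr/io.h>
#include <util/delay.h>
#include <stdint.h>

int main(void)
{
    uint8_t state[8];  /* one row bitmask per column */

    DDRD  = 0x00;      /* rows are inputs...           */
    PORTD = 0xFF;      /* ...with internal pull-ups on */

    for (;;) {
        for (uint8_t col = 0; col < 8; col++) {
            DDRB  = (uint8_t)(1u << col);   /* active column is the only output */
            PORTB = (uint8_t)~(1u << col);  /* drive it low */
            _delay_us(30);                  /* let the lines settle */
            state[col] = (uint8_t)~PIND;    /* a 1 bit = key down at (row, col) */
        }
        /* a real firmware now diffs state[] against the previous scan,
         * debounces, and sends USB HID reports for any changes */
    }
}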

The keyboard hardware

I chose the smallest Thomson computer available for my project, the MO5. I could have used a stand-alone keyboard, but that would have lost all the charm (it just looks like a PC keyboard), and some other computers have much bigger form factors, to accommodate cartridge, cassette or floppy disk readers.

The DCMoto emulator's website includes tons of documentation, including technical documentation explaining the inner workings of each one of the chipsets on the mainboard. In one of those manuals, you'll find this page:



Whoot! The keyboard matrix in detail; no need for us to discover it with a multimeter.

That needs a wash in soapy water

After opening up the computer, and possibly giving the internals a good clean (especially the keyboard, if it has mechanical keys), we'll need to see how the keyboard is connected.

Finicky metal covered plastic

Those keyboards usually are membrane keyboards, with pressure pads, so we'll need to either find replacement connectors at our local electronics store, or desolder the ones on the motherboard. I chose the latter option.

Desoldered connectors

After matching the physical connectors to the rows and columns in the matrix, using a multimeter and a few key presses, we now know which connector pin corresponds to which connector on the matrix. We can start soldering.

The micro-controller

The micro-controller in my case is a Teensy 2.0, an Atmel AVR-based micro-controller with a very useful firmware that makes it very very difficult to brick. You can either press the little button on the board itself to upload new firmware, or wire it to an external momentary switch. The funny thing is that the Atmega32U4 is 16 times faster than the original CPU (yeah, we're getting old).

I chose to wire it to the "Initial. Prog" ("Reset") button on the keyboard, so as to make it easy to upload new firmware. To do this, I needed to cut a few traces coming out of the physical switch on the board, using a tile cutter, to avoid interference from other components on the board. This is completely optional: if you're only going to use firmware that you already know at least somewhat works, you can instead set a key combo in the firmware to go into firmware-upload mode. We'll get back to that later.

As far as connecting and soldering to the pins goes, we can use any I/O pins we want, except D6, which is connected to the board's LED. Note that if you deviate from the pinout used in your firmware, you'll need to make matching changes to the firmware. We'll come back to that again in a minute.

The soldering

Colorful tinning

I wanted to keep the external ports populated, so it didn't look like there were holes in the case, and there was enough headroom inside the case to fit the original board, the Teensy and the pins on the board. That makes it easy to rewire in case of error. You could also dremel (yes, used as a verb) a hole in the board.

As always, make sure early on that things will fit, especially the cables!

The unnecessary pollution

The firmware

Fairly early on during my research, I found the TMK keyboard firmware, as well as a very well written forum post with detailed explanations on how to modify an existing firmware for your own uses.

This is what I used to modify the firmware for the gh60 keyboard for my own use. You can see here a step-by-step example, implementing the modifications in the same order as the forum post.

Once you've followed the steps, you'll need to compile the firmware. Fedora ships with the necessary packages, so it's a simple:


sudo dnf install -y avr-libc avr-binutils avr-gcc

I also compiled and installed in my $PATH the teensy_loader_cli firmware uploader, and fixed up the udev rules. And after a "make teensy" and a button press...

It worked first time! This is a good time to verify that all the keys work, and you don't see doubled-up letters because of short circuits in your setup. I had 2 wires touching, and one column that just didn't work.

I also prepared a stand-alone repository, with a firmware that uses the tmk_core from the tmk firmware, instead of modifying an existing one.

Some advice

This isn't the first time I've hacked on hardware, but I'll repeat some old adages and advice, because I rarely heed those warnings, and I regret it...
  • Don't forget the size, length and non-flexibility of cables in your design
  • Plan ahead when you're going to cut or otherwise modify hardware, because you might regret it later
  • Use breadboard cables and pins to connect things, if you have the room
  • Don't hotglue until you've tested and retested and are sure you're not going to make more modifications
That last one explains the slightly funny cabling of my keyboard.

Finishing touches

All Sugru'ed up

To finish things off nicely, I used Sugru to hold the USB cable coming out of the machine in place. As before, this avoids having an opening onto the internals.

There are a couple more things that I'll need to finish up before delivery. First, the keymap I have chosen in the firmware only works when a US keymap is selected. I'll need to make a keymap for Linux, possibly hard-coding it. I will also need to create a Windows keymap for my father to use (yep, genealogy software on Linux isn't quite up to par).

Prototype and final hardware

All this will happen in the aforementioned repository. And if you ever make your own keyboard, I'm happy to merge in changes to this repository with documentation for your Speccy, C64, or Amstrad CPC hacks.

(If somebody wants to buy me a Sega keyboard, I'll gladly work on a non-destructive adapter. Get in touch :)