Fedora desktop Planet

Android permissions and hypocrisy

Posted by Matthew Garrett on January 23, 2017 07:58 AM
I wrote a piece a few days ago about how the Meitu app asked for a bunch of permissions in ways that might concern people, but which were not actually any worse than many other apps. The fact that Android makes it so easy for apps to obtain data that's personally identifiable is of concern, but in the absence of another stable device identifier this is the sort of thing that capitalism is inherently going to end up making use of. Fundamentally, this is Google's problem to fix.

Around the same time, Kaspersky, the Russian anti-virus company, wrote a blog post that warned people about this specific app. It was framed somewhat misleadingly - "reading, deleting and modifying the data in your phone's memory" would probably be interpreted by most people as something other than "the ability to modify data on your phone's external storage", although it ends with some reasonable advice that users should ask why an app requires some permissions.

So, to that end, here are the permissions that Kaspersky request on Android:
  • android.permission.READ_CONTACTS
  • android.permission.WRITE_CONTACTS
  • android.permission.READ_SMS
  • android.permission.WRITE_SMS
  • android.permission.READ_PHONE_STATE
  • android.permission.CALL_PHONE
  • android.permission.SEND_SMS
  • android.permission.RECEIVE_SMS
  • android.permission.RECEIVE_BOOT_COMPLETED
  • android.permission.WAKE_LOCK
  • android.permission.WRITE_EXTERNAL_STORAGE
  • android.permission.SUBSCRIBED_FEEDS_READ
  • android.permission.READ_SYNC_SETTINGS
  • android.permission.WRITE_SYNC_SETTINGS
  • android.permission.WRITE_SETTINGS
  • android.permission.INTERNET
  • android.permission.ACCESS_COARSE_LOCATION
  • android.permission.ACCESS_FINE_LOCATION
  • android.permission.READ_CALL_LOG
  • android.permission.WRITE_CALL_LOG
  • android.permission.RECORD_AUDIO
  • android.permission.SET_PREFERRED_APPLICATIONS
  • android.permission.WRITE_APN_SETTINGS
  • android.permission.READ_CALENDAR
  • android.permission.WRITE_CALENDAR
  • android.permission.KILL_BACKGROUND_PROCESSES
  • android.permission.RESTART_PACKAGES
  • android.permission.MANAGE_ACCOUNTS
  • android.permission.GET_ACCOUNTS
  • android.permission.MODIFY_PHONE_STATE
  • android.permission.CHANGE_NETWORK_STATE
  • android.permission.ACCESS_NETWORK_STATE
  • android.permission.ACCESS_LOCATION_EXTRA_COMMANDS
  • android.permission.ACCESS_WIFI_STATE
  • android.permission.CHANGE_WIFI_STATE
  • android.permission.VIBRATE
  • android.permission.READ_LOGS
  • android.permission.GET_TASKS
  • android.permission.EXPAND_STATUS_BAR
  • com.android.browser.permission.READ_HISTORY_BOOKMARKS
  • com.android.browser.permission.WRITE_HISTORY_BOOKMARKS
  • android.permission.CAMERA
  • com.android.vending.BILLING
  • android.permission.SYSTEM_ALERT_WINDOW
  • android.permission.BATTERY_STATS
  • android.permission.MODIFY_AUDIO_SETTINGS
  • com.kms.free.permission.C2D_MESSAGE
  • com.google.android.c2dm.permission.RECEIVE
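(You can verify this list yourself with standard Android SDK tooling. The APK file name below is just an example; the com.kms.free package name comes from the app's own C2D_MESSAGE permission above.)

aapt dump permissions KasperskyInternetSecurity.apk
adb shell dumpsys package com.kms.free | grep permission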

Every single permission that Kaspersky mention Meitu having? They require it as well. And a lot more. Why does Kaspersky want the ability to record audio? Why does it want to be able to send SMSes? Why does it want to read my contacts? Why does it need my fine-grained location? Why is it able to modify my settings?

There's no reason to assume that they're being malicious here. The reason that these permissions exist at all is that there are legitimate reasons to use them, and Kaspersky may well have good reason to request them. But they don't explain that, and they do literally everything that their blog post criticises (including explicitly requesting the phone's IMEI). Why should we trust a Russian company more than a Chinese one?

The moral here isn't that Kaspersky are evil or that Meitu are virtuous. It's that talking about application permissions is difficult and we don't have the language to explain to users what our apps are doing and why they're doing it, and Google are still falling far short of where they should be in terms of making this transparent to users. But the other moral is that you shouldn't complain about the permissions an app requires when you're asking for even more of them because it just makes you look stupid and bad at your job.


Debugging a Flatpak application

Posted by Matthias Clasen on January 20, 2017 04:45 PM

Since I’ve been asking people to try the recipes app with Flatpak, I can’t complain too much if I get bug reports back. But how does one create a useful bug report when something goes wrong in a Flatpak sandbox? Some of the stacktraces I’ve seen have not been very useful, since they are lacking symbols.

This post is a quick attempt to spread some basics about Flatpak debugging.

Normally, you run your Flatpak app like this:

flatpak run org.gnome.Recipes

Well, that’s not quite true; the “normal” way to launch the Flatpak is just the same as launching a non-Flatpak app: click on the icon, or hit the Super key, type recipes, hit Enter. But let’s assume you’re launching flatpak from the commandline.

What happens behind the scenes here is that flatpak finds the metadata for org.gnome.Recipes, determines which runtime it needs, sets up the sandbox by mounting the app in /app and the runtime in /usr, does some more sandboxy stuff, and eventually launches the app.
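If you’re curious, you can read that metadata yourself. For a per-user installation it lives at the path below (an assumption about your setup; system-wide installations use /var/lib/flatpak instead):

cat ~/.local/share/flatpak/app/org.gnome.Recipes/current/active/metadata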

First problem for bug reporting: we want to run the app under gdb to get a stacktrace when it crashes.  Here is how you do that:

flatpak run --command=sh org.gnome.Recipes

Running this command, you’ll end up with a shell prompt “inside” the recipes sandbox. This is great, because we can now launch our app under gdb (note that the application gets installed in the /app prefix):

$ gdb /app/bin/recipes

Except… this fails because there is no gdb. Remember that we are inside the sandbox, so we can only run what is either shipped with the app in /app/bin or with the runtime in /usr/bin. And gdb is not part of either.

Thankfully, for each runtime, there is a corresponding sdk, which is just like the runtime, except it includes the stuff you need to develop and debug: headers, compilers, debuggers and other useful tools. And flatpak has a handy commandline option to use the sdk instead of the regular runtime:

flatpak run --devel --command=sh org.gnome.Recipes

The --devel option tells flatpak to use the sdk instead of the runtime and to do some other things that make debugging in the sandbox work.
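Putting it all together, a debugging session looks roughly like this (the shell prompt inside the sandbox may look different on your system):

$ flatpak run --devel --command=sh org.gnome.Recipes
sh-4.3$ gdb /app/bin/recipes
(gdb) run
… reproduce the crash …
(gdb) bt full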

Now for the last trick: I was complaining about stacktraces without symbols at the beginning. In rpm-based distributions, the debug symbols are split off into debuginfo packages. Flatpak does something similar and splits all the debug information of runtimes and apps into separate “runtime extensions”, which by convention have .Debug appended to their name. So the debug info for org.gnome.Recipes is in the org.gnome.Recipes.Debug extension.

When you use the --devel option, flatpak automatically includes the Debug extensions for the application and runtime, if they are available. So, for the most useful stacktraces, make sure that you have the Debug extensions for the apps and runtimes in question installed.
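If they are not installed yet, you can add them like any other ref. A sketch, assuming the runtime comes from a remote called gnome and you are on the 3.22 branch (adjust both for your setup); the app’s own .Debug extension is installed the same way, from whichever remote the app came from:

flatpak install gnome org.gnome.Sdk.Debug 3.22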

Hope this helps!

Most of this information was taken from the Flatpak wiki.

Android apps, IMEIs and privacy

Posted by Matthew Garrett on January 19, 2017 11:36 PM
There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask for you to grant the permission again and refuse to do so if you don't.
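If you have adb set up, the same can be done from the command line on Android 6 and later. The package name below is just a placeholder; substitute the app you care about:

adb shell pm revoke com.example.someapp android.permission.READ_PHONE_STATE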

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, and there are big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google do anything to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.


Recipes for you and me

Posted by Matthias Clasen on January 18, 2017 08:31 PM

Since I last wrote about recipes, we’ve started to figure out what we can achieve in time for GNOME 3.24, with an eye towards delivering a useful application. The result is this plan, which should be doable.

But: your help is needed. We need more recipe contributions from the GNOME community to have a well-populated initial experience. Everybody who contributes a recipe before 3.24 will get a little thank-you from us, so don’t delay…

The 0.8.0 release that I’ve just created already contains the first steps of this plan. One thing we decided is that we don’t have the time and resources to make the ingredients view useful by March, so the Ingredients tab is gone for now.

At the same time, there’s a new feature here, and that is the blue tile leading to the shopping list view:

The design for this page is still a bit up in the air, so you should expect this to change in the next releases. I decided to merge it already anyway, since I am impatient, and this view already provides useful functionality. You can print the shopping list:

Beyond this, I’ve spent some time on polishing and fixing bugs. One thing that I’ve discovered to my embarrassment earlier this week is that exporting recipes from the flatpak did not actually work. I had only ever tested this with an un-sandboxed local build.

Sorry to everyone who tried to export their recipe and was left wondering why it didn’t work!

We’ve now fixed all the bugs that were involved here, both in recipes and in the file chooser portal and in the portal infrastructure itself, and exporting recipes works fine with the current flatpak, which, as always, you can install from here:

https://alexlarsson.github.io/test-releases/gnome-recipes.flatpakref

One related issue that became apparent during this bug hunt is that things work less than perfectly if the portals are not present on the host system. Until that becomes less likely, I’ve added a bit of code to make the failure less mysterious, and give you some idea how to fix it:

I think recipes is proving its value as  a test bed and early adopter for flatpak and portals. At this point, it is using the file chooser portal, the account information portal, the print portal, the notification portal, the session inhibit portal, and it would also use the sharing portal, if we had that already.

I shouldn’t close this post without mentioning that you will have a chance to hear a bit from Elvin about the genesis of this application in the Fosdem design devroom. See you there!

The definitive guide to synclient

Posted by Peter Hutterer on January 03, 2017 05:45 AM

This post describes the synclient tool, part of the xf86-input-synaptics package. It does not describe the various options, that's what the synclient(1) and synaptics(4) man pages are for. This post describes what synclient is, where it came from and how it works on a high level. Think of it as an anti-bus-factor post.

Maintenance status

The most important thing first: synclient is part of the synaptics X.Org driver which is in maintenance mode, and superseded by libinput and the xf86-input-libinput driver. In general, you should not be using synaptics anymore anyway, switch to libinput instead (and report bugs where the behaviour is not correct). It is unlikely that significant additional features will be added to synclient or synaptics and bugfixes are rare too.

The interface

synclient's interface is extremely simple: it's a list of key/value pairs that are all set at the same time. For example, the following command sets two options, TapButton1 and TapButton2:


synclient TapButton1=1 TapButton2=2
The -l switch lists the current values in one big list:

$ synclient -l
Parameter settings:
LeftEdge = 1310
RightEdge = 4826
TopEdge = 2220
BottomEdge = 4636
FingerLow = 25
FingerHigh = 30
MaxTapTime = 180
...
The commandline interface is effectively a mapping of the various xorg.conf options. As said above, look at the synaptics(4) man page for details on each option.
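Since -l prints everything, piping it through grep is the quickest way to check an individual option (the values shown here reflect the settings from the example above):

$ synclient -l | grep TapButton
TapButton1 = 1
TapButton2 = 2
TapButton3 = 0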

History

A decade ago, the X server had no capabilities to change driver settings at runtime. Changing a device's configuration required rewriting an xorg.conf file and restarting the server. To avoid this, the synaptics X.Org touchpad driver exposed a shared memory (SHM) segment. Anyone with knowledge of the memory layout (an internal struct) and permission to write to that segment could change driver options at runtime. This is how synclient came to be: it was the tool that knew that memory layout. A synclient command would thus set the correct bits in the SHM segment and the driver would use the newly updated options. For obvious reasons, synclient and synaptics had to be the same version to work.

8 or so years ago, the X server got support for input device properties, a generic key/value store attached to each input device. The keys are the properties, identified by an "Atom" (see the note below). The values are driver-specific. All drivers make use of this now; being able to change a setting at runtime is the result of changing a property that the driver knows of.

Atoms are 32-bit unsigned integers and created for each property name at runtime. They represent a unique string (the property name) and can be created by applications too. Property name to Atom mappings are global. Once any driver initialises a property by its name (e.g. "Synaptics Tap Action"), that property and the corresponding Atom will exist globally until the server resets. Atoms unknown to a driver are simply ignored.

synclient was converted to use properties instead of the SHM segment and eventually the SHM support was removed from both synclient and the driver itself. The backend to synclient is thus identical to the one used by the xinput tool or tools used by other drivers (e.g. the xsetwacom tool). synclient's killer feature was that it was the only tool that knew how to configure the driver; these days it's merely a commandline-argument-to-property mapping tool. xinput, GNOME, KDE, they all do the same thing in the backend.

How synclient works

The driver has properties of a specific name, format and value range. For example, the "Synaptics Tap Action" property contains 7 8-bit values, each representing a button mapping for a specific tap action. If you change the fifth value of that property, you change the button mapping for a single-finger tap. Another property "Synaptics Off" is a single 8-bit value with an allowed range of 0, 1 or 2. The properties are described in the synaptics(4) man page. There is no functional difference between this synclient command:


synclient TouchpadOff=1
and this xinput command

xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Off" 1
Both set the same property with the same calls. synclient uses XI 1.x's XChangeDeviceProperty() and xinput uses XI 2.x's XIChangeProperty() if available but that doesn't really matter. They both fetch the property, overwrite the respective value and send it back to the server.
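One practical difference with xinput: for multi-valued properties you have to pass all values, so to change just the single-finger tap mapping (the fifth value, as described above) you re-send the other six unchanged. The device name and the surrounding values below are examples from one setup, not universal defaults:

xinput set-prop "SynPS/2 Synaptics TouchPad" "Synaptics Tap Action" 0 0 0 0 1 3 2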

Pitfalls and quirks

synclient is a simple tool. If multiple touchpads are present it will simply pick the first one. This is a common issue for users with an i2c touchpad and will be even more common once the RMI4/SMBus support is in a released kernel. In both cases, the kernel creates the i2c/SMBus device and an additional PS/2 touchpad device that never sends events. So if synclient picks that device, all the settings are changed on a device that doesn't actually send events. This depends on the order the devices were added to the X server and can vary between reboots. You can work around that by disabling or ignoring the PS/2 device.
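To check whether you're affected, list the input devices and disable the mute PS/2 one. The device names below are examples, yours will differ:

$ xinput list | grep -i -e touchpad -e synaptics
↳ SynPS/2 Synaptics TouchPad        id=11   [slave pointer (2)]
↳ Synaptics TM3053-003              id=12   [slave pointer (2)]
$ xinput disable "SynPS/2 Synaptics TouchPad"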

synclient is a one-shot tool, it does not monitor devices. If a device is added at runtime, the user must run the command to change settings. If a device is disabled and re-enabled (VT-switch, suspend/resume, ...), the user must run synclient to change settings. This is a major reason we recommend against using synclient, the desktop environment should take care of this. synclient will also conflict with the desktop environment in that it isn't aware when something else changes things. If synclient runs before the DE's init scripts (e.g. through xinitrc), its settings may be overwritten by the DE. If it runs later, it overwrites the DE's settings.

synclient exclusively supports synaptics driver properties. It cannot change any other driver's properties and it cannot change the properties created by the X server on each device. That's another reason we recommend against it, because you have to mix multiple tools to configure all devices instead of using e.g. the xinput tool for all property changes. Or, as above, letting the desktop environment take care of it.

The interface of synclient is IMO not significantly more obvious than setting the input properties directly. One has to look up what TapButton1 does anyway, so looking up how to set the property with the more generic xinput is the same amount of effort. A wrong value won't give the user anything more useful than the equivalent of a "this didn't work".

TL;DR

If you're TL;DR'ing an article labelled "the definitive guide to" you're kinda missing the point...

GTK+ Happenings

Posted by Matthias Clasen on January 02, 2017 12:39 AM

I said that I would post regular updates on what is happening in GTK+ 4 land. This was a while ago, so an update is overdue.

So, what’s new?

Cleanup

Deprecation cleanup has continued, and is mostly done at this point. We have the beginning of a porting guide that mentions some of the required changes for early adopters who want to stick their toes into the GTK+ 4 waters. Sadly, I haven’t gotten the GTK+ 4 docs up on the website yet, so no link…

Among the things that have been dropped as part of our ongoing cleanup has been the pixel cache, which should no longer be needed. This is nice since the pixel cache was causing problems, in particular in connection with transparency and component alpha (in font rendering).

Not really a cleanup, but we also got rid of the split into multiple shared objects (libgtk, libgdk, libgsk). Now, we just install a single libgtk, which also provides the gdk and gsk APIs. This has some small performance benefits, but mainly, it makes it easier for us to have private APIs that cross the gtk/gdk boundary.

Widget APIs

Some of the core APIs that are important when you are creating your own widgets have been changed around a bit:

  • The five different virtual functions that are used for size requisition have been replaced by a single new vfunc, measure(). This is using the same approach that we are already using for gadgets, where it has worked well.
  • The draw() virtual function that lets widgets render themselves onto a cairo surface has been replaced by the new snapshot() vfunc, which lets widgets create render nodes. This is essentially the change from direct to indirect rendering. Most widgets and gadgets have been ported over to this new way of doing things.

These changes are only important to you if you create your own widgets.

Window APIs

GdkWindow has gained a few new constructors to replace the old libX11-style gdk_window_new.  Their names should indicate what they are good for:

  • gdk_window_new_toplevel
  • gdk_window_new_popup
  • gdk_window_new_temp
  • gdk_window_new_child
  • gdk_window_new_input
  • gdk_wayland_window_new_subsurface
  • gdk_x11_window_foreign_new_for_display

The last two are worth mentioning as examples where we move backend-specific functionality to backend APIs.

In the medium term, we are moving towards a world with only toplevel windows. As a first step towards this, we no longer support native child windows, and gdk_window_reparent() is gone. This allowed us to considerably simplify the GdkWindow code.

Renderers

When we initially merged GSK, it had a GL renderer and a software fallback (using cairo). Since then, Benjamin has created a Vulkan renderer. The renderer can be selected using the GSK_RENDERER environment variable.

So, for example, this is how to run gtk4-demo with the cairo renderer and the X11 backend:

GSK_RENDERER=cairo GDK_BACKEND=x11 gtk4-demo

After the GSK merge, we struggled a bit to come up with a working approach to converting all our widget and CSS rendering to render nodes. With the introduction of the snapshot() vfunc, we’ve been able to make progress on this front. As part of this effort, Benjamin changed the GSK API around a bit. There are now a bunch of special-purpose render node subclasses that let us effectively translate the CSS rendering, e.g.

  • gsk_linear_gradient_node_new
  • gsk_texture_node_new
  • gsk_color_node_new
  • gsk_border_node_new
  • gsk_transform_node_new

…and so on. More node types will be created as we discover the need for them.

New fun

As an example of new functionality that would be very hard to support adequately in GTK+ 3, Benjamin recently added gsk_color_matrix_node_new and used it to implement the CSS filter spec, which is good for a few screenshots:

<video class="wp-video-shortcode" controls="controls" height="261" id="video-1725-1" loop="1" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2017/01/color-filter2.webm?_=1" type="video/webm">https://blogs.gnome.org/mclasen/files/2017/01/color-filter2.webm</video>

Since this is all done on the GPU (unless you are using the software renderer), applying one of these filters does not affect performance much, as can be seen in this screencast of the fishbox demo:

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1725-2" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2017/01/filter-perf1.webm?_=2" type="video/webm">https://blogs.gnome.org/mclasen/files/2017/01/filter-perf1.webm</video>

Expect to see more uses of these new capabilities in GTK+ as things progress. Fun times ahead!

Last chance for ColorHug(1) users to get upgraded

Posted by Richard Hughes on December 28, 2016 05:08 PM

For the early adopters of the original ColorHug I’ve been offering a service where I send all the newer parts out to people so they can retrofit their device to the latest design. This included an updated LiveCD, the large velcro elasticated strap and the custom cut foam pad that replaced the old foam feet. In the last two years I’ve sent out over 300 free upgrades, but this has reduced to a dribble recently as later ColorHug1’s and all ColorHug2 had all the improvements and extra bits included by default. I’m going to stop this offer soon, as I need to make things simpler so I can introduce a new thing (+? :) next year. If you do need a HugStrap and gasket still, please fill in the form before the 4th January. Thanks, and Merry Christmas to all.

On recipes, one more time

Posted by Matthias Clasen on December 26, 2016 06:36 PM

I’m still not quite done with this project. And since it is vacation time, I had some time to spend on it, leading to a release with some improvements that I’d like to present briefly.

One thing I noticed missing right away when I started to transcribe one of my mother’s recipes was a segmented ingredients list. What I mean by that is the typical cake recipe that will say “For the dough…” “For the frosting…”

So I had to add support for this before I could continue with the recipe. The result looks like this:

Another weak point that became apparent was editing the ingredients on the edit page.  Initially, the ingredients list was just a plain text field. The previous release changed this to a list view, but the editing support consisted just of a popover with plain entries to add a new row.

This turned out to be hard to get right, and I had to go back to the designers (thanks, Jakub and Elvin) to get some ideas.  I am reasonably happy with the end result. The popover now provides suggestions for both ingredients and units, while still allowing you to enter free-form text. And the same popover is now also available to edit existing ingredients:

Just in time for the Christmas release, I was reminded that we have a nice and simple solution for spell-checking in GTK+ applications now, with Sébastien Wilmet’s gspell library. So I quickly added spell-checking to the text fields in Recipes:

Lastly, not really a new feature or due to my efforts, but Recipes looks really good in a dark theme as well.

Looking back at the goals that are listed on the design page for this application,  we are almost there:

  • Find delicious food recipes to cook from all over the world
  • Assist people with dietary restrictions
  • Allow defining ingredient constraints
  • Print recipes so I can pin them on my fridge
  • Share recipes with my friends using e-mail

The one thing that is not covered yet  is sharing recipes by email. For that, we need work on the Flatpak side, to create a sharing portal that lets applications send email.

And for the first goal we really need your support – if you have been thinking about writing up one of your favorite recipes, the holiday season is the perfect opportunity to cook it again, take some pictures of the result and contribute your recipe!


radv and doom - kinda

Posted by Dave Airlie on December 23, 2016 07:26 AM
Yesterday Valve gave me a copy of DOOM for Christmas (not really for Christmas), and I got the wine bits in place from Fedora, then I spent today trying to get DOOM to render on radv.



Thanks to ParkerR on #radeon for taking the picture from his machine, I'm too lazy.

So it runs kinda, it hangs the GPU a fair bit, it misrenders some colors in some scenes, but you can see most of it. I'm not sure if I'll get back to this before next year (I'll try), but I'm pretty happy to have gotten it this far in a day, though I'm sure the next few things will be much more difficult to debug.

The branch is here:
https://github.com/airlied/mesa/commits/radv-wip-doom-wine

xf86-input-synaptics is not a Synaptics, Inc. driver

Posted by Peter Hutterer on December 19, 2016 10:47 PM

This is a common source of confusion: the legacy X.Org driver for touchpads is called xf86-input-synaptics but it is not a driver written by Synaptics, Inc. (the company).

The repository goes back to 2002 and for the first couple of years Peter Osterlund was the sole contributor. Back then it was called "synaptics" and really was a "synaptics device" driver, i.e. it handled PS/2 protocol requests to initialise Synaptics, Inc. touchpads. Evdev support was added in 2003, punting the initialisation work to the kernel instead. This was the groundwork for a generic touchpad driver. In 2008 the driver was renamed to xf86-input-synaptics and relicensed from GPL to MIT to take it under the X.Org umbrella. I've been involved with it since 2008 and the official maintainer since 2011.

For many years now, the driver has been a generic touchpad driver that handles any device that the Linux kernel can handle. In fact, most bugs attributed to the synaptics driver not finding the touchpad are caused by the kernel not initialising the touchpad correctly. The synaptics driver reads the same evdev events that are also handled by libinput and the xf86-input-evdev driver, any differences in behaviour are driver-specific and not related to the hardware. The driver handles devices from Synaptics, Inc., ALPS, Elantech, Cypress, Apple and even some Wacom touch tablets. We don't care about what touchpad it is as long as the evdev events are sane.

Synaptics, Inc.'s developers are active in kernel development to help get new touchpads up and running. Once the kernel handles them, the xorg drivers and libinput will handle them too. I can't remember any significant contribution by Synaptics, Inc. to the X.org synaptics driver, so they are simply neither to credit nor to blame for the current state of the driver. The top 10 contributors since August 2008 when the first renamed version of xf86-input-synaptics was released are:


8 Simon Thum
10 Hans de Goede
10 Magnus Kessler
13 Alexandr Shadchin
15 Christoph Brill
18 Daniel Stone
18 Henrik Rydberg
39 Gaetan Nadon
50 Chase Douglas
396 Peter Hutterer
There's a long tail of other contributors but the top ten illustrate that it wasn't Synaptics, Inc. that wrote the driver. Any complaints about Synaptics, Inc. not maintaining/writing/fixing the driver are missing the point, because this driver was never a Synaptics, Inc. driver. That's not a criticism of Synaptics, Inc. btw, that's just how things are. We should have renamed the driver to just xf86-input-touchpad back in 2008 but that ship has sailed now. And synaptics is about to be superseded by libinput anyway, so it's simply not worth the effort now.

The other reason I included the commit count in the above: I'm also the main author of libinput. So "the synaptics developers" and "the libinput developers" are effectively the same person, i.e. me. Keep that in mind when you read random comments on the interwebs, it makes it easier to identify people just talking out of their behind.

libinput touchpad pointer acceleration analysis

Posted by Peter Hutterer on December 19, 2016 09:36 PM

A long-standing criticism of libinput is its touchpad acceleration code, oscillating somewhere between "terrible", "this is bad and you should feel bad" and "I can't complain because I keep missing the bloody send button". I finally found the time and some more laptops to sit down and figure out what's going on.

I recorded touch sequences of the following movements:

  • super-slow: a very slow movement as you would do when pixel-precision is required. I recorded this by effectively slowly rolling my finger. This is an unusual but sometimes required interaction.
  • slow: a slow movement as you would do when you need to hit a target several pixels across from a short distance away, e.g. the Firefox tab close button
  • medium: a medium-speed movement though probably closer to the slow side. This would be similar to the movement when you move 5cm across the screen.
  • medium-fast: a medium-to-fast speed movement. This would be similar to the movement when you move 5cm across the screen onto a large target, e.g. when moving between icons in the file manager.
  • fast: a fast movement. This would be similar to the movement when you move between windows some distance apart.
  • flick: a flick movement. This would be similar to the movement when you move to a corner of the screen.
Note that all these are by definition subjective and somewhat dependent on the hardware. Either way, I tried to get something of a reasonable subset.

Next, I ran this through a libinput 1.5.3 augmented with printfs in the pointer acceleration code and a script to post-process that output. Unfortunately, libinput's pointer acceleration internally uses units equivalent to a 1000dpi mouse and that's not something easy to understand. Either way, the numbers themselves don't matter too much for analysis right now and I've now switched everything to mm/s anyway.

A note ahead: the analysis relies on libinput recording an evemu replay. That relies on uinput and event timestamps are subject to a little bit of drift across recordings. Some differences in the before/after of the same recording can likely be blamed on that.

The graph I'll present for each recording is relatively simple, it shows the velocity and the matching factor. The x axis is simply the events in sequence, the y axes are the factor and the velocity (note: two different scales in one graph). And it colours in the bits that see some type of acceleration. Green means "maximum factor applied", yellow means "decelerated". The purple "adaptive" means per-velocity acceleration is applied. Anything that remains white is used as-is (aside from the constant deceleration).

Interesting numbers for the factor are 0.4 and 0.8. We have a constant factor of 0.4 on touchpads, i.e. a factor of 0.4 means "no acceleration applied" beyond the constant deceleration; 0.8 is the maximum factor. The maximum factor is twice as big as the normal factor, so the pointer moves twice as fast. Anything below 0.4 means we decelerate the pointer, i.e. the pointer moves slower than the finger.

The super-slow movement shows that the factor is, aside from the beginning always below 0.4, i.e. the sequence sees deceleration applied. The takeaway here is that acceleration appears to be doing the right thing, slow motion is decelerated and while there may or may not be some tweaking to do, there is no smoking gun.


Super slow motion is decelerated.

The slow movement shows that the factor is almost always 0.4, aside from a few extremely slow events. This indicates that for the slow speed, the pointer movement maps exactly to the finger movement save for our constant deceleration. As above, there is no indicator that we're doing something seriously wrong.


Slow motion is largely used as-is with a few decelerations.

The medium movement gets interesting. If we look at the factor applied, it changes wildly with the velocity across the whole range between 0.4 and the maximum 0.8. There is a short spike at the beginning where it maxes out but the rest is accelerated on-demand, i.e. different finger speeds will produce different acceleration. This shows the crux of what a lot of users have been complaining about - what is a fairly slow motion still results in an accelerated pointer. And because the acceleration changes with the speed the pointer behaviour is unpredictable.


In medium-speed motion acceleration changes with the speed and even maxes out.

The medium-fast movement shows almost the whole movement maxing out on the maximum acceleration factor, i.e. the pointer moves at twice the speed to the finger. This is a problem because this is roughly the speed you'd use to hit a "mentally preselected" target, i.e. you know exactly where the pointer should end up and you're just intuitively moving it there. If the pointer moves twice as fast, you're going to overshoot and indeed that's what I've observed during the touchpad tap analysis user study.


Medium-fast motion easily maxes out on acceleration.

The fast movement shows basically the same thing, almost the whole sequence maxes out on the acceleration factor so the pointer will move twice as far as intuitively guessed.


Fast motion maxes out acceleration.

So does the flick movement, but in that case we want it to go as far as possible and note that the speeds between fast and flick are virtually identical here. I'm not sure if that's me just being equally fast or the touchpad not quite picking up on the short motion.


Flick motion also maxes out acceleration.

Either way, the takeaway is simple: we accelerate too soon and there's a fairly narrow window where we have adaptive acceleration, it's very easy to top out. The simplest fix to get most touchpad movements working well is to increase the current threshold on when acceleration applies. Beyond that it's a bit harder to quantify, but a good idea seems to be to stretch out the acceleration function so that the factor changes at a slower rate as the velocity increases. And up the acceleration factor so we don't top out and we keep going as the finger goes faster. This would be the intuitive expectation since it resembles physics (more or less).

There's a set of patches on the list now that does exactly that. So let's see what the result of this is. Note ahead: I also switched everything to mm/s, which causes some numbers to shift slightly.

The super-slow motion is largely unchanged though the velocity scale changes quite a bit. Part of that is that the new code has a different unit which, on my T440s, isn't exactly 1000dpi. So the numbers shift and the result of that is that deceleration applies a bit more often than before.


Super-slow motion largely remains the same.

The slow motions are largely unchanged but more deceleration is now applied. Tbh, I'm not sure if that's an artefact of the evemu replay, the new accel code or the result of the not-quite-1000dpi of my touchpad.


Slow motion largely remains the same.

The medium motion is the first interesting one because that's where we had the first observable issues. In the new code, the motion is almost entirely unaccelerated, i.e. the pointer will move as the finger does. Success!


Medium-speed motion now matches the finger speed.

The same is true of the medium-fast motion. In the recording the first few events were past the new thresholds so some acceleration is applied, the rest of the motion matches finger motion.


Medium-fast motion now matches the finger speed except at the beginning where some acceleration was applied.

The fast and flick motion are largely identical in having the acceleration factor applied to almost the whole motion but the big change is that the factor now goes up to 2.3 for the fast motion and 2.5 for the flick motion, i.e. both movements would go a lot faster than before. In the graphics below you still see the blue area marked as "previously max acceleration factor" though it does not actually max out in either recording now.


Fast motion increases acceleration as speed increases.

Flick motion increases acceleration as speed increases.

In summary, what this means is that the new code accelerates later but when it does accelerate, it goes faster. I tested this on a T440s, a T450p and an Asus VivoBook with an Elantech touchpad (which is almost unusable with current libinput). They don't quite feel the same yet and I'm not happy with the actual acceleration, but for 90% of 'normal' movements the touchpad now behaves very well. So at least we go from "this is terrible" to "this needs tweaking". I'll go check if there's any champagne left.

Another look at GNOME recipes

Posted by Matthias Clasen on December 19, 2016 11:53 AM

It has been a few weeks since I’ve first talked about this new app that I’ve started to work on, GNOME recipes.

Since then, a few things have changed. We have a new details page, which makes better use of the available space with a 2 column layout.

Among the improved details here is a more elaborate ingredients list. Also new is the image viewer, which lets you cycle through the available photos for the recipe without getting in the way too much.

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1703-3" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2016/12/image-switcher.webm?_=3" type="video/webm">https://blogs.gnome.org/mclasen/files/2016/12/image-switcher.webm</video>

We also use a 2 column layout when editing a recipe now.

Most importantly, as you can see in these screenshots, we have received some contributed recipes. Thanks to everybody who has sent us one! If you haven’t yet, please do. You may win a prize, if we can work out the logistics :-)

If you want to give recipes a try, the sources are here: https://git.gnome.org/browse/recipes/ and here is a recent Flatpak.

Update: With the just-released flatpak 0.8.0, installing the Flatpak from the .flatpakref file I linked above is as simple as this:

$ flatpak install --from https://raw.githubusercontent.com/alexlarsson/test-releases/master/gnome-recipes.flatpakref
read flatpak info from GTK_USE_PORTAL: network: 1 portal: 0
This application depends on runtimes from:
 http://sdk.gnome.org/repo/
Configure this as new remote 'gnome-1' [y/n]: y
Installing: org.gnome.Recipes/x86_64/master
Updating: org.gnome.Platform/x86_64/3.22 from gnome
No updates.
Updating: org.gnome.Platform.Locale/x86_64/3.22 from gnome
No updates.
Installing: org.gnome.Recipes/x86_64/master from org.gnome.Recipes-origin

1 delta parts, 5 loose fetched; 20053 KiB transferred in 8 seconds 
Installing: org.gnome.Recipes.Locale/x86_64/master from org.gnome.Recipes-origin

5 metadata, 1 content objects fetched; 1 KiB transferred in 0 seconds

Making your own retro keyboard

Posted by Bastien Nocera on December 15, 2016 04:48 PM
We're about a week before Christmas, and I'm going to explain how I created a retro keyboard as a gift to my father, who introduced me to computers when he brought back a Thomson TO7 home, all the way back in 1985.

The original idea was to use a Thomson computer to fit in a smaller computer, such as a CHIP or Raspberry Pi, but the software update support would have been difficult, the use limited to the builtin programs, and it would have required a separate screen. So I restricted myself to only making a keyboard. It was a big enough task, as we'll see.

How do keyboards work?

Loads of switches, that's how. I'll point you to Michał Trybus' blog post « How to make a keyboard - the matrix » for details on how this works. You'll just need to remember that most of the keyboards present in those older computers have no support for xKRO, and that the micro-controller we'll be using already has the necessary pull-up resistors builtin.

The keyboard hardware

I chose the smallest Thomson computer available for my project, the MO5. I could have used a stand-alone keyboard, but would have lost all the charm of it (it just looks like a PC keyboard), some other computers have much bigger form factors, to include cartridge, cassette or floppy disk readers.

The DCMoto emulator's website includes tons of documentation, including technical documentation explaining the inner workings of each one of the chipsets on the mainboard. In one of those manuals, you'll find this page:



Whoot! The keyboard matrix in details, no need for us to discover it with a multimeter.

That needs a wash in soapy water

After opening up the computer, and eventually giving the internals, and the keyboard especially if it has mechanical keys, a good clean, we'll need to see how the keyboard is connected.

Finicky metal covered plastic

Those keyboards usually are membrane keyboards, with pressure pads, so we'll need to either find replacement connectors at our local electronics store, or desolder the ones on the motherboard. I chose the latter option.

Desoldered connectors

After matching the physical connectors to the rows and columns in the matrix, using a multimeter and a few key presses, we now know which connector pin corresponds to which connector on the matrix. We can start soldering.

The micro-controller

The micro-controller in my case is a Teensy 2.0, an Atmel AVR-based micro-controller with a very useful firmware that makes it very very difficult to brick. You can either press the little button on the board itself to upload new firmware, or wire it to an external momentary switch. The funny thing is that the Atmega32U4 is 16 times faster than the original CPU (yeah, we're getting old).

I chose to wire it to the "Initial. Prog" ("Reset") button on the keyboard, so as to make it easy to upload new firmware. To do this, I needed to cut a few traces coming out of the physical switch on the board, to avoid interferences from components on the board, using a tile cutter. This is completely optional, and if you're only going to use firmware that you already know at least somewhat works, you can set a key combo to go into firmware upload mode in the firmware. We'll get back to that later.

As for connecting and soldering to the pins, we can use any I/O pins we want, except D6, which is connected to the board's LED. Note that if you deviate from the pinout used in your firmware, you'll need to make changes to the firmware accordingly. We'll come back to that again in a minute.

The soldering

Colorful tinning

I wanted to keep the external ports full, so it didn't look like there were holes in the case, but there was enough headroom inside the case to fit the original board, the teensy and pins on the board. That makes it easy to rewire in case of error. You could also dremel (yes, used as a verb) a hole in the board.

As always, make sure early that things would fit, especially the cables!

The unnecessary pollution

The firmware

Fairly early on during my research, I found the TMK keyboard firmware, as well as a very well written forum post with detailed explanations on how to modify an existing firmware for your own uses.

This is what I used to modify the firmware for the gh60 keyboard for my own use. You can see here a step-by-step example, implementing the modifications in the same order as the forum post.

Once you've followed the steps, you'll need to compile the firmware. Fedora ships with the necessary packages, so it's a simple:


sudo dnf install -y avr-libc avr-binutils avr-gcc

I also compiled and installed in my $PATH the teensy_loader_cli firmware uploader, and fixed up the udev rules. And after a "make teensy" and a button press...

It worked first time! This is a good time to verify that all the keys work, and you don't see doubled-up letters because of short circuits in your setup. I had 2 wires touching, and one column that just didn't work.
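In case your firmware tree doesn't have a working "make teensy" target, flashing with teensy_loader_cli directly looks like this (the .hex file name is whatever your build produced; -w makes it wait for the bootloader, i.e. for the button press):

teensy_loader_cli --mcu=atmega32u4 -w gh60_lufa.hex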

I also prepared a stand-alone repository, with a firmware that uses the tmk_core from the tmk firmware, instead of modifying an existing one.

Some advice

This isn't the first time I've hacked on hardware, but I'll repeat some old adages and advice, because I rarely heed those warnings, and I regret it...
  • Don't forget the size, length and non-flexibility of cables in your design
  • Plan ahead when you're going to cut or otherwise modify hardware, because you might regret it later
  • Use breadboard cables and pins to connect things, if you have the room
  • Don't hotglue until you've tested and retested and are sure you're not going to make more modifications
That last one explains the slightly funny cabling of my keyboard.

Finishing touches

All Sugru'ed up

To finish things off nicely, I used Sugru to stick the USB cable, out of the machine, in place. And as earlier, it will avoid having an opening onto the internals.

There are a couple more things that I'll need to finish up before delivery. First, the keymap I have chosen in the firmware only works when a US keymap is selected. I'll need to make a keymap for Linux, possibly hard-coding it. I will also need to create a Windows keymap for my father to use (yep, genealogy software on Linux isn't quite up-to-par).

Prototype and final hardware

All this will happen in the aforementioned repository. And if you ever make your own keyboard, I'm happy to merge in changes to this repository with documentation for your Speccy, C64, or Amstrad CPC hacks.

(If somebody wants to buy me a Sega keyboard, I'll gladly work on a non-destructive adapter. Get in touch :)

Logitech Unifying Hardware Required

Posted by Richard Hughes on December 12, 2016 02:08 PM

Does anyone have a spare Logitech Unifying dongle I can borrow? I specifically need the newer Texas Instruments version, rather than the older Nordic version.

You can tell if it’s the version I need by looking at the etching on the metal USB plug, if it says U0008 above the CE marking then it’s the one I’m looking for. I’m based in London, UK if that matters. Thanks!

libinput touchpad tap analysis

Posted by Peter Hutterer on December 12, 2016 05:52 AM

A short while ago, I asked a bunch of people for long-term touchpad usage data (specifically: evemu recordings). I currently have 25 sets of data, the shortest of which has 9422 events, the longest of which has 987746 events. I asked that evemu-record be run in the background while people used their touchpad normally. Thus the data is quite messy, it contains taps, two-finger scrolling, edge scrolling, palm touches, etc. It's also raw data from the touchpad, not processed by libinput. Some care has to be taken with analysis, especially since it is weighted towards long recordings. In other words, the user with 987k events has a higher influence than the user with 9k events. So the data is useful for looking for patterns that can be independently verified with other data later. But it's also useful for disproving hypotheses, i.e. "we cannot do $foo because some users' events show $bla".
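For reference, collecting such a recording is a single command from the evemu package. The device node is an example; running evemu-record without arguments lists the available devices:

sudo evemu-record /dev/input/event4 > touchpad.recording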

One of the things I've looked into was tapping. In libinput, a tap has two properties: a time threshold and a movement threshold. If the finger is held down longer than 180ms or it moves more than 3mm it is not a tap. These numbers are either taken from synaptics or just guesswork (both, probably). The need for a time-based threshold is obvious: we don't know whether the user is tapping until we see the finger up event. Only if that doesn't happen within a given time we know the user simply put the finger down. The movement threshold is required because small movements occur while tapping, caused by the finger really moving (e.g. when tapping shortly before/after a pointer motion) or by the finger center moving (as the finger flattens under pressure, the center may move a bit). Either way, these thresholds delay real pointer movement, making the pointer less reactive than it could be. So it's in our interest to have these thresholds low to get reactive pointer movement but as high as necessary to have reliable tap detection.

General data analysis

Let's look at the (messy) data. I wrote a script to calculate the time delta and movement distance for every single-touch sequence, i.e. anything with two or more fingers down was ignored. The script used a range of 250ms and 6mm of movement, discarding any sequences outside those thresholds. I also ignored anything in the left-most or right-most 10% because it's likely that anything that looks like a tap is a palm interaction [1]. I ran the script against those files where the users reported that they use tapping (10 users) which gave me 6800 tap sequences. Note that the ranges are purposely larger than libinput's to detect if there was a significant amount of attempted taps that exceed the current thresholds and would be misdetected as non-taps.

Let's have a look at the results. First, a simple picture that merely prints the start location of each tap, normalised to the width/height of the touchpad. As you can see, taps are primarily clustered around the center but can really occur anywhere on the touchpad. This means any attempt at detecting taps by location would be unreliable.


Normalized distribution of touch sequence start points (relative to touchpad width/height)

You can easily see the empty areas in the left-most and right-most 10%, that is an artefact of the filtering.

The analysis of time is more interesting: There are spikes around the 50ms mark with quite a few outliers going towards 100ms forming what looks like a narrow normal distribution curve. The data points are overlaid with markers for the mean [2], the 50 percentile, the 90 percentile and the 95 percentile [3]. And the data says: 95% of events fall below 116ms. That's something to go on.


Times between touch down and touch up for a possible tap event.
Note that we're using a 250ms timeout here and thus even look at touches that would not have been detected as a tap by libinput. If we reduce to the 180ms libinput uses, we get a 95th percentile of 98ms, i.e. "of all taps currently detected as taps, 95% are 98ms or shorter".

The analysis of distance is similar: Most of the tap sequences have little to no movement, with 50% falling below 0.2mm of movement. Again the data points are overlaid with markers for the mean, the 50 percentile, the 90 percentile and the 95 percentile. And the data says: 95% of events fall below 1.8mm. Again, something to go on.


Movement between the touch down and the touch up event for a possible tap (10 == 1mm)
Note that we're using a 6mm threshold here and thus even look at touches that would not have been detected as a tap by libinput. If we reduce to the 3mm libinput uses, we get a 95th percentile of 1.2mm, i.e. "of all taps currently detected as taps, 95% move 1.2mm or less".

Now let's combine the two. Below is a graph mapping times and distances from touch sequences. In general, the longer the time, the more movement we get, but most of the data is in the bottom left. Since doing percentiles is tricky on 2 axes, I mapped the respective axes individually. The biggest rectangle is the 95th percentile for time and distance, the number below shows how many data points actually fall into this rectangle. Looks promising, we still have a vast majority of touchpoints falling into the respective 95 percentiles though the numbers are slightly lower than the individual axes suggest.


Time to distance map for all possible taps
Again, this is for the 250ms by 6mm movement. About 3.3% of the events fall into the area between 180ms/3mm and 250ms/6mm. There is a chance that some of those touches would have been short, small movements; we just can't know from the data.

So based on the above, we learned one thing: it would not be reliable to detect taps based on their location. But we also suspect two things now: we can reduce the timeout and movement threshold without sacrificing a lot of reliability.

Verification of findings

Based on the above, our hypothesis is: we can reduce the timeout to 116ms and the threshold to 1.8mm while still having a 93% detection reliability. This is the most conservative reading, based on the extended thresholds.

To verify this, we needed to collect tap data from multiple users in a standardised and reproducible way. We wrote a basic website that displays 5 circles (see the screenshot below) on a canvas and asked a bunch of co-workers in two different offices [4] to tap them. While doing so, evemu-record was running in the background to capture the touchpad interactions. The touchpad was the one from a Lenovo T450 in both cases.


Screenshot of the <canvas> that users were asked to perform the taps on.
Some users ended up clicking instead of tapping and we had to discard those recordings. The total number of useful recordings was 15 from the Paris office and 27 from the Brisbane office. In total we had 245 taps (some users missed the circle on the first go, others double-tapped).

We asked each user three questions: "do you know what tapping/tap-to-click is?", "do you have tapping enabled" and "do you use it?". The answers are listed below:

  • Do you know what tapping is? 33 yes, 12 no
  • Do you have tapping enabled? 19 yes, 26 no
  • Do you use tapping? 10 yes, 35 no

I admit I kinda screwed up the data collection here because it includes those users whose recordings we had to discard. And the questions could've been better. So I'm not going to go into too much detail. The one useful takeaway though: the majority of users had tapping disabled and/or don't use it, which should make any potential learning effect disappear. [5]

Ok, let's look at the data sets, same scripts as above:


Times between touch down and touch up for tap events

Movement between the touch down and the touch up events of a tap (10 == 1mm)
95th percentile for time is 87ms. 95th percentile for distance is 1.09mm. Both are well within the numbers we saw above. The combined diagram shows that 87% of events fall within the 87ms/1.09mm box.

Time to distance map for all taps
The few outliers here are close enough to the edge that expanding the box to 100ms/1.3mm covers more than 95% of events. So it appears that our hypothesis is correct: reducing the timeout to 116ms and the threshold to 1.8mm retains a 95% detection reliability. Furthermore, using the clean data it looks like we can use a lower threshold than previously assumed and still get a good detection ratio. Specifically, data collected in a controlled environment across 42 different users of varying familiarity with touchpad tapping shows that 100ms and 1.3mm gets us a 95% detection rate of taps.

What does this mean for users?

Based on the above, the libinput thresholds will be reduced to 100ms and 1.3mm. Let's see how we go with this; we can increase them in the future if misdetection turns out to be higher than expected. Patches will be on the wayland-devel list shortly.

For users that don't have tapping enabled, this will not change anything. All users who have tapping enabled will see a more responsive cursor on small movements, as the time and distance thresholds have been significantly reduced. Some users may see a drop in tap detection rate. This is hopefully a subconscious enough effect that those users learn to tap faster or with less movement. If not, we will have to look at it separately and see how to deal with that.

If you find any issues with the analysis above, please let me know.

[1] These scripts analyse raw touchpad data, they don't benefit from libinput's palm detection
[2] Note: the mean is not the same as the median; the median (the 50th percentile) is less affected by strong outliers. Look it up, it's worth knowing
[3] The Xth percentile means X% of events fall at or below this value
[4] The Brisbane and Paris offices. No separate analysis was done, so it is unknown whether close proximity to baguettes has an effect on tap behaviour
[5] i.e. the effect of users learning how to use a system that doesn't work well out-of-the-box. This may result in e.g. quicker taps from those that are familiar with the system vs. those that aren't.

libinput beginner project - disabling touchpads on lid close

Posted by Peter Hutterer on December 07, 2016 10:49 PM

Update: Dec 08 2016: someone's working on this project. Sorry about the late update, but feel free to pick other projects you want to work on.

Interested in hacking on some low-level stuff and implementing a feature that's useful to a lot of laptop owners out there? We have this feature on libinput's todo list, but I'm constantly losing my fight against that ever-growing list. So if you already know C and you're interested in playing around with some low-level bits of software, this may be the project for you.

Specifically: within libinput, we want to disable certain devices based on the lid state. In the first instance this means that when the lid switch is toggled to closed, the touchpad and trackpoint get silently disabled and no longer send events. [1] Since it's based on a switch state, this also means that we'll now have to listen to switch events and expose those devices to libinput users.

The things required to get all this working are:

  • Designing a switch interface plus the boilerplate code required (I've done most of this bit already)
  • Extending the current evdev backend to handle devices with EV_SW and exposing their events
  • Hooking up the switch devices to internal touchpads/trackpoints to disable them ad-hoc
  • Handling those devices where the lid switch is broken in the hardware (more details on this when we get to that point)

You get to dabble with libinput and a bit of udev and the kernel. Possibly Xorg stuff, but that's unlikely at this point. This project is well suited for someone with a few spare weekends ahead. It's great for someone who hasn't worked with libinput before, but it's not a project to learn C; you'd better know that ahead of time. I'd provide the mentoring of course (I'm in UTC+10, so expect IRC/email). If you're interested let me know. Riches and fame may happen but are not guaranteed.
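
If you want a feel for the events involved before diving into the libinput internals, here's a rough Python sketch (not part of the project itself) using the python-evdev bindings to watch a lid switch. The device node path is an assumption and will differ per machine:

from evdev import InputDevice, ecodes

# The event node is machine-specific; check /proc/bus/input/devices
# for the device named "Lid Switch" to find the right one.
dev = InputDevice('/dev/input/event0')

for event in dev.read_loop():
    if event.type == ecodes.EV_SW and event.code == ecodes.SW_LID:
        state = 'closed' if event.value else 'open'
        print('lid is now %s' % state)

The actual libinput work is of course C, and the interesting part is wiring such switch events up to the touchpad/trackpoint handling, not just reading them.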

[1] A number of laptops have a hw issue where either device may send random events when the lid is closed

xinput is not a configuration UI

Posted by Peter Hutterer on December 07, 2016 02:58 AM

xinput is a tool to query and modify X input device properties (amongst other things). Every so often someone complains about its non-intuitive interface, but this is where users are mistaken: xinput is not a configuration UI. It is a DUI - a developer user interface [1] - intended to test things without having to write a custom (more user-friendly) tool for each new property. It is nothing but a tool to access what is effectively a key-value store. To use it you need to know not only the key name(s) but also the allowed formats, some of which are only documented in header files. It is intended to be run under user supervision; anything it does won't survive device hotplugging. Relying on xinput for configuration is the same as relying on 'echo' to toggle parameters in /sys for kernel configuration. It kinda possibly maybe works most of the time, but it's not pretty. And it's not intended to be, so please don't complain to me about the arcane user interface.
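
For completeness, this is roughly what driving that key-value store looks like; the device name and property are examples from a libinput-driven setup and will differ on your machine:

xinput list-props "SynPS/2 Synaptics TouchPad"
xinput set-prop "SynPS/2 Synaptics TouchPad" "libinput Tapping Enabled" 1

And as said above, such a change lives in the running server only; it is gone after the device is unplugged or the session restarts.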

[1] don't do it, things will be a bit confusing, you may not do the right thing, you can easily do damage, etc. A lot of similarities... ;)

Avoiding CVE-2016-8655 with systemd

Posted by Lennart Poettering on December 06, 2016 11:00 PM

Avoiding CVE-2016-8655 with systemd

Just a quick note: on recent versions of systemd it is relatively easy to block the vulnerability described in CVE-2016-8655 for individual services.

Since systemd release v211 there's an option RestrictAddressFamilies= for service unit files which takes away the right to create sockets of specific address families for processes of the service. In your unit file, add RestrictAddressFamilies=~AF_PACKET to the [Service] section to make AF_PACKET unavailable to it (i.e. a blacklist), which is sufficient to close the attack path. Safer, of course, is a whitelist of address families, which you can define by dropping the ~ character from the assignment. Here's a trivial example:

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
…

This restricts access to socket families, so that the service may access only AF_INET, AF_INET6 or AF_UNIX sockets, which is usually the right, minimal set for most system daemons. (AF_INET is the low-level name for the IPv4 address family, AF_INET6 for the IPv6 address family, and AF_UNIX for local UNIX socket IPC).

Starting with systemd v232 we added RestrictAddressFamilies= to all of systemd's own unit files, always with the appropriate minimal set of socket address families.

With the upcoming v233 release we'll provide a second method for blocking this vulnerability. Using RestrictNamespaces= it is possible to limit which types of Linux namespaces a service may get access to. Use RestrictNamespaces=yes to prohibit access to any kind of namespace, or set RestrictNamespaces=net ipc (or similar) to restrict access to a specific set (in this case: network and IPC namespaces). Given that user namespaces have been a major source of security vulnerabilities in the past months it's probably a good idea to block namespaces on all services which don't need them (which is probably most of them).
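
Following the pattern of the example above, a minimal sketch of what this looks like in a unit file (using the blanket variant; requires v233):

…
[Service]
ExecStart=/usr/bin/mydaemon
RestrictNamespaces=yes
…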

Of course, ideally, distributions such as Fedora, as well as upstream developers would turn on the various sandboxing settings systemd provides like these ones by default, since they know best which kind of address families or namespaces a specific daemon needs.

New udev property: XKB_FIXED_LAYOUT for keyboards that must not change layouts

Posted by Peter Hutterer on December 06, 2016 02:44 AM

This post mostly affects developers of desktop environments/Wayland compositors. A systemd pull request was merged to add two new properties to some keyboards: XKB_FIXED_LAYOUT and XKB_FIXED_VARIANT. If set, the device must not be switched to a user-configured layout but rather the one set in the properties. This is required to make fake keyboard devices work correctly out-of-the-box. For example, Yubikeys emulate a keyboard and send the configured passwords as key codes matching a US keyboard layout. If a different layout is applied, then the password may get mangled by the client.
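
For illustration, a hwdb entry for such a device looks roughly like the sketch below. I'm making up the match string here; see the merged pull request for the actual entries:

evdev:input:b0003v1050p*
 XKB_FIXED_LAYOUT=us
 XKB_FIXED_VARIANT=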

Since udev and libinput sit below the keyboard layout, there isn't much we can do in this layer. This is a job for those parts that handle keyboard layouts and layout configuration, i.e. GNOME, KDE, etc. I've filed a bug for GNOME here, please do so for your desktop environment.

If you have a device that falls into this category, please submit a systemd patch/file a bug and cc me on it (@whot).

The future of xinput, xmodmap, setxkbmap, xsetwacom and other tools under Wayland

Posted by Peter Hutterer on December 05, 2016 08:42 PM

This post applies to most tools that interface with the X server and change settings in the server, including xinput, xmodmap, setxkbmap, xkbcomp, xrandr, xsetwacom and other tools that start with x. The one word to sum up the future for these tools under Wayland is: "non-functional".

An X window manager is little more than an innocent bystander when it comes to anything input-related. Short of handling global shortcuts and intercepting some mouse button presses (to bring the clicked window to the front) there is very little a window manager can do. It's a separate process from the X server, does not receive most input events, and cannot affect what events are being generated. When it comes to input device configuration, any X client can tell the server to change it - that's why general debugging tools like xinput work.

A Wayland compositor is much more: it is a window manager and the display server merged into one process. This gives the compositor a lot more power and responsibility. It handles all input events as they come out of libinput and also manages the devices' configuration. Oh, and instead of the X protocol it speaks the Wayland protocol.

The difference becomes more obvious when you consider what happens when you toggle a setting in the GNOME control center. In both Wayland and X, the control center toggles a gsettings key and waits for some other process to pick it up. In both cases, mutter gets notified about the change but what happens then is quite different. In GNOME(X), mutter tells the X server to change a device property, the server passes that on to the xf86-input-libinput driver and from there the setting is toggled in libinput. In GNOME(Wayland), mutter toggles the setting directly in libinput.

Since there is no X server in the stack, the various tools can't talk to it. So to get the tools to work they would have to talk to the compositor instead. But they only know how to speak X protocol, and no Wayland protocol extension exists for input device configuration. Such a Wayland protocol extension would most likely have to be a private one since the various compositors expose device configuration in different ways. Whether this extension will be written and added to compositors is uncertain, I'm not aware of any plans or even intentions to do so (it's a very messy problem). But either way, until it exists, the tools will merely shout into the void, without even an echo to keep them entertained. Non-functional is thus a good summary.

libinput now requires axis resolutions for graphics tablets

Posted by Peter Hutterer on December 05, 2016 01:52 AM

I pushed the patch to require resolution today, expect this to hit the general public with libinput 1.6. If your graphics tablet does not provide axis resolution we will need to add a hwdb entry. Please file a bug in systemd and CC me on it (@whot).

How do you know if your device has resolution? Run sudo evemu-describe against the device node and look for the ABS_X/ABS_Y entries:


# Event code 0 (ABS_X)
# Value 2550
# Min 0
# Max 3968
# Fuzz 0
# Flat 0
# Resolution 13
# Event code 1 (ABS_Y)
# Value 1323
# Min 0
# Max 2240
# Fuzz 0
# Flat 0
# Resolution 13
If the Resolution value is 0 you'll need a hwdb entry or your tablet will stop working in libinput 1.6. You can file the bug now and we can get it fixed; that way it'll be in place once 1.6 comes out.
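
For reference, such a hwdb entry looks roughly like the sketch below (the match string is invented for illustration; the value format is min:max:resolution with fields you don't want to override left empty):

evdev:input:b0003v056Ap0302*
 EVDEV_ABS_00=::13
 EVDEV_ABS_01=::13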

Please don't use pastebins in bugs

Posted by Peter Hutterer on December 05, 2016 01:51 AM

Pastebins are useful for dumping large data sets whenever the medium of conversation doesn't make this easy or useful. IRC is one example, or audio/video conferencing. But pastebins only work when the other side looks at the pastebin before it expires, and the default expiry date for a pastebin may only be a few days.

This makes them effectively useless for bugs where it may take a while for the bug to be triaged and the assignee to respond. It may take even longer to figure out the source of the bug, and if there's a regression it can take months to figure it out. Once the content disappears we have to re-request the data from the reporter. And there is a vicious dependency too: usually, logs are more important for difficult bugs. Difficult bugs take longer to fix. Thus, with pastebins, the more difficult the bug, the more likely the logs become unavailable.

All useful bug tracking systems have an attachment facility. Use that instead, it's archived with the bug and if a year later we notice a regression, we still have access to the data.

If you got here because I pasted the link to this blog post, please do the following: download the pastebin content as raw text, then add it as attachment to the bug (don't paste it as comment). Once that's done, we can have a look at your bug again.


GNOME loves to cook

Posted by Matthias Clasen on December 02, 2016 02:53 PM

GNOME needs a recipe app, since we all love to cook. This is not a new idea: looking all the way back to 2007, the idea of a GNOME cook book already existed. For one reason or another, we never quite got there. But the idea has stuck around.


With the upcoming 20th birthday of GNOME next year, some of us thought that we should make another attempt at this application, maybe as a birthday gift to all of GNOME.

Shortly after GUADEC, I got my hands on some existing designs and started to toy around with implementing them over a few weekends and evenings. The screenshots in this post show how far I got since then.


Why did I start to write this app from scratch (instead of e.g. trying to give a face-lift to the venerable gourmet)?

Beyond the obvious reason that I love to code as much as I love to cook, I wanted to give GNOME Builder a more serious test by starting an application from scratch. And I find it very useful to take a look at GTK+ from the application developer side, every now and then.  In both of these regards, the endeavor was already successful and has yielded improvements to both GNOME Builder and GTK+.  For the cooking part, you can judge that for yourself.


The main reason for writing this post is that we are at a point now where we need contributions to make progress. The idea is that we will include a decent set of recipes from GNOME contributors all over the world with the application.

Therefore, we need your recipes, ideally with good photos.

All the photos and text you see in the screenshots here are just test data that I’ve used during development, and need to be replaced with actual content.

So, how can you contribute your favorite recipes?

Add your recipe to the app, and when you are satisfied with how it looks, you can use the Export button on the details page to create an archive with the recipe and related information, such as images. The archive also includes information about the author of the recipe (i.e. yourself), so make sure to provide some information for that in the Preferences dialog.

We just created a bugzilla project for recipes, so you can just attach the archive to a bug:

https://bugzilla.gnome.org/page.cgi?id=browse.html&product=recipes

Please make it clear in the bug that all the included images are your own and that we are allowed to ship them with the app.


Beyond recipes, there are plenty of other ways to contribute, of course. While I’ve tried hard to get many of the features in the initial design implemented, there is a lot more that can be done: for example, unit conversion, or a way to easily share recipes, or to print a shopping list.


Where do you get it?

The project started out on github, but it is also available on git.gnome.org now. The design materials are collected on the GNOME wiki.

If you just want to try it out without building it yourself, you can use Flatpak:

flatpak install --from https://alexlarsson.github.io/test-releases/gnome-recipes.flatpakref

Of course, I did not get this far on my own. Thanks are due to several people. First and foremost, Emel Elvin Yıldız, for the designs and feedback on the implementation, Jakub Steiner for the icon and visuals, and Christian Hergert for keeping this idea alive and for making GNOME builder work great.

Ubuntu still isn't free software

Posted by Matthew Garrett on December 02, 2016 09:37 AM
Mark Shuttleworth just blogged about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.

The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu.

This remains incompatible with the principles of free software. The freedom to take someone else's work and redistribute it is a vital part of the four freedoms. It's legitimate for Canonical to insist that you not pass it off as their work when doing so, but their IP policy continues to insist that you remove all references to Canonical's trademarks even if their use would not infringe trademark law.

If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.

Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.

[1] And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise


Impress LibreOffice OpenGL Slide Transitions under Wayland via GTK3

Posted by Caolán McNamara on December 01, 2016 04:58 PM
Impress LibreOffice OpenGL Slide Transitions under Wayland via GTK3 (GtkGlArea).

So I've implemented enough to get this working on my machine. I've demoed "static", "glitter" and "honeycomb" above, from my -O0 debugging build. I'll work on merging this to master now; patches are in our gerrit instance. Porting from glew to epoxy is a necessary step; I know it builds on Windows and Mac, but that's utterly untested.

Linux communities, we need your help!

Posted by Richard Hughes on November 28, 2016 11:23 AM

There are a lot of Linux communities all over the globe filled with really nice people who just want to help others. Typically these people either can't code or don't feel comfortable doing so, and I'd love to harness some of that potential by adding a huge number of new application reviews to the ODRS. At the moment we have about 1100 reviews, mostly covering the more popular applications, and also mostly written in English.

What I would love is for a few groups of people to come together for their next LUG/outreach/InstallFest and sit down together somewhere cozy and write a few reviews. Bonus points if you use a less-well-known application, and even more points if you can write in a language other than English. Submitting a review is easy; just open up GNOME Software, find the application, and click ‘Write a Review‘ at the bottom of the page.

Application reviews help new users decide what to install, and the star ratings you give mean we can return useful search results full of great applications. Please write an email, ask about helping the ODRS, and perhaps you can help a lot of new users next time you meet with your Linuxy friends.

Thanks!

Last batch of ColorHugALS

Posted by Richard Hughes on November 21, 2016 11:43 AM

I’ve got 9 more ColorHugALS devices in stock; once they are sold there will be no more for sale. With all the supplier costs recently going up, my “sell at cost price” has turned into “make a small loss on each one”, which isn’t sustainable. It’s all OpenHardware, both the hardware design and the firmware itself, so if someone wanted to start building them for sale they would be doing it with my blessing. Of course, I’m happy to continue supporting the existing sold devices into the distant future.


The original goal has in part been achieved: the kernel and userspace support for the new SensorHID protocol works great, and ambient light functionality works out of the box for more people on more hardware. I’m slightly disappointed more people didn’t get involved in making the ambient lighting algorithms smarter, but I guess it’s quite a niche area of development.

Plus, in the Apple product development sense, killing off one device lets me start selling something else OpenHardware in the future. :)

Fedora - retiring xorg-x11-drv-synaptics

Posted by Peter Hutterer on November 20, 2016 03:57 AM

The Fedora Change to retire the synaptics driver was approved by FESCO. This will apply to Fedora 26 and is part of a cleanup to, ironically, make the synaptics driver easier to install.

Since Fedora 22, xorg-x11-drv-libinput is the preferred input driver. For historical reasons, almost all users have the xorg-x11-drv-synaptics package installed. But to actually use the synaptics driver over xorg-x11-drv-libinput requires a manually dropped xorg.conf.d snippet. And that's just not ideal. Unfortunately, in DNF/RPM we cannot just say "replace the xorg-x11-drv-synaptics package with xorg-x11-drv-libinput on update but still allow users to install xorg-x11-drv-synaptics after that".

So the path taken is a package rename. Starting with Fedora 26, xorg-x11-drv-libinput's RPM will Provide/Obsolete [1] xorg-x11-drv-synaptics and thus remove the old package on update. Users that need the synaptics driver then need to install xorg-x11-drv-synaptics-legacy. This driver will then install itself correctly without extra user intervention and will take precedence over the libinput driver. Removing xorg-x11-drv-synaptics-legacy will remove the driver assignment and thus fall back to libinput for touchpads. So aside from the name change, everything else works smoother now. Both packages are now updated in Rawhide and should be available from your local mirror soon.
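
In spec file terms this is the usual rename dance, roughly like the sketch below (version numbers invented for illustration):

# in xorg-x11-drv-libinput.spec
Provides:  xorg-x11-drv-synaptics = 1.9.0-1
Obsoletes: xorg-x11-drv-synaptics < 1.9.0-1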

What does this mean for you as a user? If you are a synaptics user, after an update/install, you need to now manually install xorg-x11-drv-synaptics-legacy. You can remove any xorg.conf.d snippets assigning the synaptics driver unless they also include other custom configuration.

See the Fedora Change page for details. Note that this is a Fedora-specific change only, the upstream change for this is already in place.

[1] "Provide" in RPM-speak means the package provides functionality otherwise provided by some other package even though it may not necessarily provide the code from that package. "Obsolete" means that installing this package replaces the obsoleted package.

Help, my app icon is missing!

Posted by Matthias Clasen on November 15, 2016 02:55 PM

I was in this situation recently: An application icon would not show up in GNOME shell, even after I double- and triple-checked that I did all the right things:

  • Installed the desktop file with the right name in /usr/share/applications
  • Installed the icon with the right name in the hicolor icon theme
  • Made sure that the Icon key in the desktop file has the right name
  • Restarted GNOME shell (this is necessary due to a bug where GNOME shell will not reliably pick up desktop file changes)

Still, I just get the generic executable icon:

GNOME shell to the rescue

I asked GNOME shell maintainer Florian Müllner for help. He said: GNOME shell is probably picking up a stale desktop file from somewhere.

But from where ? I checked all the locations listed in XDG_DATA_DIRS to no avail.  At this point, I was getting desperate,  so I went back to Florian. He said: Just use looking glass to find out! If you don’t remember, looking glass is a pretty handy debugging console that is built right into GNOME shell. You open it by typing the command

lg

into the Alt-F2 run dialog.

This brings up the looking glass. Here is how it helped me solve my problem:

I switched to the window tab, then clicked on the application in question, and hit the ‘Insert’ button. That binds the object to a variable named r(x) (for some number x) in the Javascript evaluator.

Going there, I then typed the command

r(3).app_info.get_filename()

And that showed the problematic desktop file:

I removed that file and restarted the shell once more, and now everything is working as it should:

Thanks, looking glass!

Lyon GNOME Bug day #1

Posted by Bastien Nocera on November 15, 2016 09:48 AM
Last Friday, both a GNOME bug day and a bank holiday, a few of us got together to squash some bugs, and discuss GNOME and GNOME technologies.

Guillaume, a newcomer in our group, tested the captive portal support for NetworkManager and GNOME in Gentoo, and added instructions on how to enable it to their Wiki. He also tested a gateway-related configuration problem, the patch for which I merged after a code review. Near the end of the session, he also rebuilt WebKitGTK+ to test why Google Docs was not working for him anymore in Web. And nobody believed that he could build it that quickly. Looks like opinions based on past experiences are quite hard to change.

Mathieu worked on removing jhbuild's .desktop file, as nobody seems to use it and it was creating the Sundry category for him in gnome-shell. He also spent time looking into the tracker blocker that is Mozilla's Focus, based on Disconnect.me's block lists. It's not as effective as uBlock when it comes to blocking adverts, but the memory and performance improvements, and the slow churn rate, could make it a good default blocker to have in Web.

Haïkel looked into using Emeus, potentially the new GTK+ 4.0 layout manager, to implement the series properties page for Videos.

Finally, I added Bolso to jhbuild, and struggled to get gnome-online-accounts/gnome-keyring to behave correctly in my installation, as the application just did not want to log in properly to the service. I also discussed Fedora's privacy policy (inappropriate for Fedora Workstation, as it doesn't cover the services used in the default installation), a potential design for Flatpak support of joypads and removable devices in general, as well as the future design of the Network panel.

Does $FEATURE work under Wayland?

Posted by Peter Hutterer on November 14, 2016 12:45 AM

I've written more extensively about this here but here's an analogy that should get the point across a bit better: Wayland is just a protocol, just like HTTP. In both cases, you have two sides with very different roles and functionality. In the HTTP case, you have the server (e.g. Apache) and the client (a browser, e.g. Firefox). The communication protocol is HTTP but both sides make a lot of decisions unrelated to the protocol. The server decides what data is sent, the client decides how the data is presented to the user. Wayland is very similar. The server, called the "compositor", decides what data is sent (also: which of the clients even gets the data). The client renders the data [1] and decides what to do with input like key strokes, etc.

Asking Does $FEATURE work under Wayland? is akin to asking Does $FEATURE work under HTTP?. The only answer is: it depends on the compositor and on the client. It's the wrong question. You should ask questions related to the compositor and the client instead, e.g. "does $FEATURE work in GNOME?" or "does $FEATURE work in GTK applications?". That's a question that can be answered.

Of course, there are some cases where the fault is really the protocol itself. But often enough, it's not.

[1] albeit it does so by telling the compositor to display it. The analogy with HTTP only works to some extent... :)

Tor, TPMs and service integrity attestation

Posted by Matthew Garrett on November 10, 2016 08:48 PM
One of the most powerful (and most scary) features of TPM-based measured boot is the ability for remote systems to request that clients attest to their boot state, allowing the remote system to determine whether the client has booted in the correct state. This involves each component in the boot process writing a hash of the next component into the TPM and logging it. When attestation is requested, the remote site gives the client a nonce and asks for an attestation; the client OS passes the nonce to the TPM, asks it to provide a signed copy of the hashes and the nonce, and sends them (and the log) to the remote site. The remote site then replays the log to ensure it matches the signed hash values, and can examine the log to determine whether the system is trustworthy (whatever trustworthy means in this context).
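
The log replay step is conceptually simple: a PCR starts at zero and each logged measurement extends it as PCR = H(PCR || measurement). Here's a toy Python sketch of that verification, using SHA-1 as in TPM 1.2 and eliding the signature check over the quote itself:

import hashlib

def replay_log(event_log):
    """Replay a measurement log and return the final PCR value."""
    pcr = b'\x00' * 20  # PCRs start zeroed
    for measurement in event_log:  # each entry: a 20-byte SHA-1 digest
        pcr = hashlib.sha1(pcr + measurement).digest()
    return pcr

def verify(event_log, quoted_pcr_value):
    # The remote end trusts the attestation only if the replayed value
    # matches the PCR value the TPM signed together with the nonce.
    return replay_log(event_log) == quoted_pcr_value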

When this was first proposed people were (justifiably!) scared that remote services would start refusing to work for users who weren't running (for instance) an approved version of Windows with a verifiable DRM stack. Various practical matters made this impossible. The first was that, until fairly recently, there was no way to demonstrate that the key used to sign the hashes actually came from a TPM[1], so anyone could simply generate a set of valid hashes, sign them with a random key and provide that. The second is that even if you have a signature from a TPM, you have no way of proving that it's from the TPM that the client booted with (you can MITM the request and either pass it to a client that did boot the appropriate OS or to an external TPM that you've plugged into your system after boot and then programmed appropriately). The third is that, well, systems and configurations vary so much that outside very controlled circumstances it's impossible to know what a "legitimate" set of hashes even is.

As a result, so far remote attestation has tended to be restricted to internal deployments. Some enterprises use it as part of their VPN login process, and we've been working on it at CoreOS to enable Kubernetes clusters to verify that workers are in a trustworthy state before running jobs on them. While useful, this isn't terribly exciting for most people. Can we do better?

Remote attestation has generally been thought of in terms of remote systems requiring that clients attest. But there's nothing that requires things to be done in that direction. There's nothing stopping clients from being able to request that a server attest to its state, allowing clients to make informed decisions about whether they should provide confidential data. But the problems that apply to clients apply equally well to servers. Let's work through them in reverse order.

We have no idea what expected "good" values are

Yes, and this is a problem. CoreOS ships with an expected set of good values, and we had general agreement at the Linux Plumbers Conference that other distributions would start looking at what it would take to do the same. But how do we know that those values are themselves trustworthy? In an ideal world this would involve reproducible builds, allowing anybody to grab the source code for the OS, build it locally and verify that they have the same hashes.

Ok. So we're able to verify that the booted OS was good. But how about the services? The rkt container runtime supports measuring each container into the TPM, which means we can verify which container images were started. If container images are also built in such a way that they're reproducible, users can grab the source code, rebuild the container locally and again verify that it has the same hashes. Users can then be sure that the remote site is running the code they're looking at.

Or can they? Not really - a general purpose OS has all kinds of ways to inject code into containers, so an admin could simply replace the binaries inside the container after it's been measured, or ptrace() the server, or modify rkt so it generates correct measurements regardless of the image or, well, there's lots they could do. So a general purpose OS is probably a bad idea here. Instead, let's imagine an immutable OS that does nothing other than bring up networking and then reads a config file that tells it which container images to download and run. This reduces the amount of code that needs to support reproducible builds, making it easier for a client to verify that the source corresponds to the code the remote system is actually running.

Is this sufficient? Eh sadly no. Even if we know the valid values for the entire OS and every container, we don't know the legitimate values for the system firmware. Any modified firmware could tamper with the rest of the trust chain, making it possible for you to get valid OS values even if the OS has been subverted. This isn't a solved problem yet, and really requires hardware vendor support. Let's handwave this for now, or assert that we'll have some sidechannel for distributing valid firmware values.

Avoiding TPM MITMing

This one's more interesting. If I ask the server to attest to its state, it can simply pass that through to a TPM running on another system that's running a trusted stack and happily serve me content from a compromised stack. Suboptimal. We need some way to tie the TPM identity and the service identity to each other.

Thankfully, we have one. Tor supports running services in the .onion TLD. The key used to identify the service to the Tor network is also used to create the "hostname" of the system. I wrote a pretty hacky implementation that generates that key on the TPM, tying the service identity to the TPM. You can ask the TPM to prove that it generated a key, and that allows you to tie both the key used to run the Tor service and the key used to sign the attestation hashes to the same TPM. You now know that the attestation values came from the same system that's running the service, and that means you know the TPM hasn't been MITMed.

How do you know it's a TPM at all?

This is much easier. See [1].



There's still various problems around this, including the fact that we don't have this immutable minimal container OS, that we don't have the infrastructure to ensure that container builds are reproducible, that we don't have any known good firmware values and that we don't have a mechanism for allowing a user to perform any of this validation. But these are all solvable, and it seems like an interesting project.

"Interesting" isn't necessarily the right metric, though. "Useful" is. And I think this is very useful. If I'm about to upload documents to a SecureDrop instance, it seems pretty important that I be able to verify that it is a SecureDrop instance rather than something pretending to be one. This gives us a mechanism.

The next few years seem likely to raise interest in ensuring that people have secure mechanisms to communicate. I'm not emotionally invested in this one, but if people have better ideas about how to solve this problem then this seems like a good time to talk about them.

[1] More modern TPMs have a certificate that chains from the TPM's root key back to the TPM manufacturer, so as long as you trust the TPM manufacturer to have kept control of that you can prove that the signature came from a real TPM


Searching in GNOME Software

Posted by Richard Hughes on November 03, 2016 05:10 PM

I’ve spent a few days profiling GNOME Software on ARM, mostly out of curiosity but also to help our friends at Endless. I’ve merged a few patches that make the existing --profile code more useful for profiling startup speed. Already there have been some big gains, over 200ms of startup time and 12Mb of RSS, but there’s plenty more that we want to fix to make GNOME Software run really nicely on resource constrained devices.

One of the biggest delays is constructing the search token cache at startup. This is where we look at all the fields of the .desktop files, the AppData files and the AppStream files and split them in a UTF8-sane way into search tokens, adding them into a big hash table after stemming them. We do it with 4 threads by default as it’s trivially parallelizable. With the search cache, when we search we just ask all the applications in the store “do you have this search term” and if so it gets added to the search results and ordered according to how good the match is. This takes 225ms on my super-fast Intel laptop (and much longer on ARM), and this happens automatically the very first time you search for anything in GNOME Software.

At the moment we add (for each locale, including fallbacks) the package name, the app ID, the app name, the app's single-line description, the app keywords and the application's long description. The latter is the multi-paragraph description that's typically prose. We use 90% of the time spent loading the token cache just splitting and adding the words in the description. As the description is prose, we have to ignore quite a few words, e.g. "and", "the", "is" and "can" are some of the most frequent, useless words. Just by the nature of the text itself (long non-technical prose), it doesn't actually add many useful keywords to the search cache, and the ones that it does add are treated with such low priority that other, more important matches are ordered before them.
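
Conceptually the token cache is nothing fancier than the following sketch - in Python rather than the actual threaded C, and with a trivial stand-in for the stemmer:

STOP_WORDS = {'and', 'the', 'is', 'can'}

def stem(word):
    return word.rstrip('s')  # stand-in for a real stemmer

def build_cache(apps):
    cache = {}  # token -> set of application ids
    for app_id, fields in apps.items():
        for text in fields:  # name, summary, keywords, description, ...
            for word in text.lower().split():
                if word not in STOP_WORDS:
                    cache.setdefault(stem(word), set()).add(app_id)
    return cache

def search(cache, term):
    return cache.get(stem(term.lower()), set())

apps = {'gimp.desktop': ['GIMP', 'Create images and edit photographs']}
cache = build_cache(apps)
print(search(cache, 'photograph'))  # -> {'gimp.desktop'}

Dropping the long description simply means feeding fewer fields into the cache, with correspondingly fewer tokens to split, stem and store.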

My proposal: continue to consume everything else for the search cache, and drop using the description. This means we start way quicker, use less memory, but it does require upstream actually adds some [localized] Keywords=foo;bar;baz in either the desktop file or <keywords> in the AppData file. At the moment most do, especially after I sent ~160 emails to the maintainers that didn’t have any defined keywords in the Fedora 25 Alpha, so I think it’s fairly safe at this point. Comments?

Another GTK+ update

Posted by Matthias Clasen on November 02, 2016 04:44 PM

The GTK+ 4 work is continuing at full speed, and today I want to show one of the first concrete benefits from  the GSK merge: We can now record and replay frames. If you ever wondered why your animation does not look quite right, this might be just the tool for you.

Screencast: https://blogs.gnome.org/mclasen/files/2016/11/Screencast-from-11-02-2016-122859-PM.webm

To try this new tool, find the Recorder tab in the GTK+ inspector and use the record button. As you can see, we capture and save the render node tree for each frame that the application draws, as long as recording is enabled (you want to be a bit careful: this quickly eats up a lot of memory).

The leftmost pane lets you select one of the recorded frames. The middle pane shows the render node tree, and if you select a node there, the rendering on the right is updated to show only that subtree’s effect on the frame.

This is pretty nifty, and will hopefully be very useful in improving the GSK integration in GTK+, as well as helpful for debugging rendering problems in applications.

libinput touchpad pointer acceleration - laptop model names needed

Posted by Peter Hutterer on November 01, 2016 09:50 PM

I finally have a bit of time to look at touchpad pointer acceleration in libinput. But when I did, I found a grand total of 5 bugs across freedesktop.org and Red Hat's bugzilla, despite this being the first thing anyone brings up whenever libinput is mentioned. 5 bugs - that's not much to work on. Note that over time there were also a lot of bugs where pointer acceleration was fixed once the touchpad's axis ranges were corrected, which usually is a two-liner for the udev hwdb.

Anyway, point of this post: if you're still having issues with pointer acceleration on your touchpad in libinput, please file a bug against libinput and make it block the new tracker bug 98535. The libinput documentation has instructions on how to report a touchpad bug, but amongst the various things I need is your laptop model name.

Don't complain about it on reddit, phoronix, HN, or in some random forum, because you're just wasting bytes there and it won't get fixed that way.

Flatpak cross-compilation support: Epilogue

Posted by Bastien Nocera on October 31, 2016 12:00 PM
You might remember my attempts at getting easy-to-use cross-compilation for ARM applications on my x86-64 desktop machine.

With Fedora 25 approaching, I'm happy to say that the necessary changes to integrate the feature have now rolled into Fedora 25.

For example, to compile the GNU Hello Flatpak for ARM, you would run:

$ flatpak install gnome org.freedesktop.Platform/arm org.freedesktop.Sdk/arm
Installing: org.freedesktop.Platform/arm/1.4 from gnome
[...]
$ sudo dnf install -y qemu-user-static
[...]
$ TARGET=arm ./build.sh

For other applications, add the --arch=arm argument to the flatpak-builder command-line.

This example also works for 64-bit ARM with the architecture name aarch64.

Of course smart homes are targets for hackers

Posted by Matthew Garrett on October 28, 2016 05:23 PM
The Wirecutter, an in-depth comparative review site for various electrical and electronic devices, just published an opinion piece on whether users should be worried about security issues in IoT devices. The summary: avoid devices that don't require passwords (or don't force you to change a default) and devices that want you to disable security, follow general network security best practices, but otherwise don't worry - criminals aren't likely to target you.

This is terrible, irresponsible advice. It's true that most users aren't likely to be individually targeted by random criminals, but that's a poor threat model. As I've mentioned before, you need to worry about people with an interest in you. Making purchasing decisions based on the assumption that you'll never end up dating someone with enough knowledge to compromise a cheap IoT device (or even meeting an especially creepy one in a bar) is not safe, and giving advice that doesn't take that into account is a huge disservice to many potentially vulnerable users.

Of course, there's also the larger question raised by last week's problems. Insecure IoT devices still pose a threat to the wider internet, even if the owner's data isn't at risk. I may not be optimistic about the ease of fixing this problem, but that doesn't mean we should just give up. It is important that we improve the security of devices, and many vendors are just bad at that.

So, here are a few things that should be a minimum when considering an IoT device:
  • Does the vendor publish a security contact? (If not, they don't care about security)
  • Does the vendor provide frequent software updates, even for devices that are several years old? (If not, they don't care about security)
  • Has the vendor ever denied a security issue that turned out to be real? (If so, they care more about PR than security)
  • Is the vendor able to provide the source code to any open source components they use? (If not, they don't know which software is in their own product and so don't care about security, and also they're probably infringing my copyright)
  • Do they mark updates as fixing security bugs? (If not, they care more about hiding security issues than fixing them)
  • Has the vendor ever threatened to prosecute a security researcher? (If so, again, they care more about PR than security)
  • Does the vendor provide a public minimum support period for the device? (If not, they don't care about security or their users)

I've worked with big name vendors who did a brilliant job here. I've also worked with big name vendors who responded with hostility when I pointed out that they were selling a device with arbitrary remote code execution. Going with brand names is probably a good proxy for many of these requirements, but it's insufficient.

So here are my recommendations to The Wirecutter - talk to a wide range of security experts about the issues that users should be concerned about, and figure out how to test these things yourself. Don't just ask vendors whether they care about security, ask them what their processes and procedures look like. Look at their history. And don't assume that just because nobody's interested in you, everybody else's level of risk is equal.



Deckard and LibreOffice

Posted by Caolán McNamara on October 27, 2016 12:44 PM
LibreOffice reuses the same .ui format that GTK+ uses. This suggests that deckard could be used to preview translations of them.

Testing this out shows (as above) that it can be made to work. A few problems though:

1. We have various placeholder widgets which don't work in deckard because the widgets don't exist in GTK+, so dialogs that use them can't display; something falls over with e.g. "Invalid object type 'SvSimpleTableContainer'". I had hoped I'd get placeholders by default on failure.
2. Our .po translation entries for the dialog strings all have autogenerated msgctxt fields which don't correspond to the blank default of the .ui files, so the msgctxt fields have to be removed, then msguniq run to remove duplicates, and the result can then be run through msgfmt to create a .mo that works with deckard to show web previews; a sketch of that pipeline follows below.
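
Something along these lines should work for that conversion - an untested sketch; a naive sed won't handle msgctxt entries that span lines:

sed '/^msgctxt/d' dialogs.po | msguniq | msgfmt -o dialogs.mo -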

Dual-GPU integration in GNOME

Posted by Bastien Nocera on October 26, 2016 01:37 PM
Thanks to the work of Hans de Goede and many others, dual-GPU (aka NVidia Optimus or AMD Hybrid Graphics) support works better than ever in Fedora 25.

On my side, I picked up some work I originally did for Fedora 24 that ended up being blocked by hardware support. This brings better integration into GNOME.

The Details panel in Settings now shows which video cards you have in your (most likely) laptop.

dual-GPU Graphics

The second feature is what Blender and 3D video game users have been waiting for: a contextual menu item to launch the application on the more powerful GPU in your machine.

Mooo Powaa!

This demonstration uses a slightly modified GtkGLArea example, which shows which of the GPUs is used to render the application in the title bar.

on the integrated GPU

on the discrete GPU

Behind the curtain

Behind those 2 features, we have a simple D-Bus service which runs automatically on boot and stays running to offer a single property (HasDualGpu) that system components can use to decide what UI to present. This requires the "switcheroo" driver to work on the machine in question.
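
For the curious, reading that property from Python looks something like the sketch below. The well-known name, path and interface are assumptions based on the switcheroo-control service, so double-check them against the installed service files:

from gi.repository import Gio

proxy = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SYSTEM, Gio.DBusProxyFlags.NONE, None,
    'net.hadess.SwitcherooControl',   # assumed well-known name
    '/net/hadess/SwitcherooControl',  # assumed object path
    'net.hadess.SwitcherooControl',   # assumed interface
    None)

v = proxy.get_cached_property('HasDualGpu')
print('Dual GPU available:', v.get_boolean() if v is not None else 'unknown')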

Because of the way applications are launched on the discrete GPU, we cannot currently support D-Bus activated applications, but GPU-heavy D-Bus-integrated applications are few and far between right now.

Future plans

There's plenty more to do in this area, to polish the integration. We might want applications to tell us whether they'd prefer being run on the integrated or discrete GPU, as live switching between renderers is still something that's out of the question on Linux.

Wayland dual-GPU support, as well as support for the proprietary NVidia drivers, are also things that will be worked on, probably by my colleagues though, as the graphics stack really isn't my field.

And if the hardware becomes more widely available, we'll most certainly want to support hardware with hotpluggable graphics support (whether gaming laptop "power-ups" or workstation docks).

Availability

All the patches necessary to make this work are now available in GNOME git (targeted at GNOME 3.24), and backports are integrated in Fedora 25, due to be released shortly.

GTK+ happenings

Posted by Matthias Clasen on October 22, 2016 03:57 PM

I haven’t written about GTK+ development in some time. But now there are some exciting things happening that are worth writing about.

Plans

Back in June, a good number of GTK+ developers came together for a hackfest in Toronto. It was a very productive gathering. One of the topics we discussed there was the (lack of) stability of GTK+ 3 and versioning. We caused a bit of a firestorm by blogging about this right away… so we went back to the drawing board and had another long discussion about the pros and cons of various versioning schemes at GUADEC.

GTK+ BOF in Karlsruhe

The final, agreed-on plan was published on the GTK+ blog, and you can read it there.

Actions

Fast-forward to today, and we’ve made good progress on putting this plan into place.

GTK+ has been branched for 3.22, and all future GTK+ 3 releases will come from this branch. This is very similar to GTK+ 2, where we have the forever-stable 2.24 branch. We plan to maintain the 3.22 branch for several years, so applications can rely on a stable GTK+ 3.

One activity that you can see in the branch currently is that we are deprecating APIs that will go away in GTK+ 4. Most deprecations have been in place for a while (some even from 3.0!), but some functions have just been forgotten. Marking them as deprecated now will make it easier to port to GTK+ 4 in the future. Keep in mind that deprecations are an optional service – you don’t have to rush to act on them unless you want to port to the next version.

To avoid unnecessary heartburn and build breakage, we’ve switched jhbuild, GNOME continuous and the flatpak runtimes over to using the 3.22 branch before opening the master branch for new development, and did the necessary work to make the two branches parallel-installable.

With all these preparations in place, Benjamin and Timm went to work and did a big round of deprecation cleanup. Altogether, this removed some 80,000 lines of code. Next, we’ve merged Emmanuele’s GSK work. And there is a lot more work queued up, from modernizing the GDK layer, to redoing input handling, to building with meson.

The current git version of GTK+ calls itself 3.89, and we’re aiming to do a 3.90 release in spring, ideally keeping the usual 6 months cadence.

…and you

We hope that at least some of the core GNOME applications will switch to using 3.90 by next spring, since we need testing and validation. But… currently things are still a bit rough in master. The GSK port will need some more time to shake out rendering issues and make it as fast as it should be.

Therefore, we recommend that you stick with the 3.22 branch until we do a 3.89.1 release. By that time, the documentation should also have a 3 → 4 migration guide to help you with porting.

If you are eager to get ready for GTK+ 4 now, you can prepare your application by eliminating the deprecations that show up when you build against the latest 3.22 release.

Summary

This is an exciting time for GTK+! We will post regular updates as things are landing, but just following the weekly updates on the GTK+ blog should give you a good idea of what is going on.

    Fixing the IoT isn't going to be easy

    Posted by Matthew Garrett on October 22, 2016 05:14 AM
    A large part of the internet became inaccessible today after a botnet made up of IP cameras and digital video recorders was used to DoS a major DNS provider. This highlighted a bunch of things including how maybe having all your DNS handled by a single provider is not the best of plans, but in the long run there's no real amount of diversification that can fix this - malicious actors have control of a sufficiently large number of hosts that they could easily take out multiple providers simultaneously.

    To fix this properly we need to get rid of the compromised systems. The question is how. Many of these devices are sold by resellers who have no resources to handle any kind of recall. The manufacturer may not have any kind of legal presence in many of the countries where their products are sold. There's no way anybody can compel a recall, and even if they could it probably wouldn't help. If I've paid a contractor to install a security camera in my office, and if I get a notification that my camera is being used to take down Twitter, what do I do? Pay someone to come and take the camera down again, wait for a fixed one and pay to get that put up? That's probably not going to happen. As long as the device carries on working, many users are going to ignore any voluntary request.

    We're left with more aggressive remedies. If ISPs threaten to cut off customers who host compromised devices, we might get somewhere. But, inevitably, a number of small businesses and unskilled users will get cut off. Probably a large number. The economic damage is still going to be significant. And it doesn't necessarily help that much - if the US were to compel ISPs to do this, but nobody else did, public outcry would be massive, the botnet would not be much smaller and the attacks would continue. Do we start cutting off countries that fail to police their internet?

    Ok, so maybe we just chalk this one up as a loss and have everyone build out enough infrastructure that we're able to withstand attacks from this botnet and take steps to ensure that nobody is ever able to build a bigger one. To do that, we'd need to ensure that all IoT devices are secure, all the time. So, uh, how do we do that?

    These devices had trivial vulnerabilities in the form of hardcoded passwords and open telnet. It wouldn't take terribly strong skills to identify this at import time and block a shipment, so the "obvious" answer is to set up forces in customs who do a security analysis of each device. We'll ignore the fact that this would be a pretty huge set of people to keep up with the sheer quantity of crap being developed and skip straight to the explanation for why this wouldn't work.

    Yeah, sure, this vulnerability was obvious. But what about the product from a well-known vendor that included a debug app listening on a high numbered UDP port that accepted a packet of the form "BackdoorPacketCmdLine_Req" and then executed the rest of the payload as root? A portscan's not going to show that up[1]. Finding this kind of thing involves pulling the device apart, dumping the firmware and reverse engineering the binaries. It typically takes me about a day to do that. Amazon has over 30,000 listings that match "IP camera" right now, so you're going to need 99 more of me and a year just to examine the cameras. And that's assuming nobody ships any new ones.
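
    To make that concrete, here's a sketch of what knowing the protocol means in practice: a probe for a backdoor like this has to send the magic prefix to the right port before the service reacts at all, which is exactly why a blind portscan tells you nothing. The address and port below are made up for illustration:

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            struct sockaddr_in dst = {
                .sin_family = AF_INET,
                .sin_port   = htons(39889),   /* hypothetical port */
            };
            inet_pton(AF_INET, "192.168.1.23", &dst.sin_addr);

            /* the service only parses packets starting with the magic
             * string; anything else is silently dropped, so a generic
             * scanner never sees a response */
            static const char probe[] = "BackdoorPacketCmdLine_Req";
            sendto(fd, probe, sizeof(probe) - 1, 0,
                   (struct sockaddr *)&dst, sizeof(dst));
            close(fd);
            return 0;
        }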

    Even that's insufficient. Ok, with luck we've identified all the cases where the vendor has left an explicit backdoor in the code[2]. But these devices are still running software that's going to be full of bugs and which is almost certainly still vulnerable to at least half a dozen buffer overflows[3]. Who's going to audit that? All it takes is one attacker to find one flaw in one popular device line, and that's another botnet built.

    If we can't stop the vulnerabilities getting into people's homes in the first place, can we at least fix them afterwards? From an economic perspective, demanding that vendors ship security updates whenever a vulnerability is discovered, no matter how old the device is, is just not going to work. Many of these vendors are small enough that it'd be more cost effective for them to simply fold the company and reopen under a new name than it would be to put the engineering work into fixing a decade old codebase. And how does this actually help? So far the attackers building these networks haven't been terribly competent. The first thing a competent attacker would do would be to silently disable the firmware update mechanism.

    We can't easily fix the already broken devices, we can't easily stop more broken devices from being shipped and we can't easily guarantee that we can fix future devices that end up broken. The only solution I see working at all is to require ISPs to cut people off, and that's going to involve a great deal of pain. The harsh reality is that this is almost certainly just the tip of the iceberg, and things are going to get much worse before they get any better.

    Right. I'm off to portscan another smart socket.

    [1] The "connection refused" signal for UDP is an ICMP port unreachable message, and those are typically rate-limited to one per second, so a full scan of all 65,535 UDP ports takes most of a day - and even then you have no idea what the service actually does.

    [2] It's worth noting that this is usually leftover test or debug code, not an overtly malicious act. Vendors should have processes in place to ensure that this isn't left in release builds, but ah well.

    [3] My vacuum cleaner crashes if I send certain malformed HTTP requests to the local API endpoint, which isn't a good sign.


    Office Binary Document RC4 CryptoAPI Encryption

    Posted by Caolán McNamara on October 21, 2016 10:35 AM
    In LibreOffice we've long supported Microsoft Office's "Office Binary Document RC4 Encryption" for decrypting xls, doc and ppt. But somewhere along the line the Microsoft Office encryption scheme was replaced by a new one, "Office Binary Document RC4 CryptoAPI Encryption", which we didn't support. This is what the error dialog of...

    "The encryption method used in this document is not supported. Only Microsoft Office 97/2000 compatible password encryption is supported."

    ...from LibreOffice is telling you when you open, for example, an encrypted xls saved by a contemporary Microsoft Excel version.

    I got the newer scheme working this morning for xls, so from LibreOffice 5-3 onwards (I may backport to upstream 5-2 and Fedora 5-1) these variants can be successfully decrypted and viewed in LibreOffice.
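
    For the curious, the newer scheme is documented in Microsoft's MS-OFFCRYPTO specification: a base hash is computed once as SHA1(salt || password-as-UTF-16LE), and then each block of the stream gets its own RC4 key by hashing in the block number. A minimal sketch of that per-block derivation using OpenSSL's SHA-1 - an illustration of my reading of the spec, not LibreOffice's actual code:

        #include <openssl/sha.h>
        #include <stdint.h>
        #include <string.h>

        static void
        derive_block_key(const uint8_t h0[SHA_DIGEST_LENGTH], /* SHA1(salt || password) */
                         uint32_t block, unsigned key_bits,
                         uint8_t key_out[16])
        {
            uint8_t buf[SHA_DIGEST_LENGTH + 4];
            uint8_t digest[SHA_DIGEST_LENGTH];

            /* append the block number as a 32-bit little-endian integer */
            memcpy(buf, h0, SHA_DIGEST_LENGTH);
            buf[SHA_DIGEST_LENGTH + 0] = block & 0xff;
            buf[SHA_DIGEST_LENGTH + 1] = (block >> 8) & 0xff;
            buf[SHA_DIGEST_LENGTH + 2] = (block >> 16) & 0xff;
            buf[SHA_DIGEST_LENGTH + 3] = (block >> 24) & 0xff;
            SHA1(buf, sizeof(buf), digest);

            /* 40-bit keys are truncated to 5 bytes and zero-padded to
             * 128 bits; longer keys just take the leading digest bytes */
            memset(key_out, 0, 16);
            memcpy(key_out, digest, key_bits == 40 ? 5 : key_bits / 8);
        }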

    systemd.conf 2016 Over Now

    Posted by Lennart Poettering on October 04, 2016 10:00 PM

    systemd.conf 2016 is Over Now!

    A few days ago systemd.conf 2016 ended, our second conference of this kind. I personally enjoyed this conference a lot: the talks, the atmosphere, the audience, the organization, the location, they all were excellent!

    I'd like to take the opportunity to thank everybody involved. In particular I'd like to thank Chris, Daniel, Sandra and Henrike for organizing the conference, your work was stellar!

    I'd also like to thank our sponsors, without which the conference couldn't take place like this, of course. In particular I'd like to thank our gold sponsor, Red Hat, our organizing sponsor Kinvolk, as well as our silver sponsors CoreOS and Facebook. I'd also like to thank our bronze sponsors Collabora, OpenSUSE, Pantheon, Pengutronix, our supporting sponsor Codethink and last but not least our media sponsor Linux Magazin. Thank you all!

    I'd also like to thank the Video Operation Center ("VOC") for their amazing work on live-streaming the conference and making all talks available on YouTube. It's amazing how efficient the VOC is - simply stunning! Thank you guys!

    In case you missed this year's iteration of the conference, please have a look at our YouTube Channel. You'll find all of this year's talks there, as well as the ones from last year. (For example, my welcome talk is available here). Enjoy!

    We hope to see you again next year, for systemd.conf 2017 in Berlin!

    The importance of paying attention in building community trust

    Posted by Matthew Garrett on October 03, 2016 05:14 PM
    Trust is important in any kind of interpersonal relationship. It's inevitable that there will be cases where something you do will irritate or upset others, even if only to a small degree. Handling small cases well helps build trust that you will do the right thing in more significant cases, whereas ignoring things that seem fairly insignificant (or saying that you'll do something about them and then failing to do so) suggests that you'll also fail when there's a major problem. Getting the small details right is a major part of creating the impression that you'll deal with significant challenges in a responsible and considerate way.

    This isn't limited to individual relationships. Something that distinguishes good customer service from bad customer service is getting the details right. There are many industries where significant failures happen infrequently, but minor ones happen a lot. Would you prefer to give your business to a company that handles those small details well (even if they're not overly annoying) or one that just tells you to deal with them?

    And the same is true of software communities. A strong and considerate response to minor bug reports makes it more likely that users will be patient with you when dealing with significant ones. Handling small patch contributions quickly makes it more likely that a submitter will be willing to do the work of making more significant contributions. These things are well understood, and most successful projects have actively worked to reduce barriers to entry and to be responsive to user requests in order to encourage participation and foster a feeling that they care.

    But what's often ignored is that this applies to other aspects of communities as well. Failing to use inclusive language may not seem like a big thing in itself, but it leaves people with the feeling that you're less likely to do anything about more egregious exclusionary behaviour. Allowing a baseline level of sexist humour gives the impression that you won't act if there are blatant displays of misogyny. The more examples of these "insignificant" issues people see, the more likely they are to choose to spend their time somewhere else, somewhere they can have faith that major issues will be handled appropriately.

    There's a more insidious aspect to this. Sometimes we can believe that we are handling minor issues appropriately, that we're acting in a way that handles people's concerns, while actually failing to do so. If someone raises a concern about an aspect of the community, it's important to discuss solutions with them. Putting effort into "solving" a problem without ensuring that the solution has the desired outcome is not only a waste of time, it alienates those affected even more - they're now not only left with the feeling that they can't trust you to respond appropriately, but that you will actively ignore their feelings in the process.

    It's not always possible to satisfy everybody's concerns. Sometimes you'll be left in situations where you have conflicting requests. In that case the best thing you can do is to explain the conflict and why you've made the choice you have, and demonstrate that you took this issue seriously rather than ignoring it. Depending on the issue, you may still alienate some number of participants, but it'll be fewer than if you just pretend that it's not actually a problem.

    One warning, though: while building trust in this way enhances people's willingness to join your community, it also builds expectations. If a significant issue does arise, and if you fail to handle it well, you'll burn a lot of that trust in the process. The fact that you've built that trust in the first place may be what saves your community from disintegrating completely, but people will feel even more betrayed if you don't actively work to rebuild it. And if there's a pattern of mishandling major problems, no amount of getting the details right will matter.

    Communities that ignore these issues are, long term, likely to end up weaker than communities that pay attention to them. Making sure you get this right in the first place, and setting expectations that you will pay attention to your contributors, is a vital part of building a meaningful relationship between your community and its members.


    radv: status update or is Talos Principle rendering yet?

    Posted by Dave Airlie on September 27, 2016 04:33 AM
    The answer is YES!!

    I fixed the last bug with instance rendering and Talos renders great on radv now.

    Also, with the semi-interesting branch, vkQuake renders too. There are some upstream bugs in spirv/nir that need fixing; I'm awaiting an upstream resolution on those, but I've included some preliminary fixes in semi-interesting for now, which will go away once the upstream fixes are decided on.

    Here's a screenshot: [The Talos Principle rendering on radv]

    Comments about OARS and CSM age ratings

    Posted by Richard Hughes on September 22, 2016 07:40 AM

    I’ve had quite a few comments from people stating that using age rating classification values based on American culture is wrong. So far I’ve been using the Common Sense Media research (and various other psychology textbooks) to essentially clean-room implement an algorithm that maps content ratings to appropriate ages.

    Whilst I do agree that other cultures have different sensitivities (e.g. smoking in Uganda, references to Nazis in Germany) there doesn’t appear to be much research on the suggested age ratings for different categories in those specific countries. Lots of things are outright banned from sale for various reasons (which the populace may completely ignore), but there don’t seem to be many statistics that back up the various anecdotal statements. For instance, are there any US-specific guidelines that say that the age rating for playing a game that involves taking illegal drugs should be 18, rather than the 14 which is inferred from CSM? Or that the age rating should be 25+ for any game that features drinking alcohol in Saudi Arabia?
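
    To make the "content rating to age" idea concrete, here's a toy sketch of the approach: each OARS attribute/value pair maps to a minimum age, and an application's rating is the maximum across everything it declares. The values below are invented for illustration, not the real CSM-derived tables:

        #include <string.h>

        struct oars_attr { const char *id; const char *value; };
        struct oars_rule { const char *id; const char *value; int min_age; };

        /* illustrative numbers only */
        static const struct oars_rule rules[] = {
            { "drugs-narcotics",    "moderate", 14 },
            { "violence-realistic", "intense",  14 },
            { "language-profanity", "mild",      8 },
        };

        static int
        classify_age(const struct oars_attr *attrs, size_t n)
        {
            int age = 0;
            for (size_t i = 0; i < n; i++)
                for (size_t j = 0; j < sizeof(rules) / sizeof(rules[0]); j++)
                    if (strcmp(attrs[i].id, rules[j].id) == 0 &&
                        strcmp(attrs[i].value, rules[j].value) == 0 &&
                        rules[j].min_age > age)
                        age = rules[j].min_age;
            return age;
        }

    A country-specific mapping would then just be a different rules table per locale - which is precisely the data that seems to be missing.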

    Suggestions (especially references) welcome. Thanks!

    Microsoft aren't forcing Lenovo to block free operating systems

    Posted by Matthew Garrett on September 21, 2016 05:09 PM
    Update: Patches to fix this have been posted

    There's a story going round that Lenovo have signed an agreement with Microsoft that prevents installing free operating systems. This is sensationalist, untrue and distracts from a genuine problem.

    The background is straightforward. Intel platforms allow the storage to be configured in two different ways - "standard" (normal AHCI on SATA systems, normal NVMe on NVMe systems) or "RAID". "RAID" mode is typically just changing the PCI IDs so that the normal drivers won't bind, ensuring that drivers that support the software RAID mode are used. Intel have not submitted any patches to Linux to support the "RAID" mode.
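
    To make the mechanism concrete: Linux PCI drivers declare a table of vendor/device IDs they're willing to bind to, so if the firmware exposes the controller under a different device ID in "RAID" mode, nothing in the table matches and no driver attaches. A rough sketch of the shape of such a table (the ID shown is illustrative):

        #include <linux/module.h>
        #include <linux/pci.h>

        /* the ahci driver binds by PCI ID; in "RAID" mode the firmware
         * reports a different device ID, so a table like this never
         * matches and the disk never shows up */
        static const struct pci_device_id ahci_ids[] = {
            { PCI_VDEVICE(INTEL, 0x9d03) },  /* AHCI-mode ID (illustrative) */
            { }                              /* no entry for the RAID-mode ID */
        };
        MODULE_DEVICE_TABLE(pci, ahci_ids);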

    In this specific case, Lenovo's firmware defaults to "RAID" mode and doesn't allow you to change that. Since Linux has no support for the hardware when configured this way, you can't install Linux (distribution installers will boot, but won't find any storage device to install the OS to).

    Why would Lenovo do this? I don't know for sure, but it's potentially related to something I've written about before - recent Intel hardware needs special setup for good power management. The storage driver that Microsoft ship doesn't do that setup. The Intel-provided driver does. "RAID" mode prevents the Microsoft driver from binding and forces the user to use the Intel driver, which means they get the correct power management configuration, battery life is better and the machine doesn't melt.

    (Why not offer the option to disable it? A user who did would end up with a machine that doesn't boot, and if they managed to figure that out they'd have worse power management. That increases support costs. For a consumer device, why would you bother? The number of people buying these laptops to run anything other than Windows is minuscule.)

    Things are somewhat obfuscated due to a statement from a Lenovo rep: "This system has a Signature Edition of Windows 10 Home installed. It is locked per our agreement with Microsoft." It's unclear what this is meant to mean. Microsoft could be insisting that Signature Edition systems ship in "RAID" mode in order to ensure that users get a good power management experience. Or it could be a misunderstanding regarding UEFI Secure Boot - Microsoft do require that Secure Boot be enabled on all Windows 10 systems, but (a) the user must be able to manage the key database and (b) there are several free operating systems that support UEFI Secure Boot and have appropriate signatures. Neither interpretation indicates that there's a deliberate attempt to prevent users from installing their choice of operating system.

    The real problem here is that Intel do very little to ensure that free operating systems work well on their consumer hardware - we still have no information from Intel on how to configure systems to ensure good power management, we have no support for storage devices in "RAID" mode and we have no indication that this is going to get better in future. If Intel had provided that support, this issue would never have occurred. Rather than be angry at Lenovo, let's put pressure on Intel to provide support for their hardware.


    GNOME Software and Age Ratings

    Posted by Richard Hughes on September 21, 2016 09:57 AM

    After all the tarballs for GNOME 3.22, the master branch of gnome-software is now open to new features. Along with the usual cleanups and speedups, one new feature I’ve been working on is finally merging the age ratings work.

    [screenshot of the new age ratings display in gnome-software]

    The age ratings are provided by the upstream-supplied OARS metadata in the AppData file (which can be generated easily online) and then an age classification is generated automatically using the advice from the appropriately-named Common Sense Media group. At the moment I’m not doing any country-specific mapping, although something like this will be required to show appropriate ratings when handling topics like alcohol and drugs.

    At the moment the only applications with ratings in Fedora 26 will be Steam games, but I’ve also emailed every maintainer whose appdata file includes an <update_contact> email address and whose application identifies as a game in its desktop categories. If you ship an application with an AppData file and you think you should have an age rating, please use the generator and add the extra few lines to your AppData file (see the example below). At the moment there’s no requirement for the extra data, although that might be something we introduce just for games in the future.
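
    If you’re not sure what those extra lines look like, this is the sort of OARS block the generator produces (the attribute values here are purely illustrative):

        <content_rating type="oars-1.0">
          <content_attribute id="violence-cartoon">mild</content_attribute>
          <content_attribute id="drugs-alcohol">none</content_attribute>
        </content_rating>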

    I don’t think many other applications will need the extra application metadata, but if you know of any adult only applications (e.g. in Fedora there’s an application for the sole purpose of downloading p0rn) please let me know and I’ll contact the maintainer and ask what they think about the idea. Comments, as always, welcome. Thanks!

    libinput and the Lenovo T460 series trackstick

    Posted by Peter Hutterer on September 20, 2016 06:43 AM

    First a definition: a trackstick is also called trackpoint, pointing stick, or "that red knob between G, H, and B". I'll be using trackstick here, because why not.

    This post is the continuation of libinput and the Lenovo T450 and T460 series touchpads, where we focused on a pointer that stalled when moving the finger really slowly. It turns out the T460s at least, and possibly others in the *60 series, have another bug that causes much worse behaviour, but we didn't notice it for ages because we were focusing on the high-precision cursor movement. Specifically, the pointer would just randomly stop moving for a short while (spoiler alert: 300ms), regardless of the movement speed.

    libinput has built-in palm detection and one of the things it does is to disable the touchpad when the trackstick is in use. It's not uncommon to rest the hand near or on the touchpad while using the trackstick and any detected touch would cause interference with the pointer motion. So events from the touchpad are ignored whenever the trackpoint sends events. [1]

    On (some of) the T460s the trackpoint sends spurious events. In the recording I have, we see random events at 9s, then again 3.5s later, then 14s later, then 2s later, etc. Each time, our palm detection code would assume the trackpoint was in use and disable the touchpad for 300ms. If you were using the touchpad while this was happening, the touchpad would suddenly stop moving for 300ms and then continue as normal. Depending on how often these spurious events come in and the user's current caffeination state, this was somewhere between odd, annoying and infuriating.

    The good news is: this is fixed in libinput now. libinput 1.5 and the upcoming 1.4.3 releases will have a fix that ignores these spurious events and makes the touchpad stalls a footnote of history. Hooray.
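
    For the curious, here's a simplified sketch of the debouncing idea - an illustration, not libinput's actual code, and the threshold below is an assumption: a lone trackpoint event is treated as noise, and the touchpad is only disabled once several events arrive close together.

        #include <stdbool.h>
        #include <stdint.h>

        #define PALM_TIMEOUT_MS 300  /* touchpad disable window */
        #define MIN_EVENT_COUNT 3    /* assumed debounce threshold */

        struct tp_state {
            uint64_t last_trackpoint_ms;
            int      recent_events;
        };

        static void
        trackpoint_event(struct tp_state *tp, uint64_t now_ms)
        {
            /* count events arriving close together; a long gap resets */
            if (now_ms - tp->last_trackpoint_ms > PALM_TIMEOUT_MS)
                tp->recent_events = 0;
            tp->recent_events++;
            tp->last_trackpoint_ms = now_ms;
        }

        static bool
        touchpad_enabled(const struct tp_state *tp, uint64_t now_ms)
        {
            /* the touchpad stays on unless sustained trackpoint use
             * was seen within the last 300ms */
            return tp->recent_events < MIN_EVENT_COUNT ||
                   now_ms - tp->last_trackpoint_ms > PALM_TIMEOUT_MS;
        }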

    [1] We still allow physical button presses on the touchpad, and trackpoint button clicks won't disable the touchpad.