Fedora desktop Planet

All Systems Go! 2018 CfP Open

Posted by Lennart Poettering on May 20, 2018 10:00 PM

The All Systems Go! 2018 Call for Participation is Now Open!

The Call for Participation (CFP) for All Systems Go! 2018 is now open. We’d like to invite you to submit your proposals for consideration to the CFP submission site.


The CFP will close on July 30th. Notification of acceptance and non-acceptance will go out within 7 days of the closing of the CFP.

All topics relevant to foundational open-source Linux technologies are welcome. In particular, however, we are looking for proposals including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • BPF and eBPF filtering
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome, as long as they have a clear and direct relevance for user-space.

For more information please visit our conference website!

X server pointer acceleration analysis - part 4

Posted by Peter Hutterer on May 14, 2018 05:08 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In the first three parts, I covered the X server and synaptics pointer acceleration curves and how libinput compares to the X server pointer acceleration curve. In this post, I will compare libinput to the synaptics acceleration curve.

Comparison of synaptics and libinput

libinput has multiple different pointer acceleration curves, depending on the device. In this post, I will only consider the one used for touchpads. So let's compare the synaptics curve with the libinput curve at the default configurations:

But this one doesn't tell the whole story, because the touchpad accel for libinput actually changes once we get faster. So here are the same two curves, but this time with the range extended to 1000mm/s. These two graphs show that libinput is both very similar to and very different from synaptics. Both curves have an acceleration factor of less than 1 for the majority of speeds; they both decelerate the touchpad more than they accelerate it. synaptics has two factors it sticks to and a short curve between them; libinput has a short deceleration curve, and its plateau is the same as or lower than synaptics' for the most part. Once the threshold is hit at around 250 mm/s, libinput's acceleration keeps increasing until it hits a maximum much later.

So, for anything under ~20mm/s, libinput should be the same as synaptics (ignoring the <7mm/s deceleration), and for anything less than 250mm/s, libinput should be slower. I say "should be" because that is not actually the case: synaptics is slower in practice, so I suspect the server scaling slows synaptics down even further. Hacking around in the libinput code, I found that moving libinput's baseline to 0.2 matches the synaptics cursor's speed. However, AFAIK that scaling depends on the screen size, so your mileage may vary.

Comparing configuration settings

Let's overlay the libinput speed toggles. In Part 2 we saw that the synaptics toggles are open-ended, so it's a bit hard to pick a specific set to compare against. I'll be using the same combined configuration options as in the diagram there.

And we need the diagram from 0-1000mm/s as well. There isn't much I can talk about here in direct comparison: the curves are quite different, and the synaptics curves vary greatly with the configuration options (even though the shape remains the same).


It's fairly obvious that the acceleration profiles are very different once we depart from the default settings. Most notably, only libinput's slowest speed setting matches the 0.2 speed that is the synaptics default setting. In other words, if your touchpad is too fast compared to synaptics, it may not be possible to slow it down sufficiently. Likewise, even at the fastest speed, the baseline is well below the synaptics baseline for e.g. 0.6 [1], so if your touchpad is too slow, you may not be able to speed it up sufficiently (at least for low speeds). That problem won't exist for the maximum acceleration factor; the main question here is simply whether they are too high. Answer: I don't know.

So the base speed of the touchpad in libinput needs a higher range; that's IMO a definitive bug that I need to work on. The rest... I don't know. Let's see how we go.

[1] A configuration I found suggested in some forum when googling for MinSpeed, so let's assume there's at least one person out there using it.

X server pointer acceleration analysis - part 3

Posted by Peter Hutterer on May 14, 2018 05:08 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In Part 1 and Part 2 I showed the X server acceleration code as used by the evdev and synaptics drivers. In this part, I'll show how it compares against libinput.

Comparison to libinput

libinput has multiple different pointer acceleration curves, depending on the device. In this post, I will only consider the default one used for mice. A discussion of the touchpad acceleration curve comes later. So, back to the graph of the simple profile. Let's overlay this with the libinput pointer acceleration curve:

Turns out that libinput's pointer acceleration curve, which was mostly modeled after the xserver behaviour, roughly matches the xserver behaviour. Note that libinput normalizes to 1000dpi (provided MOUSE_DPI is set correctly) and thus the curves only match this way for 1000dpi devices.

libinput's deceleration is slightly different but I doubt it is really noticeable. The plateau of no acceleration is virtually identical, i.e. at slow speeds libinput moves like the xserver's pointer does. Likewise for speeds above ~33mm/s, libinput and the server accelerate by the same amount. The actual curve is slightly different. It is a linear curve (I doubt that's noticeable) and it doesn't have that jump in it. The xserver acceleration maxes out at roughly 20mm/s. The only difference in acceleration is for the range of 10mm/s to 33mm/s.

30mm/s is still a relatively slow movement (just move your mouse by 30mm within a second; it doesn't feel fast). This means that for all but slow movements, the current server and libinput acceleration provide nothing but a flat acceleration at whatever the maximum acceleration is set to.

Comparison of configuration options

The biggest difference libinput has to the X server is that it exposes a single knob of normalised continuous configuration (-1.0 == slowest, 1.0 == fastest). It relies on settings like MOUSE_DPI to provide enough information to map a device into that normalised range.
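
For illustration, here is a minimal sketch of how a caller (say, a compositor or a configuration tool) applies that knob through libinput's configuration API. Obtaining the device handle from a libinput context is omitted here:

#include <stdio.h>
#include <libinput.h>

/* Sketch: apply libinput's single normalised speed knob to a device.
 * -1.0 is slowest, 1.0 is fastest, 0.0 is the default. */
static void
set_pointer_speed(struct libinput_device *device, double speed)
{
    /* clamp to the documented range */
    if (speed < -1.0)
        speed = -1.0;
    else if (speed > 1.0)
        speed = 1.0;

    /* not all devices (e.g. keyboards) have pointer acceleration */
    if (!libinput_device_config_accel_is_available(device))
        return;

    if (libinput_device_config_accel_set_speed(device, speed) !=
        LIBINPUT_CONFIG_STATUS_SUCCESS)
        fprintf(stderr, "failed to set speed %.2f\n", speed);
}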

Let's look at the libinput speed settings and their effect on the acceleration profile (libinput 1.10.x).

libinput's speed setting is a combination of changing thresholds and accel at the same time. The faster you go, the sooner acceleration applies and the higher the maximum acceleration is. For very slow speeds, libinput provides deceleration. Noticeable here though is that the baseline speed is the same until we get to speed settings of less than -0.5 (where we have an effectively flat profile anyway). So up to the (speed-dependent) threshold, the mouse speed is always the same.
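
To illustrate the idea (with purely hypothetical numbers, not libinput's actual mapping), a single normalised speed can be thought of as interpolating a (threshold, max accel) pair:

/* Sketch, not libinput's actual mapping: derive a threshold and a max
 * acceleration factor from the single normalised speed setting, so
 * that higher speeds mean acceleration kicks in sooner and tops out
 * higher. The ranges below are made up for illustration only. */
struct accel_params {
    double threshold; /* mm/s */
    double max_accel; /* unitless factor */
};

static struct accel_params
params_for_speed(double speed /* -1.0 .. 1.0 */)
{
    struct accel_params p;

    p.threshold = 25.0 - 15.0 * speed; /* 40 at -1.0, 10 at 1.0 */
    p.max_accel = 2.0 + 1.0 * speed;   /* 1.0 at -1.0, 3.0 at 1.0 */

    return p;
}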

Let's look at the comparison of libinput's speed setting to the accel setting in the simple profile:

Immediately obvious: libinput's range is a lot smaller than what the accel setting allows (that one is effectively unbounded). This obviously applies to the deceleration as well. I'm not posting the threshold comparison, as Part 1 shows it does not affect the maximum acceleration factor anyway.


So, where does this leave us? I honestly don't know. The curves are different, but the only paper I could find on comparing acceleration curves is Casiez and Roussel's 2011 UIST paper. It provides a comparison of the X server acceleration with the Windows and OS X acceleration curves [1]. It shows quite a difference between the three systems, but the authors note that no specific acceleration curve is definitely superior. However, the most interesting bit here is that both the Windows and the OS X curve seem to be constant acceleration (with very minor changes) rather than changing the curve shape.

Either way, there is one possible solution for libinput to implement: to change the base plateau with the speed. Otherwise libinput's acceleration curve is well defined for the configurable range. And a maximum acceleration factor of 3.5 is plenty for a properly configured mouse (generally anything above 3 is tricky to control). AFAICT, the main issues with pointer acceleration come from mice that either don't have MOUSE_DPI set or trackpoints which are, unfortunately, a completely different problem.

I'll probably also give the Windows/OS X approaches a try (i.e. same curve, different constant deceleration) and see how that goes. If it works well, that may be a solution, because it's easier to scale into a large range. Otherwise, *shrug*, someone will have to come up with a better solution.

[1] I've never been able to reproduce the same gain (== factor) but at least the shape and x axis seems to match.

X server pointer acceleration analysis - part 2

Posted by Peter Hutterer on May 14, 2018 05:08 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In Part 1 I showed the X server acceleration code as used by the evdev driver (which leaves all acceleration up to the server). In this part, I'll show the acceleration code as used by the synaptics touchpad driver. This driver installs a device-specific acceleration profile but beyond that the acceleration is... difficult. The profile itself is not necessarily indicative of the real movement, the coordinates are scaled between device-relative, device-absolute, screen-relative, etc. so often that it's hard to keep track of what the real delta is. So let's look at the profile only.

Diagram generation

Diagrams were generated by gnuplot, parsing .dat files generated by the ptrveloc tool in the git repo. Helper scripts to regenerate all data are in the repo too. Default values unless otherwise specified:

  • MinSpeed: 0.4
  • MaxSpeed: 0.7
  • AccelFactor: 0.04
  • dpi: 1000 (used for converting units to mm)
All diagrams are limited to 100 mm/s and a factor of 5 so they are directly comparable. From earlier testing I found that movements above 300 mm/s are rare; once you hit 500 mm/s the acceleration doesn't really matter that much anymore, you're going to hit the screen edge anyway.

The choice of 1000 dpi is a difficult one. It makes the diagrams directly comparable to those in Part 1, but touchpads vary greatly in their resolution. For example, an ALPS DualPoint touchpad may have resolutions of 25-32 units/mm. A Lenovo T440s has a resolution of 42 units/mm over PS/2 but 20 units/mm over the newer SMBus/RMI4 protocol. This is the same touchpad. Overall it doesn't actually matter that much though, see below.

The acceleration profile

This driver has a custom acceleration profile, configured by the MinSpeed, MaxSpeed and AccelFactor options. The former two put a cap on the factor but MinSpeed also adjusts (overwrites) ConstantDeceleration. The AccelFactor defaults to a device-specific size based on the device diagonal.

Let's look at the defaults of 0.4/0.7 for min/max and 0.04 (default on my touchpad) for the accel factor:

The simple profile from part 1 is shown in this graph for comparison. The synaptics profile is printed as two curves, one for the profile output value and one for the real value used on the delta. Unlike with the simple profile, you cannot configure ConstantDeceleration separately; it depends on MinSpeed. Thus the real acceleration factor is always less than 1, so the synaptics driver doesn't accelerate as such; it controls how much the deltas are decelerated.

The actual acceleration curve is just a plain old linear interpolation between the min and max acceleration values. If you look at the curves more closely you'll find that there is no acceleration up to 20mm/s and flat acceleration from 25mm/s onwards. Only in this small speed range does the driver adjust its acceleration based on input speed. Whether this is intentional or just happened, I don't know.
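
For illustration, here is a sketch of that shape (not the driver's actual code): a linear function of the velocity, clamped to the MinSpeed/MaxSpeed band:

/* Sketch of the profile shape described above, not the synaptics
 * driver's actual code: velocity * AccelFactor, clamped between
 * MinSpeed and MaxSpeed. The constant deceleration that MinSpeed
 * overwrites is applied separately by the server and is what pushes
 * the real factor below 1. */
static double
synaptics_like_profile(double velocity,     /* in mickeys */
                       double min_speed,    /* default: 0.4 */
                       double max_speed,    /* default: 0.7 */
                       double accel_factor) /* 0.04 on my touchpad */
{
    double factor = velocity * accel_factor;

    if (factor < min_speed)
        return min_speed;
    if (factor > max_speed)
        return max_speed;
    return factor;
}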

The accel factor depends on the touchpad x/y axis. On my T440s using PS/2, the factor defaults to 0.04. If I get it to use SMBus/RMI4 instead of PS/2, that same device has an accel factor of 0.09. An ALPS touchpad may have a factor of 0.13, based on the min/max values for the x/y axes. These devices all have different resolutions though, so here are the comparison graphs taking the axis range and the resolution into account:

The diagonal affects the accel factor, so these three touchpads (two curves are the same physical touchpad, just using a different bus) get slightly different acceleration curves. They're more similar than I expected though, and for the rest of this post we can get away with just looking at the 0.04 default value from my touchpad.

Note that due to how synaptics is handled in the server, this isn't the whole story; there is more coordinate scaling etc. happening after the acceleration code. The synaptics acceleration profile also does not account for uneven x/y resolutions; this is handled in the server afterwards. On touchpads with uneven resolutions the velocity thus depends on the vector: moving along the x axis produces differently sized deltas than moving along the y axis. However, anything applied later isn't speed-dependent but merely a constant scale, so these curves are still a good representation of what happens.

The effect of configurations

What does the acceleration factor do? It changes when acceleration kicks in and how steep the acceleration is.

And how do the min/max values play together? Let's adjust MinSpeed but leave MaxSpeed at 0.7.

MinSpeed lifts the baseline (i.e. the minimum acceleration factor), somewhat expected from a parameter named this way. But it looks again like we have a bug here. When MinSpeed and MaxSpeed are close together, our acceleration actually decreases once we're past the threshold. So counterintuitively, a higher MinSpeed can result in a slower cursor once you move faster.

MaxSpeed is not too different here:

The same bug is present, if the MaxSpeed is smaller or close to MinSpeed, our acceleration actually goes down. A quick check of the sources didn't indicate anything enforcing MinSpeed < MaxSpeed either. But otherwise MaxSpeed lifts the maximum acceleration factor.

These graphs look at the options in separation, in reality users would likely configure both MinSpeed and MaxSpeed at the same time. Since both have an immediate effect on pointer movement, trial and error configuration is simple and straightforward. Below is a graph of all three adjusted semi-randomly:

No surprises in there: the baseline (and thus slowest speed) changes, the maximum acceleration changes, and how long it takes to get there changes. The curves vary quite a bit though, so without knowing the configuration options, it's impossible to predict how a specific touchpad behaves.


The graphs above show the effect of configuration options in the synaptics driver. I purposely didn't put any specific analysis in and/or compare it to libinput. That comes in a future post.

X server pointer acceleration analysis - part 1

Posted by Peter Hutterer on May 14, 2018 05:07 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

Over the last few days, I once again tried to tackle pointer acceleration. After all, I still get plenty of complaints about how terrible libinput is and how the world was so much better without it. So I once more tried to understand the X server's pointer acceleration code. Note: the evdev driver doesn't do any acceleration, it's all handled in the server. Synaptics will come in part two, so this here focuses mostly on pointer acceleration for mice/trackpoints.

After a few failed attempts of live analysis [1], I finally succeeded extracting the pointer acceleration code into something that could be visualised. That helped me a great deal in going back and understanding the various bits and how they fit together.

The approach was: copy the ptrveloc.(c|h) files into a new project, set up a meson.build file, #define all the bits that are assumed to be there and voila, here's your library. Now we can build basic analysis tools provided we initialise all the structs the pointer accel code needs correctly. I think I succeeded. The git repo is here if anyone wants to check the data. All scripts to generate the data files are in the repository.

A note on language: the terms "speed" and "velocity" are subtly different but for this post the difference doesn't matter. The code uses "velocity" but "speed" is more natural to talk about, so just assume equivalence.

The X server acceleration code

There are 15 configuration options for pointer acceleration (ConstantDeceleration, AdaptiveDeceleration, AccelerationProfile, ExpectedRate, VelocityTrackerCount, Softening, VelocityScale, VelocityReset, VelocityInitialRange, VelocityRelDiff, VelocityAbsDiff, AccelerationProfileAveraging, AccelerationNumerator, AccelerationDenominator, AccelerationThreshold). Basically, every number is exposed as configurable knob. The acceleration code is a product of a time when we were handing out configuration options like participation medals at a children's footy tournament. Assume that for the rest of this blog post, every behavioural description ends with "unless specific configuration combinations apply". In reality, I think only four options are commonly used: AccelerationNumerator, AccelerationDenominator, AccelerationThreshold, and ConstantDeceleration. These four have immediate effects on the pointer movement and thus it's easy to do trial-and-error configuration.

The server has different acceleration profiles (called the 'pointer transfer function' in the literature). Each profile is a function that converts speed into a factor. That factor is then combined with other things like constant deceleration, but eventually our output delta forms as:

delta_out(x, y) = delta_in(x, y) * factor * deceleration

The output delta is passed back to the server and the pointer saunters over by a few pixels, happily bumping into any screen edge on the way.

The input for the acceleration profile is a speed in mickeys, a threshold (in mickeys) and a max accel factor (unitless). Mickeys are a bit tricky: a mickey is a device unit, so the acceleration is device-specific. The deltas for a mouse at 1000 dpi are 25% larger than the deltas for a mouse at 800 dpi (assuming the same physical distance and speed). The "Resolution" option in evdev can work around this, but by default this means that the acceleration factor is (on average) higher for high-resolution mice for the same physical movement. It also means that the xorg.conf snippet you found on stackoverflow probably does not do the same thing on your device.

The second problem with mickeys is that they require a frequency to map to a physical speed. If a device sends events every N ms, delta/N gives us a speed in units/ms. But we need mickeys for the profiles. Devices generally have a fixed reporting rate, and the speed in mickeys works out as (units/ms * velocity scale). This scale defaults to 10 in the server (the VelocityScale default value) and thus matches a device reporting at 100Hz (a discussion of this comes later). All graphs below were generated with this default value.
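
As a sketch of that mapping (assuming a fixed reporting interval, which real devices only approximate):

/* Sketch: convert a device delta into the speed the profiles consume,
 * following the units/ms * velocity-scale rule described above. With
 * the default scale of 10, a device reporting every 10ms (100Hz) gets
 * its deltas mapped 1:1 to mickeys. */
static double
delta_to_speed(double delta_units, double interval_ms,
               double velocity_scale /* server default: 10 */)
{
    return delta_units / interval_ms * velocity_scale;
}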

Back to the profile function and how it works: the threshold (usually) defines the minimum speed at which acceleration kicks in. The max accel factor (usually) limits the acceleration. So the simplest algorithm is:

if (velocity < threshold)
    return base_velocity;
factor = calculate_factor(velocity);
if (factor > max_accel)
    return max_accel;
return factor;
In reality, things are somewhere between this simple and "whoops, what have we done".

Diagram generation

Diagrams were generated by gnuplot, parsing .dat files generated by the ptrveloc tool in the git repo. Helper scripts to regenerate all data are in the repo too. Default values unless otherwise specified:

  • threshold: 4
  • accel: 2
  • dpi: 1000 (used for converting units to mm)
  • constant deceleration: 1
  • profile: classic
All diagrams are limited to 100 mm/s and a factor of 5 so they are directly comparable. From earlier testing I found that movements above 300 mm/s are rare; once you hit 500 mm/s the acceleration doesn't really matter that much anymore, you're going to hit the screen edge anyway.

Acceleration profiles

The server provides a number of profiles, but I have seen very little evidence that people use anything but the default "Classic" profile. Synaptics installs a device-specific profile. Below is a comparison of the profiles just so you get a rough idea what each profile does. For this post, I'll focus on the default Classic only.

First thing to point out here is that if you want to have your pointer travel to Mars, the linear profile is what you should choose. This profile is unusable without further configuration to bring the incline to a more sensible level. Only the simple and limited profiles have a maximum factor; all others increase acceleration indefinitely. The faster you go, the more they accelerate the movement. I find them completely unusable at anything but low speeds.

The classic profile transparently maps to the simple profile, so the curves are identical.

Anyway, as said above, profile changes are rare. The one we care about is the default profile: the classic profile which transparently maps to the simple profile (SimpleSmoothProfile() in the source).

Looks like there's a bug in the profile formula: at the threshold value, it jumps from 1 to 1.5 before the curve kicks in. This code was added in ~2008; apparently no-one noticed this in a decade.

The profile has deceleration (accel factor < 1 and thus decreasing the deltas) at slow speeds. This provides extra precision at slow speeds without compromising pointer speed at higher physical speeds.

The effect of config options

Ok, now let's look at the classic profile and the configuration options. What happens when we change the threshold?

First thing that sticks out: one of these is not like the others. The classic profile changes to the polynomial profile at thresholds less than 1.0. *shrug* I think there's some historical reason; I didn't chase it up.

Otherwise, the threshold not only defines when acceleration starts kicking in, it also affects the steepness of the curve. So a higher threshold also means acceleration builds up more slowly as the speed increases. It has no effect on the low-speed deceleration.

What happens when we change the max accel factor? This factor is actually set via the AccelerationNumerator and AccelerationDenominator options (because floats used to be more expensive than buying a house). At runtime, the Xlib function of your choice is XChangePointerControl(). That's what all the traditional config tools use (xset, your desktop environment pre-libinput, etc.).
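
For example, here is a minimal Xlib client that sets the defaults used in the diagrams here (max accel 2/1, threshold 4), equivalent to running "xset m 2/1 4":

#include <X11/Xlib.h>

/* Set the maximum acceleration factor to 2 (numerator 2, denominator 1)
 * and the threshold to 4 mickeys - the defaults used in the diagrams
 * in this post. Build with: cc -o setaccel setaccel.c -lX11 */
int
main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    XChangePointerControl(dpy, True, True, 2, 1, 4);

    XCloseDisplay(dpy);
    return 0;
}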

First thing that sticks out: one is not like the others. When max acceleration is 0, the factor is always zero for speeds exceeding the threshold. No user impact though: the server discards factors of 0.0 and leaves the input delta as-is.

Otherwise it's relatively unexciting: it changes the maximum acceleration without changing the incline of the function. And it has no effect on deceleration. Because the curves aren't linear ones they don't overlap 100%, but meh, whatever. The higher values are cut off in this view, but they just look like a larger version of the visible 2 and 4 curves.

Next config option: ConstantDeceleration. This one is handled outside of the profile, but the code is easy enough to follow: it's a basic multiplier applied together with the factor. (I cheated and just did this in gnuplot directly.)

Easy to see what happens with the curve here: it simply stretches vertically without changing the properties of the curve itself. If the deceleration is greater than 1, we get constant acceleration instead.

All this means that with the default profile, we have three ways of adjusting it. What we can't directly change is the incline, i.e. the actual process of acceleration remains the same.

Velocity calculation

As mentioned above, the profile applies to a velocity so obviously we need to calculate that first. This is done by storing each delta and looking at their direction and individual velocity. As long as the direction remains roughly the same and the velocity between deltas doesn't change too much, the velocity is averaged across multiple deltas - up to 16 in the default config. Of course you can change whether this averaging applies, the max time deltas or velocity deltas, etc. I'm honestly not sure anyone ever used any of these options intentionally or with any real success.
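
To make the averaging idea concrete, here is a much-simplified sketch (not the server's actual tracker code, which also weighs direction and sample age):

#include <math.h>

/* Much-simplified sketch of the delta averaging described above, not
 * the server's actual tracker code: walk backwards through the most
 * recent per-delta velocities (n is the number of samples stored, at
 * least 1) and average the run of samples that stay within a relative
 * difference of the newest one. */
#define TRACKER_COUNT 16

static double
average_velocity(const double velocities[TRACKER_COUNT], int n,
                 double rel_diff_limit)
{
    double newest = velocities[n - 1];
    double sum = 0.0;
    int used = 0;

    for (int i = n - 1; i >= 0; i--) {
        /* stop averaging once a sample deviates too much */
        if (fabs(velocities[i] - newest) > rel_diff_limit * newest)
            break;
        sum += velocities[i];
        used++;
    }

    return sum / used;
}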

Velocity scaling was explained above (units/ms * velocity scale). The default value for the scale is 10, equivalent to 100Hz. Of the 155 frequencies currently defined in 70-mouse.hwdb, only one is 100 Hz. The most common one here is 125Hz, followed by 1000Hz, followed by 166Hz and 142Hz. Now, the vast majority of devices don't have an entry in the hwdb file, so this data does not represent a significant sample set. But for modern mice, the default velocity scale of 10 is probably off by anywhere between 25% and a factor of 10. While this doesn't mean much for the local setup (users generally just move the numbers around until they're happy enough), it means that the actual values are largely meaningless for anyone but those with the same hardware.

Of note: the synaptics driver automatically sets VelocityScale to 80 (i.e. assuming 80Hz). This is correct for the vast majority of touchpads.


The graphs above show the X server's pointer acceleration for mice, trackballs and other devices and the effects of the configuration toggles. I purposely did not put any specific analysis in and/or comparison to libinput. That will come in a future post.

[1] I still have a branch somewhere where the server prints yaml to the log file which can then be extracted by shell scripts, passed on to python for processing and ++++ out of cheese error. redo from start ++++

System76 and the LVFS

Posted by Richard Hughes on May 09, 2018 03:16 PM

tl;dr: Don’t buy System76 hardware and expect to get firmware updates from the LVFS

System76 is a hardware vendor that builds laptops with the Pop!_OS Linux distribution pre-loaded. System76 machines do get firmware updates, but they do not use the fwupd and LVFS shared infrastructure. I'm writing this blog post so I can point people at some static text rather than writing out long replies to each person who emails me wanting to know why they don't just use the LVFS.

In April of last year, System76 contacted me, wanting to work out how to get on the LVFS. We wrote 30+ cordial emails back and forth with technical details. Discussions got stuck when we found out they currently use a nonfree firmware flash tool called afuefi rather than the UpdateCapsule mechanism from the UEFI specification. All vendors support capsule updates as a requirement for the Windows 10 compliance sticker, so it should be pretty easy to use this instead. Every major vendor of consumer laptops is already using capsules, e.g. Dell, HP, Lenovo and many others.

There was some resistance to not using the proprietary afuefi executable to do the flashing. I still don't know if System76 has permission to redistribute afuefi. We certainly can't include the non-free and non-redistributable afuefi as a binary in the .cab file uploaded to the LVFS: even if System76 does have special permission to distribute it, the LVFS would be a 3rd party and is mirrored to various places. IANAL and all that.

An employee of System76 wrote a userspace tool in rust to flash the embedded controller (EC) using a reverse-engineered protocol (fwupd is written in C), and the intention was to add a plugin to fwupd to do this. Peter Jones suggested that most vendors just include the EC update as part of the capsule, as the EC and system firmware typically form a tightly-coupled pair. Peter also thought that afuefi is really just a wrapper for UpdateCapsule, and S76 was going to find out how to make the AMI BIOS just accept a capsule. Apparently they even built a capsule that works using UpdateCapsule.

I was really confused when things went so off-course with a surprise announcement in July that System76 had decided they would not use the LVFS and fwupd after all, despite all the discussion and how it had looked like everything was moving forward. Looking at the code, it seems the firmware update notifier and update process is now completely custom to System76 machines. This means it will only work when running Pop!_OS and not with Fedora, Debian, Ubuntu, SUSE, RHEL or any other distribution.

Apparently System76 decided that having their own client tools and firmware repository was a better fit for them. At this point the founder of System76 got cc’d and told me this wasn’t about politics, and it wasn’t competition. I then got told that I’d made the LVFS and fwupd more complicated than it needed to be, and that I should have adopted the infrastructure that System76 had built instead. This was all without them actually logging into the LVFS and seeing what features were available or what constraints were being handled…

The way forward from my point of view would be for System76 to spend a few hours making UpdateCapsule work correctly, another few days to build an EFI binary with the EC update, and a few more hours to write the metadata for the LVFS. I don't require an apology, and would happily create an OEM account on the LVFS for them. It looks instead like the PR and the exclusivity are more valuable than working with other vendors. I guess it might make sense for them to require Pop!_OS on their hardware, but it's not going to help when people buy System76 hardware and want to run Red Hat Enterprise Linux in a business. It also means System76 gets to maintain all this security-sensitive server and client code themselves for eternity.

It was a hugely disappointing end to the discussion as I had high hopes System76 would do the right thing and work with other vendors on shared infrastructure. I don’t actually mind if System76 doesn’t use fwupd and the LVFS, I just don’t want people to buy new hardware and be disappointed. I’ve heard nothing more from System76 about uploading firmware to the LVFS or using fwupd since about November, and I’d given multiple people many chances to clarify the way forward.

If you’re looking for a nice laptop that will run Linux really well, I’d suggest you buy a Dell XPS instead — it’ll work with any distribution you choose.

Fedora Atomic Workstation → Team Silverblue

Posted by Matthias Clasen on May 05, 2018 02:51 PM

I have been writing a long series of posts about my experience with Fedora Atomic Workstation, which I've enjoyed quite a bit. But all good things must come to an end. So, no more Atomic Workstation for me… since we're renaming it to Team Silverblue.

With this project, we want to take the Atomic Workstation as it is today and turn it into something that is not just cool for the few who know it, but useful for everybody who has a need for a Workstation.

There is still some work to be done to reach that goal. We have some ideas what is needed, but as with all open projects, those who show up to build it get a chance to shape the end product.

So, come and join us in Team Silverblue!

Fedora Atomic Workstation: Getting comfortable with GNOME Builder

Posted by Matthias Clasen on May 05, 2018 02:40 AM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

I am still going with my attempt to use Fedora Atomic Workstation fulltime as my main development system. As part of that, I am figuring out how to do GTK+ development on an immutable OS, using GNOME Builder.

As I’ve explained in a previous post, one thing I figured out early on is that while the flatpak support in GNOME Builder is optimized for desktop applications, you can use it for other things. In my case, that means developing GTK+.

Build Configurations

GNOME Builder knows what and how to build via its build configurations. It can have multiple such configurations for each project – the Build Preferences page has a list of them. In the flatpak case, build configurations correspond more or less 1:1 to flatpak manifests that are stored as json files in the source tree. Last time, I created one that launches the gtk4-demo application.

Currently, creating a new build configuration means copying an existing flatpak manifest to a new file, but in 3.28, GNOME Builder will support copying build configurations directly in the Build Preferences.


Influencing GTK+ at runtime for debugging purposes is often done via environment variables, such as GTK_DEBUG for controlling the debug output. Unfortunately, setting environment variables is currently not the most obvious thing in GNOME Builder's flatpak support.

There is an Environment section in the Build Preferences, but it sets environment variables at build time, not in the runtime sandbox that gtk4-demo will run in. GNOME Builder currently has no UI for setting up the runtime environment. We can still do it, by manually adding --env options in the finish-args section of the flatpak manifest, for example:

"finish-args" : [

This is hardly elegant, and it has the big downside that any change to the flatpak manifest will cause GNOME Builder to rebuild the project, even if the change only affects the runtime setup, as is the case here. I hope that GNOME Builder will gain proper runtime setup UI at some point.


But maybe it would be more natural to get a command prompt and set environment variables as usual, while launching apps from there.

GNOME Builder can actually give you a terminal, with Ctrl-Alt-Shift-T, but it is a terminal in the build sandbox, not the runtime sandbox.

Thankfully, we can help ourselves in the same way as before: Just create another build configuration, this time using “sh” as the command – that gives us a shell inside the runtime sandbox, and we can launch test apps from there while setting environment variables.


One aspect of GTK+ that I have worked on in this new setup is module loading. I've switched GTK+'s printing support to use a GIO extension point, and of course, I wanted to test this before pushing it.

So I was a bit surprised at first that the print dialog in gtk4-demo did not trigger my new module-loading code and yet seemed to work just fine. But then I remembered that we are working in a flatpak sandbox now, so GTK+'s portal support kicked in and helpfully redirected the print operation to an out-of-process print dialog.

Fedora Atomic Workstation: Theming Possible

Posted by Matthias Clasen on April 30, 2018 02:52 AM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

Today, I decided to try out how themes work with flatpak:ed apps on an Atomic Workstation.

To change themes, I need gnome-tweaks. I already had it installed from the gnome-nightly-apps repository. Unfortunately, it didn’t work, complaining about a missing python package in the runtime. Such is life with nightly builds.

Instead, I used package layering to install a stable version of gnome-tweaks:

$ rpm-ostree install gnome-tweaks

This worked better, and I found that there are no themes on the Atomic Workstation install, apart from the default ones (Adwaita and HighContrast). So, back to package layering once more. I installed Adapta:

$ rpm-ostree install adapta-gtk-theme

Now I could switch the theme to Adapta in gnome-tweaks.

After all this preparation, I was finally ready to see how flatpak:ed apps handle the new theme. But first I wanted to update them:

$ flatpak update
Looking for updates...
Installing for user: org.gtk.Gtk3theme.Adapta/x86_64/3.22 from flathub
[####################] 1 delta parts, 2 loose fetched; 336 KiB transferred in 1 seconds

Neat. Flatpak detected the current GTK+ theme (by reading the gsetting), found a matching runtime extension, and automatically installed it. Flatpak has a similar automatism built-in for matching the host GL drivers and for installing some hardware-specific libraries.

Of course, I could have just manually installed the theme, but this is much more convenient. To see what themes are available on flathub, use:

$ flatpak remote-ls flathub | grep theme

And now, when I run a flatpak app, for example GNOME Characters, it comes up with the Adapta theme:

Lessons learned from this quick experiment: Nightly apps don’t always work, but flatpak can handle themes just fine.

Update: It is probably worth a brief explanation why it is necessary to install the theme twice, on the host and for Flatpak. With Flatpak, apps can use different runtimes, and those may contain different versions of libraries like GTK+. The host system may be much more stable (think RHEL). So, for every "OS extension" (like fonts, themes, or drivers) we have to ask ourselves: will this work across possibly different library versions? For fonts, the answer we found is yes: font file formats have been stable forever, and it is very unlikely that a font you have on your system will not work with any app or runtime. Therefore, we make the host system fonts available inside flatpak sandboxes, and you don't have to install them again. For themes, we were not so confident: despite GTK+ 3 being stable now, we decided that it is better to have themes match the toolkit version in each runtime.

Fedora Atomic Workstation: Developer Tools, continued

Posted by Matthias Clasen on April 27, 2018 08:33 PM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

Last time, I wrote about using flatpak-builder to do commandline development in a container (namely, in a flatpak sandbox). Flatpak-builder is a pretty versatile and well-documented tool.

Of course, it works well to build desktop apps that already have a flatpak manifest in their git tree. But I have also used it successfully to build and run anything from a library to a session service.

For this reason, I suggested that we should add it to the default Fedora Workstation installation – it is a nice tool to have around.  When the Workstation SIG discussed this idea, it was rightly pointed out that there are quite a few dependencies that flatpak-builder pulls in: git, bzr, svn, meson, autotools, … Not surprising for a meta-build-tool that supports diverse source control and build systems.  But it does make the default install quite a bit heavier.

Maybe some of this can be fixed by turning 'fringe' dependencies like svn or bzr into Recommends and making flatpak-builder handle the lack of these tools gracefully. But there's an easier solution here: just use flatpak! It may not be the premier use case it was designed for, but flatpak can handle commandline apps just fine.

So we created a flatpak for flatpak-builder today, and made it available on flathub. You can get it with:

flatpak install flathub org.flatpak.Builder

As I’ve explained in an earlier post, you can get the familiar command name back by setting up a shell alias:

alias flatpak-builder=org.flatpak.Builder

And, voilà, flatpak-builder works as before! And all its dependencies are in the runtime that it uses (in this case, an SDK). My host system can stay lean and clean, as I like it.

At first I was surprised by this use of flatpak, but then I learned that there are already a few non-graphical applications on flathub. For example, you can install glxinfo from there (under the name org.freedesktop.GlxInfo), and vim is also available on flathub.

I suspect that we might see more commandline tools become available in this form in the future.

Some Native GTK Dialogs in LibreOffice

Posted by Caolán McNamara on April 16, 2018 03:34 PM
(Video: https://www.youtube.com/watch?v=wL3Rt4v11LY)
When the GTK3 backend is active in current LibreOffice master (towards 6.1) some of the dialogs are now comprised of fully native GTK dialogs and widgetery. Instead of VCL widgetery themed to look like GTK, they're the real thing.
(Video: https://www.youtube.com/watch?v=yOOQ-tOhgjE)

So for these dialogs this means, for example, that the animated effects for radio buttons and checkbuttons work, that the scrolling is overlay scrolling, and that the visual feedback that scrolling has reached its limit, or that more content is available outside the scrolling region, is shown. In the above demo, the GtkNotebook is the real thing.
(Video: https://www.youtube.com/watch?v=99v_2rIBL14)
I'm particularly pleased with the special character dialog, because it has a custom character grid widget whose accessibility support keeps working when reworked as a native gtk widget with custom drawing and interaction callbacks.

Fedora Atomic Workstation: Developer tools

Posted by Matthias Clasen on April 16, 2018 10:28 AM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

A while ago, I wrote about using GNOME Builder for GTK+ work on my Fedora Atomic Workstation. I’ve done this with some success since then. I am using the nightly builds of GNOME Builder from the sdk.gnome.org flatpak repository, since I like to try the latest improvements.

As these things go, sometimes I hit a bug. Recently, I ran into a memory leak that caused GNOME Builder to crash and burn. This was happening just as I was trying to take some screenshots for a blog post. So, what to do?

I figured that I could go back to using the commandline, without giving up the flatpak environment that I'm used to now, by using flatpak-builder, which is a commandline tool to build flatpak applications. In my opinion, it should come out-of-the-box with the Atomic Workstation image, just like other container tools. But that is not the case right now, so I used the convenient workaround of package layering:

$ rpm-ostree install flatpak-builder

flatpak-builder uses a json manifest that describes what and how to build. GTK+ is shipping manifests for the demo apps in its source tree already, for example this one:


These manifests are used in the GNOME gitlab instance to build testable flatpaks for merge requests, as can be seen here:


This is pretty amazing as a way to let interested parties (designers, translators, everybody) test suggested changes without having to go through a prolonged and painful build process of ever-changing dependencies (the jhbuild experience). You can read more about it in Carlos' and Jordan's posts.

For me, it means that I can just use one of these manifests as input to flatpak-builder to build GTK+:

$ flatpak-builder build \
    build-aux/flatpak/org.gtk.WidgetFactory.json

This produces a local build in the build/ directory, and I can now run commands in a flatpak sandbox that is populated with the build results like this:

$ flatpak-builder --run build \
    build-aux/flatpak/org.gtk.WidgetFactory.json \
    gtk4-widget-factory

A few caveats are in order when you are using flatpak-builder for development:

flatpak-builder will complain if the build/ directory already exists, so for repeated building, you should add the --force-clean option.

The manifest we are using here refers to the main GTK+ git repository, and will create a clean checkout from there, ignoring local changes in your checkout. To work around this, you can replace the https url pointing at the git repository with a file: url pointing at your checkout:

 "url": "file:///home/mclasen/Sources/gtk"

You still have to remember to create a local commit for all the changes you want to go into the build. I have suggested that flatpak-builder should support a different kind of source to make this a little easier.

Once you have the basic setup working, things should be familiar. You can get a shell in the build sandbox by using ‘sh’ as the command:

$ flatpak-builder --run build \
    build-aux/flatpak/org.gtk.WidgetFactory.json \
    sh

flatpak-builder knows to use the sdk as runtime when setting up the sandbox, so tools like gdb are available to you. And the sandbox has access to the display server, so you can run graphical apps without problems.

In the end, I got my screenshots of the font chooser, and this setup should keep me going until GNOME Builder is back on track.

A font update

Posted by Matthias Clasen on April 14, 2018 12:37 AM

At the end of March I spent a few days with the Inkscape team, who were nice enough to come to the Red Hat Boston office for their hackfest. We discussed many things, from the GTK3 port of Inkscape, to SVG and CSS, but we also spent some time on one of my favorite topics: fonts.

Font Chooser

One thing Tav showed me which I was immediately envious of is the preview of OpenType features that Inkscape has in its font selector.

Clearly, we want something like that in the GTK+ font chooser as well. So, after coming back from the hackfest, I set out to see if I could get this implemented. This is how far I've gotten so far; it is available in GTK+ master.

This really helps with understanding which glyphs are affected by a font feature. I would like to add a preview for ligatures as well, but harfbuzz currently does not offer any API to get at the necessary information (understandably — its main focus is applying font features for shaping) and I'm not prepared to parse those font tables myself in GTK+. So, ligatures will have to wait a bit.

Another thing I would like to explore at some point is possible approaches for letting users apply font features to smaller fragments of text, like a headline or a single word. This could be a ‘font tweak’ dialog or panel. If you have suggestions or ideas for this, I’d love to hear them.

At the request of the Inkscape folks, I’ve also explored a backport of the new font chooser capabilities to the 3.22 branch, but since this involves new API, we’re not sure yet which way to go with this.

Font Browser

While doing font work it is always good to have a supply of featureful fonts, so I end up browsing the Google web fonts quite a bit.

Recently, I stumbled over a nice-looking desktop app for doing so, but alas, it wasn’t available as a package, and it is written in rust, which I know very little about (I’m hoping to change that soon, but that’s a topic for another post).

But I’ve mentioned this app on #flatpak, and just a few days later, it appeared on flathub, and thus also in GNOME software on my system, just a click away. So nice of the flathub team!

The new flathub website is awesome, btw. Go check it out.

The best part is that this nice little app is now available not just on my bleeding-edge Fedora Atomic Workstation, but also on Ubuntu, Gentoo and even RHEL, thanks to flatpak.  Something we could only dream of a few years ago.


GNOME 3.28 uses clickfinger behaviour by default on touchpads

Posted by Peter Hutterer on April 08, 2018 10:14 PM

To reduce the number of bugs filed against libinput consider this a PSA: as of GNOME 3.28, the default click method on touchpads is the 'clickfinger' method (see the libinput documentation, it even has pictures). In short, rather than having a separate left/right button area on the bottom edge of the touchpad, right or middle clicks are now triggered by clicking with 2 or 3 fingers on the touchpad. This is the method macOS has been using for a decade or so.

Prior to 3.28, GNOME used the libinput defaults, which vary depending on the hardware (e.g. mac touchpads default to clickfinger, most other touchpads usually to button areas). So if you notice that the right button area disappeared after the 3.28 update, either start using clickfinger or reset using gnome-tweak-tool. There are gsettings commands that achieve the same thing if gnome-tweak-tool is not an option:

$ gsettings range org.gnome.desktop.peripherals.touchpad click-method
$ gsettings get org.gnome.desktop.peripherals.touchpad click-method
$ gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'

For reference, the upstream commit is in gsettings-desktop-schemas.

Note that this only affects so-called ClickPads, touchpads where the entire touchpad is a button. Touchpads with separate physical buttons in front of the touchpad are not affected by any of this.

Linux kernel lockdown and UEFI Secure Boot

Posted by Matthew Garrett on April 05, 2018 01:07 AM
David Howells recently published the latest version of his kernel lockdown patchset. This is intended to strengthen the boundary between root and the kernel by imposing additional restrictions that prevent root from modifying the kernel at runtime. It's not the first feature of this sort - /dev/mem no longer allows you to overwrite arbitrary kernel memory, and you can configure the kernel so only signed modules can be loaded. But the present state of things is that these security features can be easily circumvented (by using kexec to modify the kernel security policy, for instance).

Why do you want lockdown? If you've got a setup where you know that your system is booting a trustworthy kernel (you're running a system that does cryptographic verification of its boot chain, or you built and installed the kernel yourself, for instance) then you can trust the kernel to keep secrets safe from even root. But if root is able to modify the running kernel, that guarantee goes away. As a result, it makes sense to extend the security policy from the boot environment up to the running kernel - it's really just an extension of configuring the kernel to require signed modules.

The patchset itself isn't hugely conceptually controversial, although there's disagreement over the precise form of certain restrictions. But one patch has been, because it associates whether or not lockdown is enabled with whether or not UEFI Secure Boot is enabled. There's some backstory that's important here.

Most kernel features get turned on or off by either build-time configuration or by passing arguments to the kernel at boot time. There's two ways that this patchset allows a bootloader to tell the kernel to enable lockdown mode - it can either pass the lockdown argument on the kernel command line, or it can set the secure_boot flag in the bootparams structure that's passed to the kernel. If you're running in an environment where you're able to verify the kernel before booting it (either through cryptographic validation of the kernel, or knowing that there's a secret tied to the TPM that will prevent the system booting if the kernel's been tampered with), you can turn on lockdown.
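
Conceptually, the decision boils down to something like the sketch below. This is not the patchset's actual code; the struct and lockdown_enable() here are simplified stand-ins:

#include <stdbool.h>
#include <stdio.h>

/* Conceptual sketch only, not the patchset's actual code: the two ways
 * a bootloader can request lockdown, as described above. */
struct boot_params {
    bool secure_boot; /* set by the bootloader in the bootparams */
};

static bool lockdown_cmdline; /* set by a "lockdown" command line flag */

static void
lockdown_enable(const char *why)
{
    printf("Kernel is locked down from %s\n", why);
    /* ... impose the runtime restrictions ... */
}

static void
decide_lockdown(const struct boot_params *bp)
{
    if (lockdown_cmdline)
        lockdown_enable("command line");
    else if (bp->secure_boot)
        lockdown_enable("EFI Secure Boot mode");
}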

There's a catch on UEFI systems, though - you can build the kernel so that it looks like an EFI executable, and then run it directly from the firmware. The firmware doesn't know about Linux, so can't populate the bootparam structure, and there's no mechanism to enforce command lines so we can't rely on that either. The controversial patch simply adds a kernel configuration option that automatically enables lockdown when UEFI secure boot is enabled and otherwise leaves it up to the user to choose whether or not to turn it on.

Why do we want lockdown enabled when booting via UEFI secure boot? UEFI secure boot is designed to prevent the booting of any bootloaders that the owner of the system doesn't consider trustworthy[1]. But a bootloader is only software - the only thing that distinguishes it from, say, Firefox is that Firefox is running in user mode and has no direct access to the hardware. The kernel does have direct access to the hardware, and so there's no meaningful distinction between what grub can do and what the kernel can do. If you can run arbitrary code in the kernel then you can use the kernel to boot anything you want, which defeats the point of UEFI Secure Boot. Linux distributions don't want their kernels to be used as part of an attack chain against other distributions or operating systems, so they enable lockdown (or equivalent functionality) for kernels booted this way.

So why not enable it everywhere? There's a couple of reasons. The first is that some of the features may break things people need - for instance, some strange embedded apps communicate with PCI devices by mmap()ing resources directly from sysfs[2]. This is blocked by lockdown, which would break them. Distributions would then have to ship an additional kernel that had lockdown disabled (it's not possible to just have a command line argument that disables it, because an attacker could simply pass that), and users would have to disable secure boot to boot that anyway. It's easier to just tie the two together.

The second is that it presents a promise of security that isn't really there if your system didn't verify the kernel. If an attacker can replace your bootloader or kernel then the ability to modify your kernel at runtime is less interesting - they can just wait for the next reboot. Appearing to give users safety assurances that are much less strong than they seem to be isn't good for keeping users safe.

So, what about people whose work is impacted by lockdown? Right now there's two ways to get stuff blocked by lockdown unblocked: either disable secure boot[3] (which will disable it until you enable secure boot again) or press alt-sysrq-x (which will disable it until the next boot). Discussion has suggested that having an additional secure variable that disables lockdown without disabling secure boot validation might be helpful, and it's not difficult to implement that so it'll probably happen.

Overall: the patchset isn't controversial, just the way it's integrated with UEFI secure boot. The reason it's integrated with UEFI secure boot is because that's the policy most distributions want, since the alternative is to enable it everywhere even when it doesn't provide real benefits but does provide additional support overhead. You can use it even if you're not using UEFI secure boot. We should have just called it securelevel.

[1] Of course, if the owner of a system isn't allowed to make that determination themselves, the same technology is restricting the freedom of the user. This is abhorrent, and sadly it's the default situation in many devices outside the PC ecosystem - most of them not using UEFI. But almost any security solution that aims to prevent malicious software from running can also be used to prevent any software from running, and the problem here is the people unwilling to provide that policy to users rather than the security features.
[2] This is how X.org used to work until the advent of kernel modesetting
[3] If your vendor doesn't provide a firmware option for this, run sudo mokutil --disable-validation


Fedora Atomic Workstation: Beta

Posted by Matthias Clasen on April 04, 2018 02:08 AM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

The Beta release of Fedora 28 is out, and it contains the usual assortment of good stuff:

  • better battery life
  • Thunderbolt support
  • guest additions make Fedora work better in VirtualBox
  • the latest GNOME release brings polish and new applications
  • updated versions of tons of popular software

The announcement highlighted all the ways in which you can try it out: there are isos for Workstation, Server and Atomic Host. One thing it forgot to point out is that you can also try Fedora 28 Beta in the form of the Atomic Workstation.

How do you get it ?

To install Fedora Atomic Workstation from scratch, use this iso:


If you already have an Atomic Workstation installation of Fedora 27, you can jump to Fedora 28 Beta with the rpm-ostree rebase command:

# rpm-ostree rebase atomic:fedora/28/x86_64/workstation

This assumes you already have an ostree remote pointing at the Fedora atomic repository. If not, this command creates one:

# ostree remote add --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-28-primary atomic https://kojipkgs.fedoraproject.org/atomic/repo
What's new?

Besides all the general Fedora 28 news, there are some improvements that are specific to the Atomic Workstation.

The version of GNOME Software that is included in the Beta is now able to update the OS image. This is an outcome of our effort to close the feature gaps between the traditional and Atomic Workstation variants. I hope we can close a few more before Fedora 28 is released.

Where to learn more?

The Project Atomic website has lots of useful information about Atomic and related technologies, such as this Introduction to Atomic Workstation.

And the #atomic IRC channel on Freenode is a good place to get answers to Atomic Workstation questions. Feel free to come by!

Update: Don’t forget that a fresh Atomic Workstation installation currently comes without any pre-configured flatpak remotes. You need to add one before GNOME Software can show you flatpak apps. The flathub remote can be added by clicking on this link:


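If you prefer the terminal, the same remote can be added with the flatpak CLI (the URL below is the standard Flathub repo file):

flatpak remote-add --user --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
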

The LVFS CDN will change soon

Posted by Richard Hughes on April 03, 2018 12:48 PM

tl;dr: If you have https://s3.amazonaws.com/lvfsbucket/downloads/firmware.xml.gz in /etc/fwupd/remotes.d/lvfs.conf then you need to nag your distribution to update the fwupd package to 1.0.6.

Slightly longer version:

The current CDN (~$100/month) is kindly sponsored by Amazon, but that won’t last forever and the donations I get for the LVFS service don’t cover the cost of using S3. Long term we are switching to a ‘dumb’ provider (currently BunnyCDN) which works out to about 1/10th of the cost — which is covered by the existing kind donations. We’ve switched to use a new CNAME to make switching CDN providers easy in the future, which means this should be the only time this changes client side.

If you want to patch older versions of fwupd, you can apply this commit.

LVFS Mailing List

Posted by Richard Hughes on March 23, 2018 07:29 PM

I have created a new low-volume lvfs-announce mailing list for the Linux Vendor Firmware Service, which will only be used to make announcements about new features and planned downtime. If you are interested in what’s happening with the LVFS you can subscribe here. If you need to contact me about anything LVFS-related, please continue to email me (not the mailing list) as normal.

Fedora Atomic Workstation: Almost fool-proof

Posted by Matthias Clasen on March 22, 2018 06:53 PM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

I’ve had a little adventure with my Fedora Atomic Workstation this morning and almost missed a meeting because I couldn’t get to a desktop session.
I’ve been using the rawhide branch of Fedora Atomic Workstation to keep up to speed with the latest developments in Fedora. As is to be expected of rawhide, it recently stopped getting me to a login screen (much less a working desktop session). I just booted back into my working image and ignored the problem for a few days.

The Adventure begins

But since it didn’t go away by itself, I decided yesterday to see if I could debug it a bit. Looking at the journal for the last unsuccessful boot gave some hints:

gnome-shell[2934]: Failed to create backend: Failed to initialize renderer: Missing extension for GBM renderer: EGL_KHR_platform_gbm
gnome-session-binary[2920]: WARNING: App 'org.gnome.Shell.desktop' exited with code 1
gnome-session-binary[2920]: Unrecoverable failure in required component org.gnome.Shell.desktop

Poking the nearest graphics team member about this, I was asked to provide the output of eglinfo in this situation. Since I had an hour to spare before the meeting, I booted back into the broken image in runlevel 3, logged in on a vt, … and found that eglinfo is not in the OS image.

Well, that’s easy enough to fix on an Atomic system, using package layering:

rpm-ostree install egl-utils

After that, I proceeded to reboot to get to the OS image with the newly added layer, and when I got to the boot prompt, I realized my mistake: rpm-ostree never replaces the booted image, since it (reasonably) assumes that the booted image is ‘working’.  But it only keeps two images around, so it had to replace the other one – which was the image which successfully boots to my desktop.

Now, at the boot prompt, I was faced with the choice between

  • the broken image
  • the broken image + egl-utils

Ugh. Not what I had hoped for. And my meeting starts in 50 minutes. Admittedly, this was entirely my fault. rpm-ostree behaved as it should and as documented. Since it is a snow day, I need to do the meeting from home and need a web browser for that.

So, what can be done? I remembered that ostree is ‘like git for binaries’, so there should be history, right? After some fiddling with the ostree commandline, I found the log command that shows me the history of my local repository. But sadly, the output was disappointing:

$ ostree log fedora/rawhide/x86_64/workstation
commit fa09fd6d2551a501bcd3670c84123a22e4c704ac30d9cb421fa76821716d8c20
ContentChecksum: 74ff34ccf6cc4b7554d6a8bb09591a42f489388ba986102f6726f9e662b06fcb
Date: 2018-03-20 10:27:42 +0000
Version: Rawhide.20180320.n.0
(no subject)

<< History beyond this commit not fetched >>

rpm-ostree defaults to only keeping the latest commit in the local repository, a bit like a shallow git clone. Thankfully, just like git, ostree is versatile, and a bit more searching brought me to the pull command and its --depth option:

# ostree pull --depth=5 onerepo fedora/rawhide/x86_64/workstation

Receiving metadata objects: 698/(estimating) 2.2 MB/s 23.7 MB

This command writes to the local repo in /sysroot/ostree/repo and thus needs to be run as root.

Now ostree log showed a few older commits. I had to bump the depth a few times to find the last working commit. Then, I made that commit available for booting into again, using the deploy command:

# ostree admin deploy 76723f34b8591434fd9ec0

where that hex string is a prefix of the commit ID of the last working commit.  This command also needs to be run as root.
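
To double-check that the old commit is now deployed, listing the deployments before rebooting should show it (also run as root):

# ostree admin status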

Now a quick reboot, and… the boot loader menu had an entry for the working image again. I made it back to my desktop with 5 minutes to spare before the meeting. Phew!

Update: Since you might be wondering, the output of eglinfo was:

eglinfo: eglInitialize failed

Fedora Atomic Workstation: Ruling the commandline

Posted by Matthias Clasen on March 16, 2018 06:49 PM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

In my recent posts, I’ve mostly focused on finding my way around with GNOME Builder and using it to do development in Flatpak sandboxes. But I am not really the easiest target audience for an IDE like GNOME Builder, having spent most of my life on the commandline with tools like vim and make.

So, what about the commandline in an Atomic Workstation environment? There are many container tools, like buildah, atomic, oc, podman, and so on. I am not going to talk about these, since I don’t know them very well, and they are covered, e.g. on www.projectatomic.io.

But there are a few commands that are essential to life on the Atomic Workstation: rpm-ostree and flatpak.


First of all, there’s rpm-ostree, which is the commandline frontend to the rpm-ostreed daemon that manages the OS image(s) on the Atomic Workstation.

You can run

rpm-ostree status

to get some information about your OS image (and the other images that may be present on your system). And you can run

rpm-ostree upgrade

to get the latest update for your OS image (the terminology clash here is a bit unfortunate; rpm-ostree calls an upgrade what most Linux distros and packaging tools call an update).

You can run this command as a normal user in a terminal, and rpm-ostreed will present you with a polkit dialog to do its privileged operations. Recently, rpm-ostreed has also gained the ability to check for and deploy upgrades automatically.
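
The automatic checking is driven by the daemon’s configuration file; a minimal sketch, assuming your rpm-ostree version already supports the ‘check’ policy:

# /etc/rpm-ostreed.conf
[Daemon]
AutomaticUpdatePolicy=check

followed by a systemctl restart rpm-ostreed so the daemon picks up the change.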

An important thing to keep in mind is that rpm-ostree never changes your running system. You have to reboot into the new image to see the changes, so

systemctl reboot

should be in your repertoire of commands as well. Alternatively, you can use the --reboot option to tell rpm-ostree to reboot when the upgrade command completes.
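
Put together, a one-liner for the whole update-and-reboot cycle looks like this:

rpm-ostree upgrade --reboot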


The other essential command is flatpak. Where rpm-ostree controls your OS image, flatpak rules the applications. flatpak has many commands that are worth exploring; I’ll only mention the most important ones here.

It is quite common to have more than one source for flatpaks enabled.

flatpak remotes

lists them all. If you want to find applications, then

flatpak search

will do that for you, and

flatpak install

will let you install what you found. An important detail to point out here is that applications can be installed system-wide (in /var) or per-user (in ~/.local/share). You can choose the location with the --user and --system options. If you choose to install system-wide, you will get a polkit prompt, since this is a privileged operation.
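
For example, a per-user install of gitg from the Flathub remote (assuming that remote is configured under the name flathub):

flatpak install --user flathub org.gnome.gitg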

After installing applications, you should keep them up-to-date by installing updates. The most straightforward way to do so is to just run

flatpak update

which will install available updates for all applications. To just check if updates are available, you can use

flatpak remote-ls --updates
Launching applications

Probably the most important thing you will want to do with flatpak is to run applications. Unsurprisingly, the command to do so is called run, and it expects you to specify the unique application ID:

flatpak run org.gnome.gitg

This is certainly a departure from the traditional commandline, and could be considered cumbersome (even though it has bash completion for the application ID).

Thankfully, flatpak has recently gained a way to recover the familiar interface. It now installs shell wrappers for the flatpak run command in ~/.local/share/flatpak/bin. After adding that directory to your PATH, you can run gitg like this:

org.gnome.gitg
If (like me), you are still not satisfied with this, you can add a shell alias to get the traditional command name back:

alias gitg=org.gnome.gitg

Now gitg works again, as it used to. Nice!
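
For reference, the PATH addition this relies on might look like this in a bash-style shell startup file (a sketch; the directory is the one named above):

export PATH="$PATH:$HOME/.local/share/flatpak/bin"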


Fedora Atomic Workstation: Trying things out the easy way

Posted by Matthias Clasen on March 08, 2018 01:17 PM

Note: Fedora Atomic Workstation has recently been renamed to Team Silverblue. Learn more here.

If you’ve followed my posts about my first steps with Fedora Atomic Workstation, you may have seen that I’ve had to figure out how to make codecs work in the firefox that is included in the OS image. And then I had to work around some issues with the layered packages that I used for that when I rebased to a newer OS.

But maybe there is a better way to go about this?

Flatpak is the preferred way to install applications on the Atomic Workstation, so maybe that is the way to go. So far, all the flatpaks I’ve installed have come from Flathub, which is a convenient place to find all sorts of desktop apps. But firefox is not there (yet).

Thankfully, flatpak is fundamentally decentralized by design. There is nothing special about Flathub, flatpak can easily install apps from many different sources. A quick search yielded this unofficial firefox flatpak repository, with easy-to-follow instructions. Just clicking on this link will also work. If you open it with a sandboxed firefox, you will see the URI portal in action:

One nice aspect of this is that the nightly firefox is not just a one-off package that I’ve installed, it will receive updates just like any other app on my system. An important detail when it comes to web browsers!

Another nice aspect is that I did not have to remove the firefox that is already installed (I couldn’t anyway, since it is part of the immutable OS image). The two can peacefully coexist.

This is true not just for firefox, but in general: by isolating apps and their dependencies from each other, flatpak makes it very easy to try out different versions of apps without pain.

Want to try some bleeding-edge GNOME apps instead of the stable versions that are available on Flathub? No problem, just enable the GNOME nightly repository like this:

flatpak remote-add --user --from gnome-apps-nightly https://sdk.gnome.org/gnome-apps-nightly.flatpakrepo

And then the nightly apps will be available for installation next to their stable counterparts. Note that GNOME Software will indicate the source for each app in the search results.
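
For example (the app ID here is a guess — use whichever nightly app the remote actually carries):

flatpak install --user gnome-apps-nightly org.gnome.Recipes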

Trying out new software has never been safer and easier!

Sorry for the wrong-colored screenshots in this post – there is still a price to pay for living on the bleeding edge of rawhide, even if Atomic Workstation makes it much safer.

native GTK3 message dialogs

Posted by Caolán McNamara on March 04, 2018 04:08 PM
In LibreOffice 6.1, when the GTK3 backend is in use, the message dialogs are now native GTK3 message dialogs rather than vcl message dialogs using GTK theming.

So they are now true GTK dialogs and they look like...

while before they looked like...

We have 475 message dialogs directly instantiated in LibreOffice and 89 indirectly instantiated via GtkBuilder .ui files.

We've changed the format in which our dialog (and other) translations are stored, from our own legacy custom format to the more commonplace gettext .mo format. In the case of a message dialog described via a GtkBuilder .ui file, we now use GTK's own GtkBuilder to load the .ui, and GTK's own gettext integration to localize it from our new .mo format translations.

Recipes hackfest, day 1

Posted by Matthias Clasen on March 01, 2018 03:43 AM

It has been a bit quiet around GNOME recipes recently, since most of us have other obligations. But this is about to change; we’re currently having a hackfest about GNOME recipes in Jogyakarta, Indonesia, and we’ve already made some interesting plans for future work in this app.

The hackfest takes place at the AMIKOM university in Jogyakarta.


Before the event started, we had an outreach day where our group met some of the students and talked to them about GNOME and what we do. I was unfortunately not able to make it in time for that part, but I hear that it was a good day.


The hackfest itself started on Wednesday. We collected a list of goals and tasks, and started the day with a walkthrough/review of both GNOME recipes and the Endless cooking app.


A good part of the first day was spent discussing the storage layer of recipes. What we have now is very home-grown and primitive. I was hoping for somebody to show up and write a proper storage layer for me. That hasn’t happened …until now.

The outcome of our discussion is a plan to use the technologies from the Endless knowledge libs to store recipes and other data. Some adaptations will be necessary since most Endless knowledge apps are basically readonly, in contrast to GNOME recipes, which has a strong focus on creating and editing recipes. But we have a fairly concrete plan for what needs to happen.

And to not lose the momentum, we’ve set ourselves a goal to present on this at GUADEC. No pressure!


The other big topic we tackled on the first day was encouraging more recipe contributions. We had a good start last year, when we collected around 60 recipes, but it would be good to have some more. We’ve come up with several ideas for this.

Local recipes

One idea is that we will start accepting recipes in languages other than English, since translating a recipe into English is a big extra hurdle that makes contributing harder.

There were several fascinating tangents in this discussion. One thing we discussed at some length is the challenges of translating recipes. To do a good job at that takes a lot more effort than just translating the words; you may have to substitute ingredients for what is available in other parts of the world, and find out about local variations of ingredients, and so on. And to add an extra twist to this, the translation here will go from a local language to English, which is the opposite of what our translation teams normally do.

The exact form that this will take is not quite decided yet. We may end up introducing a concept like “recipe packs”, say: “Indonesian recipes”, and those might only be available in a local language.

A contest

Another idea for contributions is to have a contest for contributing recipes. The details will have to be worked out, but the prizes could include a prominent spot in the ‘featured chefs’ section, or having your recipe featured on the front page.

Cook it!

We can do a contest online, but since this is about cooking and food, it might also be very nice to do this as a local get-together. This could not only include entering recipes together, but also cooking them, taking photos, and eating them. We will try this idea in one or two locations.

A hackfest for food – maybe that’s a cookfest?

We ended this very productive first day with a very pleasant dinner. On to day 2!


Searching for hardware on the LVFS

Posted by Richard Hughes on February 28, 2018 03:31 PM

The LVFS acquired another new feature today.

You can now search for firmware and hardware vendors — but the algorithm is still very much WIP and we need some real searches from real users. If you have a spare 10 seconds, please search for your hardware on the LVFS. I’ll be fixing up the algorithm as we find problems. I’ll also be using the search data to work out what other vendors we need to reach out to. Comments welcome.

python-libevdev - a python wrapper for libevdev

Posted by Peter Hutterer on February 26, 2018 01:20 AM
Edit 2018-02-26: renamed from libevdev-python to python-libevdev. That seems to be a more generic name and easier to package.

Last year, just before the holidays, Benjamin Tissoires and I worked on a 'new' project - python-libevdev. This is, unsurprisingly, a Python wrapper to libevdev. It's not exactly new since we took the git tree from 2016 when I was working on it the first time round, but this time we whipped it into better shape. Now it's at the point where I think it has the API it should have: pythonic and very easy to use, but still with libevdev as the actual workhorse in the background. It's available via pip3 and should be packaged for your favourite distributions soonish.
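
A sketch of the pip route, assuming the package is published on PyPI under the name libevdev:

pip3 install --user libevdev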

Who is this for? Basically anyone who needs to work with the evdev protocol. While C is still a thing, there are many use-cases where Python is a much more sensible choice. The python-libevdev documentation on ReadTheDocs provides a few examples which I'll copy here, just so you get a quick overview. The first example shows how to open a device and then continuously loop through all events, searching for button events:

import libevdev

fd = open('/dev/input/event0', 'rb')
d = libevdev.Device(fd)
if not d.has(libevdev.EV_KEY.BTN_LEFT):
    print('This does not look like a mouse device')

# Loop indefinitely while pulling the currently available events off
# the file descriptor
while True:
    for e in d.events():
        if not e.matches(libevdev.EV_KEY):
            continue

        if e.matches(libevdev.EV_KEY.BTN_LEFT):
            print('Left button event')
        elif e.matches(libevdev.EV_KEY.BTN_RIGHT):
            print('Right button event')
The second example shows how to create a virtual uinput device and send events through that device:

import libevdev

d = libevdev.Device()
d.name = 'some test device'
# enable the event bits before creating the uinput device
# (assumption: the EV_REL axes, matching the events sent below)
d.enable(libevdev.EV_REL.REL_X)
d.enable(libevdev.EV_REL.REL_Y)

uinput = d.create_uinput_device()
print('new uinput test device at {}'.format(uinput.devnode))
events = [libevdev.InputEvent(libevdev.EV_REL.REL_X, 1),
          libevdev.InputEvent(libevdev.EV_REL.REL_Y, 1),
          libevdev.InputEvent(libevdev.EV_SYN.SYN_REPORT, 0)]
uinput.send_events(events)
And finally, if you have a textual or binary representation of events, the evbit function helps to convert it to something useful:

>>> import libevdev
>>> print(libevdev.evbit(0))
EV_SYN
>>> print(libevdev.evbit(2))
EV_REL
>>> print(libevdev.evbit(3, 4))
ABS_RY
>>> print(libevdev.evbit('EV_ABS'))
EV_ABS
>>> print(libevdev.evbit('EV_ABS', 'ABS_X'))
ABS_X
>>> print(libevdev.evbit('ABS_X'))
ABS_X
The latter is particularly helpful if you have a script that needs to analyse event sequences and look for protocol bugs (or hw/fw issues).

More explanations and details are available in the python-libevdev documentation. That doc also answers the question why python-libevdev exists when there's already a python-evdev package. The code is up on github.

Fedora Atomic Workstation on the road

Posted by Matthias Clasen on February 25, 2018 03:21 AM

I am currently travelling with my laptop, having recently switched it to Fedora Atomic Workstation and then rebased it to rawhide.

While I am sitting by the pool, I have some time to ponder: how has it been going?

I have upgraded to a newer image a few times now, and I won’t lie: rawhide is still rawhide. Sometimes, the new image was a disappointment and would not give me a working system. In one case, it hung before getting to the login screen. The typical rawhide experience.

In the past, hitting such a bad compose meant a painful struggle to recover: Boot into single user-mode, do some live debugging, try to find a few problematic packages to downgrade, work around breakage, etc. You end up with a half-working system in a mixed state, and hope that the next update will get you back on track.

An ostree-based system like Fedora Atomic Workstation makes the recovery painless. In the case I mentioned, I simply rebooted, selected my previous, working image from the grub menu, and was back to a working system in 3 minutes. No time lost.

The tree that didn’t work is still present on my system. If I wanted, I could make changes to it in an attempt to understand and fix the problem and try booting it again. Having both trees on the system is useful even if I just want to report the problem, since rpm-ostree can tell me what exactly changed between the working and the broken tree:

$ rpm-ostree db diff d15a65950972 21ac24694b6e
ostree diff commit old: d15a65950972
ostree diff commit new: 21ac24694b6e
 apr 1.6.3-5.fc28 -> 1.6.3-6.fc29
 atomic 1.21.1-1.fc28 -> 1.22.1-1.fc29

(I found the checksums to pass to rpm-ostree db diff by looking for the Commit fields in the output of rpm-ostree status)
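
A trivial way to narrow the status output down to just those fields:

rpm-ostree status | grep Commit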

Or I can simply wait for the next compose to see if rawhide ‘fixed itself’. Even if it takes more than one compose before things get back to working, this is a safe thing to do: ostree will never replace the current image that I am booted into. This makes it entirely safe to try newer composes until I find one that works for me.

It would of course be nice if broken composes never made it to my system. The current efforts at ‘gating’ updates in Fedora should get us to a place where we detect non-booting composes before they get sent out to users. But even in this (hopefully not too distant) future, the easy rollback with Atomic Workstation provides a useful safety net.

Automated QA will never be able to detect every problem that could affect my workflow. Maybe the system boots fine, but frequently loses network connections, or maybe it can’t handle my dual-monitor setup, or something else… I may only discover the problem hours later. But up until I run rpm-ostree upgrade again, I can just reboot back into my previous image.

ostree provides fearless upgrades.

Just right for the poolside.

Fedora Atomic Workstation: Works on the beach

Posted by Matthias Clasen on February 22, 2018 02:41 AM

My trip is getting really close, so I decided to upgrade my system to rawhide. Wait, what ? That is usually what everybody would tell you not to do. Rawhide has this reputation for frequent breakage, and who knows if my apps will work any given day. Not something you want to deal with while traveling.


With rpm-ostree, installing a newer OS is very similar to updating your current OS: a new image is downloaded in the background, and when the download is complete, you boot into the new image. The previous image is still available to boot back into, as a safety net.  That is the reason that I felt confident enough to try this a day before a major trip:

rpm-ostree rebase \
systemctl reboot

I would love to say that things went perfectly and I was back to a working system in 10 minutes. But it was not quite as easy, and I did encounter a few (solvable) problems. It is worth pointing out that while I was solving these problems, rpm-ostree had already downloaded all of the rawhide image, but I was still safely running my F27 OS. At no point was there a mess of a half-upgraded system with a mix of old and new rpms. I was running my old system until I had solved all the problems and had an OS image that was ready, and then I booted into it. A safe, atomic switch.

Problem 1: The rpmfusion repo is not available for f28 yet. It is a common occurrence that 3rd party repositories lag behind the Fedora releases a bit, so this is not surprising. It is a bit unfortunate that I had to remove my layered rpms from that repository to work around this.

Problem 2: buildah is now in the base image. This is a good thing, of course, but it caused rpm-ostree to complain about the conflict between the OS image and the layered package. In this case, I removed the layered rpm without any qualms.

Problem 3: Rawhide repositories had a bad day. For some reason, they were missing the repomd.xml file today.

This is a good reminder that as long as you are using package layering, you haven’t really left the world of yum repositories and out-of-sync mirrors behind. rpm-ostree has to check the yum repositories for updates to the layered packages, which means that it can be hit by the same issues as dnf on a traditional Fedora workstation.

For my rebase to proceed, I had to remove everything that was layered on top of the OS image. After I did that, rpm-ostree no longer needed to look at yum repositories, and switched my system to the already-downloaded rawhide image.

After the reboot, I’m now running rawhide… and all my applications are just the same as they were before. A nice aspect of the Atomic Workstation approach is that (flatpak) applications are decoupled from the OS. I can update the one without the other. We are not entirely there yet: as you can see in the screenshot below, a number of applications are still installed as part of the OS image.

More importantly, the screenshot shows that GNOME software will support updating the OS on the Atomic Workstation in Fedora 28. It does so by talking to the rpm-ostree daemon.

Switching from one Fedora release to the next has already been working pretty well for the last few releases. With the Atomic Workstation, it can become as undramatic as installing the latest updates.

One could almost do it on the beach.

Fedora Atomic Workstation for development

Posted by Matthias Clasen on February 16, 2018 11:47 PM

I’m frequently building GTK+. Since I am using Fedora Atomic Workstation now, I have to figure out how to do GTK+ development in this new environment. GTK+ may be a good example for the big middle ground of things that are not desktop applications, but also not part of the OS itself.


Last week I figured out how to use a buildah container to build release tarballs for GNOME modules, and I actually used that setup to produce a GTK+ release as well.

But for fixing bugs and other development, I generally need to run test cases and demo apps, like the venerable gtk-demo. Running these outside the container does not work, since the GTK+ libraries I built are linked against libraries that are installed inside the container and not present on the host, such as libvulkan. I could of course resort to package layering to install them on the host, but that would miss the point of using Atomic Workstation.

The alternative is running the demo apps inside the container, which should work – it’s the same filesystem that they were built in. But they can’t talk to the compositor, since the Wayland socket is on the outside: /run/user/1000/wayland-0. I tried to work around this by making the socket visible in the container, but my knowledge of container tools and buildah is too limited to make it work. My apps still complain about not being able to open a display connection.

What now? I decided that while GTK+ is not a desktop application, I can treat my test apps like one and write a flatpak manifest for them. This way, I can use GNOME Builder’s awesome flatpak support to build and run them, like I already did for GNOME Recipes.

Here is a minimal flatpak manifest that works:

  "id" : "org.gtk.gtk-demo",
  "runtime" : "org.gnome.Sdk",
  "runtime-version" : "master",
  "sdk" : "org.gnome.Sdk",
  "command" : "gtk4-demo",
  "finish-args" : [
  "modules" : [
     "name" : "graphene",
     "buildsystem" : "meson",
     "builddir" : true,
     "sources" : [
         "type" : "git",
         "url" : "https://github.com/ebassi/graphene.git"
     "name" : "gtk+",
     "buildsystem" : "meson",
     "builddir" : true,
     "sources" : [
         "type" : "git",
         "url" : "https://gitlab.gnome.org/GNOME/gtk.git"

After placing this JSON file into the toplevel directory of my GTK+ checkout, it appears as a new build configuration in GNOME Builder:

If you look closely, you’ll notice that I added another manifest, for gtk4-widget-factory. You can have multiple manifests in your tree, and GNOME Builder will let you switch between them in the Build Preferences.

After all this preparation, I can now hit the play button and have my demo app run right from inside GNOME Builder. Note that the application is running inside a flatpak sandbox, using the runtime that was specified in the Build Preferences, so it is cleanly separated from the OS. And I can easily build and run against different runtimes, to test compatibility with older GNOME releases.

This may be the final push that makes me switch to GNOME Builder for day-to-day development on Fedora Atomic Workstation: It just works!

LVFS will block old versions of fwupd for some firmware

Posted by Richard Hughes on February 16, 2018 11:59 AM

The ability to restrict firmware to specific versions of fwupd and the existing firmware version was added to fwupd in version 0.8.0. This functionality was added so that you could prevent the firmware being deployed if the upgrade was going to fail, either because:

  • The old version of fwupd did not support the new hardware quirks
  • If the upgraded-from firmware had broken upgrade functionality

The former is solved by updating fwupd, the latter is solved by following the vendor procedure to manually flash the hardware, e.g. using a DediProg to flash the EEPROM directly. Requiring a specific fwupd version is used by the Logitech Unifying receiver update, for example, and requiring a previous minimum firmware version is used by one (soon to be two…) laptop OEM at the moment.

Although fwupd 0.8.0 was released over a year ago, it seems people are still downloading firmware with older fwupd versions. 98% of the downloads from the LVFS are initiated from gnome-software, and 2% from people using the fwupdmgr command line or manually downloading the .cab file from the LVFS with a browser.

At the moment, fwupd is being updated in Ubuntu xenial to 0.8.3 but it is still stuck at the long obsolete 0.7.4 in Debian stable. Fedora, of course, is 100% up to date with 1.0.5 in F27 and 0.9.6 in F26 and F25. Even RHEL 7.4 has 0.8.2 and RHEL 7.5 will be 1.0.1.

Detecting the fwupd version also gets slightly more complicated, as the user agent only gives us the ‘client version’ rather than the ‘fwupd version’ in most software. This means we have to use the minimum fwupd version required by the client when choosing if it is safe to provide the file. GNOME Software version 3.26.0 was the first version to depend on fwupd ≥ 0.8.0 and so anything newer than that would be safe. This gives a slight problem, as Ubuntu will be shipping an old gnome-software 3.20.x and a new-enough fwupd 0.8.x and so will be blacklisted for any firmware that requires a specific fwupd version. Which includes the Logitech security update…

The user agent we get from gnome-software is gnome-software/3.20.1 and so we can’t do anything very clever. I’m obviously erring on not bricking a tiny amount of laptop hardware rather than making a lot of Logitech hardware secure on Ubuntu 16.04, given the next LTS 18.04 is out on April 26th anyway. This means people might start getting a detected fwupd version too old message on the console if they try updating using 16.04.

A workaround for xenial users might be if someone at Canonical could include this patch that changes the user agent in gnome-software package to be gnome-software/3.20.1 fwupd/0.8.3 and I can add a workaround in the LVFS download code to parse that. Comments welcome.

Fedora Atomic Workstation: Building flatpaks

Posted by Matthias Clasen on February 14, 2018 11:10 PM

In order to use my new Atomic Workstation for real, I need to be able to build things locally, including creating flatpaks.


One of the best tools for the job (building flatpaks) is GNOME Builder. I had already installed the stable build from Flathub, but Christian told me that the nightly build is way better for flatpak building, so I went to install it from here.

Getting GNOME Builder

This highlights one of the nice aspects of flatpak: it is fundamentally decentralized. While Flathub serves as a convenient one-stop-shop for many apps, it is entirely possible to have other remotes. Flathub is not privileged at all.

It is also perfectly possible to have both the stable gnome-builder from Flathub and the nightly installed at the same time.

The only limitation is that only one of them will get to be presented as ‘the’ GNOME Builder by the desktop, since they use the same app ID. You can change between the installed versions of an application using the flatpak CLI:

flatpak make-current --user org.gnome.Builder master

Building flatpaks

Now on to building flatpaks! Naturally, my testcase is GNOME Recipes. I have a git checkout of it, so I proceeded to open it in GNOME Builder, started a build … and it failed, with a somewhat cryptic error message about chdir() failing :-(

After quite a bit of head-scratching and debugging, we determined that this happens because flatpak is doing builds in a sandbox as well, and it is replacing /var with its own bind mount to do so. This creates a bit of confusion with the /home -> /var/home symlink that is part of the Atomic Workstation image. We are still trying to determine the best fix for this, you can follow along in this issue.

Since I am going to travel soon, I can’t wait for the official fix, so I came up with a workaround: remove the /home -> /var/home symlink, create a regular /home directory in its place, and change /etc/fstab to mount my home partition there instead of /var/home. One reason why this is ugly is that I am modifying the supposedly immutable OS image. How? By removing the immutable attribute with chattr -i /. Another reason why it is ugly is that this has to be repeated every time a new image gets installed (regardless of whether it is via an update or via package layering).
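
Spelled out, the workaround amounts to something like this (a sketch of the steps described above — it modifies the deployment root and has to be redone after every new image):

# make the supposedly immutable root mutable (the ugly part)
chattr -i /
# replace the /home -> /var/home symlink with a real directory
rm /home
mkdir /home
# finally, edit /etc/fstab to mount the home partition on /home instead of /var/home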

But, with this workaround in place, there is no longer a troublesome symlink to cause trouble for flatpak, and my build succeeds. Once it is built, I can run the recipes flatpak with one click on the play button in builder.

Neat! I am almost ready to take Atomic Workstation on the road.

fwupd now tells you about known issues

Posted by Richard Hughes on February 13, 2018 02:57 PM

After a week of being sick and not doing much, I’m showing the results of a day-or-so of hacking:

So, most of that will be familiar to anyone who’s followed my previous blog posts. But wait, what’s that about a known issue?

That one little URL for the user to click on is the result of a rule engine being added to the LVFS. Of course, firmware updates shouldn’t ever fail, but in the real world they do, because distros don’t create /boot/efi correctly (cough, Arch Linux) or just because some people are running old versions of efivar, a broken git snapshot of libfwupdate or because a vendor firmware updater doesn’t work with secure boot turned on (urgh). Of all the failures logged on the LVFS, 95% fall into about 3 or 4 different failure causes, and if we know hundreds of people are hitting an issue we already understand we can provide them with some help.

So, how does this work? If you’re a user you don’t see any of this, you just download the metadata and firmware semi-automatically and get on with your life. If you’re a blessed hardware vendor on the LVFS (i.e. you can QA the updates into the stable branch) you can also create and view the rules for firmware owned by just your vendor group:

This new functionality will be deployed to the LVFS during the next downtime window. Comments welcome.

Fedora Atomic Workstation: What about apps ?

Posted by Matthias Clasen on February 12, 2018 12:10 AM

I recently switched my main system to Fedora Atomic Workstation, and described my initial experience here. But I am going to travel soon, so I won’t have much time to fiddle with my laptop, and need to get things into working order.

Connections

One thing I needed to investigate is getting my VPN connections over from the old system. After a bit of consideration, I decided that it was easiest to just copy the relevant files from the old installation – /etc is not part of the immutable OS image, so this works just fine. I booted back into the old system and:

cp /etc/NetworkManager/system-connections/* /ostree/boot-1/.../etc/NetworkManager/system-connections

Note that you may also have to copy certificates over in a similar way.


But the bigger task for getting this system into working order is, of course, getting the applications back.

I could of course just use rpm-ostree’s layering and install fedora rpms for many apps. But, that would mean sliding back into the old world where applications are part of the OS, and dependencies force the OS and the apps to be updated together, etc. Since I want to explore the proper workflows and advantages of the Atomic model, I’ll instead try to install the apps separately from the OS, as flatpaks.

Currently, the best way to get flatpaks is to use Flathub, so I went back there to see if I could find all I need. Flathub has a bit more than 200 applications currently. That may not seem like much compared to the Android Play Store, but let’s see what’s actually there:

With Telegram, Spotify, Gimp, LibreOffice and some others, I find most of what I frequently need. And Skype, Slack, Inkscape, Blender and others are there too. Not bad.


But what about web browsers? Firefox is included in the Atomic Workstation image. To make it play media, you have to do the same things as on the traditional workstation – find an ffmpeg package and use rpm layering to make it part of the image.

Chrome is unfortunately hard to package as a flatpak, since its own sandboxing technology conflicts with the sandboxing that is applied by flatpak. There is of course a Chrome rpm, but it installs into /opt, which is not currently supported by rpm-ostree. See this issue for more details and possible solutions.

Beyond the major browsers, there are some other choices available on Flathub, such as GNOME Web or Eolie. These browsers use gstreamer for multimedia support, so they will pick up codecs that are available on the host system via the gstreamer runtime extension.

Next steps

The trip I’m going on is for a hackfest that will focus on an application (GNOME Recipes, in fact), so I will need a well-working setup for building flatpaks locally.

I’ll try out how well GNOME Builder handles this task on an Atomic System.

Razer doesn’t care about Linux

Posted by Richard Hughes on February 11, 2018 10:44 PM

tl;dr: Don’t buy hardware from Razer and expect firmware updates to fix security problems on Linux.

Razer is a vendor that makes high-end gaming hardware, including laptops, keyboards and mice. I opened a ticket with Razer a few days ago asking them if they wanted to support the LVFS project by uploading firmware and sharing the firmware update protocol used. I offered to upstream any example code they could share under a free license, or to write the code from scratch given enough specifications to do so. This is something I’ve done for other vendors, and doesn’t take long as most vendor firmware updaters all do the same kind of thing; there are only so many ways to send a few kb of data to USB devices. The fwupd project provides high-level code for accessing USB devices, so yet-another-update-protocol is no big deal. I explained all about the LVFS, and the benefits it provided to a userbase that is normally happy to vote using their wallet to get hardware that’s supported on the OS of their choice.

I just received this note on the ticket, which was escalated appropriately:

I have discussed your offer with the dedicated team and we are thankful for your enthusiasm and for your good idea.
I am afraid I have also to let you know that at this moment in time our support for software is only focused on Windows and Mac.

The CEO of Razer Min-Liang Tan said recently “We’re inviting all Linux enthusiasts to weigh in at the new Linux Corner on Insider to post feedback, suggestions and ideas on how we can make it the best notebook in the world that supports Linux.” If this is true, and more than just a sound-bite, supporting the LVFS for firmware updates on the Razer Blade to solve security problems like Meltdown and Spectre ought to be a priority?

Certainly if peripheral updates or system firmware UpdateCapsule are not supportable on Linux, it would be good to correct well-read articles, as those make it sound like Razer is interested in Linux users, when the reality seems somewhat less optimistic. I’ve updated the vendor list with this information to avoid other people asking or filing tickets. Disappointing, but I’ll hopefully have some happier news soon about a different vendor.

First steps with Fedora Atomic Workstation

Posted by Matthias Clasen on February 09, 2018 01:30 AM

There’s been a lot of attention for the Fedora Atomic Workstation project recently, with several presentations at devconf (Kalev Lember, Colin Walters, Jonathan Lebon) and fosdem (Sanja Bonic), blog posts and other docs.

I’ve played with the Atomic Workstation before, but it was always in a VM. That is a low-risk way to try it out, but the downside is that you can jump back to your ‘normal’ system at the first problem… which, naturally, I did. The recent attention inspired me to try again.

This time, I wanted to try it for real and get some actual work done on the Atomic side. So this morning, I set out to convert my main system to Atomic Workstation. The goal I’ve set myself for today was to create a gnome-font-viewer release tarball using a container-based workflow.

There are two ways to install Atomic Workstation. You can either download an .iso and install from scratch, or you can convert an existing system. I chose the second option, following these instructions.  By and large, the instructions were accurate and led me to a successful installation. A few notes:

  • You need ~10G of free space on your root filesystem
  • I got server connection errors several times – just restarting the ostree pull command will eventually let it complete
  • The instructions recommend copying grub.cfg from /boot/loader to /boot/grub2/, but that only works for the current tree – if you install updates or add a layer to your ostree image, you have to repeat it. An easier solution is to create a symlink instead (see the sketch after this list).
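
That symlink trick might look like this (a sketch — double-check that the relative path matches your layout before relying on it at boot):

ln -s ../loader/grub.cfg /boot/grub2/grub.cfg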

After a moment of fear, I decided to reboot, and found myself inside the Atomic Workstation – it just worked. After opening a terminal and finding my git checkouts, I felt a little helpless – none of git, gitg, gcc (or many other of the developer tools I’m used to) are around. What now ?

Thankfully, firefox was available, so I went to http://flathub.org and installed gitg as a flatpak, with a single click.

For the other developer tools, remember that my goal was to use a container-based workflow, so my next step was to install buildah, which is a tool to work with containers without the need for docker. Installing the buildah rpm on Atomic Workstation feels a bit like a magic trick – after all, isn’t this an immutable image-based OS?

What happens when you run

rpm-ostree install buildah

is that rpm-ostree is composing a new image by layering the rpm on top of the existing image. As expected, I had to reboot into the new image to see the newly installed tool.

Next, I tried to figure out some of the basics of working with buildah – here is a brief introduction to buildah that I found helpful. After creating and starting a Fedora-based container with

buildah from fedora
buildah run fedora-working-container bash

I could use dnf to install git, gcc and a few other things in the container. So far, so good.  But in order to make a gnome-font-viewer release, there is still one thing missing: I need access to my git checkout inside the container.  After some browsing around, I came up with this command:

buildah run -v /srv:/srv:rslave fedora-working-container bash

which should make /srv from the host system appear inside the container. And… I was stuck – trying to enumerate the contents of /srv in the container was giving me permission errors, despite running as root.

Eventually, it dawned on me that SELinux is to blame… The command

sudo chcon -R -h -t container_file_t /srv

is needed to make things work as expected. Alternatively, you could set SELinux to be permissive.
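
For the permissive route, a temporary (until reboot) and less targeted alternative:

sudo setenforce 0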

From here on, things were pretty straightforward. I additionally needed to make my ssh keys available so I could push my commits from inside the container, and I needed a series of dnf commands to make enough build dependencies and tools available:

dnf install git
dnf install meson
dnf install gtk3-devel

But eventually,

meson . build
ninja -Cbuild
ninja -Cbuild dist

worked and produced this tarball – success!

So, when you try gnome-font-viewer 3.27.90 remember: it was produced in a container.

The first steps are always the hardest. I expect things to get easier as I learn more about this way of working.



Posted by Benjamin Otte on February 03, 2018 12:53 PM

An idiom that has shown up in GTK4 development is the idea of immutable objects and builders. The idea behind an immutable object is that you can be sure that it doesn’t change under you: you don’t need to track changes, you can expose it in your API without having to fear that users of the API are gonna change that object under you, you can use it as a key when caching, and last but not least you can pass it into multiple threads without requiring synchronization.
Examples of immutable objects in GTK4 are GdkCursor, GdkTexture, GdkContentFormats or GskRenderNode. An example outside of GTK would be GBytes.

Sometimes these objects are easy to create using a series of constructors, but oftentimes these objects are more complex. And because those objects are immutable, we can’t just provide setters for the various properties. Instead, we provide builder objects. They are short-lived objects whose only purpose is to manage the construction of an immutable object.
Here’s an example of how that would look:

Sandwich *
make_me_a_sandwich (void)
{
  SandwichBuilder *builder;

  /* create builder */
  builder = sandwich_builder_new ();
  /* setup the object to create */
  sandwich_builder_set_bread (builder, white_bread);
  sandwich_builder_add_ingredient (builder, cheese);
  sandwich_builder_add_ingredient (builder, ham);
  sandwich_builder_set_toasted (builder, TRUE);
  /* free the builder, create the object and return it */
  return sandwich_builder_free_to_sandwich (builder);
}

This approach works well in C, but does not work when trying to make the builder accessible to bindings. Bindings have no explicit memory management, so they want to write code like this:

def make_me_a_sandwich():
    # create builder
    builder = SandwichBuilder ()
    # setup the object to create
    builder.set_bread (white_bread)
    builder.add_ingredient (cheese)
    builder.add_ingredient (ham)
    builder.set_toasted (True)
    # create the object and return it
    return builder.to_sandwich ()

We spent the hackfest arguing about how to create a C API that works for both of those use cases, and the consensus so far has been to turn builders into refcounted boxed types that provide both of the above APIs – but advertise the C-specific parts only to the C APIs and the binding-specific parts only to bindings. So the C header for the above examples would look like this:

SandwichBuilder *sandwich_builder_new (void);
/* (skip) in bindings */
Sandwich *sandwich_builder_free_to_sandwich (SandwichBuilder *builder) G_GNUC_WARN_UNUSED_RESULT;

/* to be used by bindings only */
GType sandwich_builder_get_type (void) G_GNUC_CONST;
SandwichBuilder *sandwich_builder_ref (SandwichBuilder *builder);
void sandwich_builder_unref (SandwichBuilder *builder);
Sandwich *sandwich_builder_to_sandwich (SandwichBuilder *builder);

/* shared API */
void sandwich_builder_set_bread (SandwichBuilder *builder, Bread *bread);
void sandwich_builder_add_ingredient (SandwichBuilder *builder, Ingredient *ingredient);
void sandwich_builder_set_toasted (SandwichBuilder *builder, gboolean should_toast);

/* and in the .c file: */
G_DEFINE_BOXED_TYPE (SandwichBuilder, sandwich_builder, sandwich_builder_ref, sandwich_builder_unref)

And now I’m off to review all our builder APIs so they conform to this idea.

GTK+ hackfest, day 2

Posted by Matthias Clasen on February 03, 2018 08:28 AM

The second day of the GTK+ hackfest in Brussels started with an hour of patch review. We then went through scattered items from the agenda and collected answers to some questions.

We were lucky to have some dear friends join us for part of the day.  Allison came by for an extended GLib bug review session with Philip, and Adrien discussed strategies for dealing with small and changing form factors with us.  Allison, Adrien: thanks for coming!

The bulk of the day was taken up by a line-by-line review of our GTK+ 4 task list. Even though some new items appeared, it got a bit shorter, and many of the outstanding ones are much clearer now.

Our (jumbled and unedited) notes from the day are here.

The day ended with a nice dinner that was graciously sponsored by Purism. Thanks!

Decisions, decisions

We discussed too many things during these two days for a concise summary of every result, but here are some of the highlights:

  • gitlab migration: We want to migrate the GTK+ git repository as soon as possible. The bug migration needs preparation and will follow later
  • GTK+ 4 roadmap: We are aiming for an initial release in the fall of this year. We’ll reevaluate this target date at GUADEC

GTK+ hackfest, day 1

Posted by Matthias Clasen on February 01, 2018 10:21 PM

A number of GTK+ developers met today in Brussels for a 2-day hackfest ahead of FOSDEM. Sadly, a few friends who we’d have loved to see couldn’t make it, but we still have enough of the core team together for a productive meeting.

We decided that it would be a good idea to start the day with ‘short overviews of rewritten subsystems’, to get everybody on the same page. The quick overviews turned out to take most of the day, but it was intermixed with a lot of very productive discussions and decisions.

My (jumbled and unedited) notes from the day are here.

Benjamin explaining the GTK+ clipboard on an actual clipboard

At the end of the day, we’ve found a nice Vietnamese restaurant around the corner from the venue, and Shaun came by for food and beer.

I hope that day 2 of this short hackfest will be similarly productive!

Thanks to GNOME for sponsoring this event.




Firmware Telemetry for Vendors

Posted by Richard Hughes on February 01, 2018 12:10 PM

We’ve shipped nearly 1.2 MILLION firmware updates out to Linux users since we started the LVFS project.

I found out this nugget of information using a new LVFS vendor feature, soon to be deployed: Telemetry. This builds on the previously discussed success/failure reporting and adds a single page for the vendor to get statistics about each bit of hardware. Until more people are running the latest fwupd and volunteering to share their update history it’s less useful, but it’s still interesting.

No new batches of ColorHug2

Posted by Richard Hughes on February 01, 2018 09:48 AM

I was informed by AMS (the manufacturer that makes the XYZ sensor that’s the core of the CH2 device) that the AS73210 (aka MTCSiCF) and the MTI08D are end of life products. The replacement for the sensor the vendor offers is the AS73211, which of course is more expensive and electrically incompatible with the AS73210.

The somewhat-related new AS7261 sensor does look interesting, as it somewhat crosses the void between a colorimeter and something that can take non-emissive readings, but it’s a completely different sensor to the one on the ColorHug2, and mechanically different to the now-abandoned ColorHug+. I’m also feeling twice burned buying specialist components from single-source suppliers.

Being parents to a 16-week-old baby doesn’t put Ania and me in a position where I can go through the various phases of testing, prototypes, test batch, production batch etc for a device refresh like we did with the ColorHug -> ColorHug2. I’m hoping I can get a chance to play with some more kinds of sensors from different vendors, although that’s not going to happen before I start getting my free time back. At the moment I have about 50 fully completed ColorHug2 devices in boxes ready to be sold.

In the true spirit of OpenHardware and free enterprise, if anyone does want to help with the design of a new ColorHug device I’m open to ideas. ColorHug was really just a hobby that got out of control, and I’d love for someone else to have the thrill and excitement of building a nano-company from scratch. Taking me out of the equation completely, I’d be equally happy referring people who want to buy a ColorHug upgrade or replacement to a different project, if the new product met with my approval :)

So, 50 ColorHugs should last about 3 months before stock runs out, but I know a few people are using devices on production lines and other sorts of industrial control — if that sounds familiar, and you’d like to buy a spare device, now is the time to do so. Of course, I’ll continue supporting all the existing 3162 devices well into the future. I hope to be back building OpenHardware soon, and hopefully with a new and improved ColorHug3.

tuhi - a daemon to support Wacom SmartPad devices

Posted by Peter Hutterer on January 30, 2018 08:59 AM

For the last few weeks, Benjamin Tissoires and I have been working on a new project: Tuhi [1], a daemon to connect to and download data from Wacom SmartPad devices like the Bamboo Spark, Bamboo Slate and, eventually, the Bamboo Folio and the Intuos Pro Paper devices. These devices are not traditional graphics tablets plugged into a computer but rather smart notepads where the user's offline drawing is saved as stroke data in vector format and later synchronised with the host computer over Bluetooth. There it can be converted to SVG, integrated into the applications, etc. Wacom's application for this is Inkspace.

There is no official Linux support for these devices. Benjamin and I started looking at the protocol dumps last year and, luckily, they're not completely indecipherable and reverse-engineering them was relatively straightforward. Now it is a few weeks later and we have something that is usable (if a bit rough) and provides the foundation for supporting these devices properly on the Linux desktop. The repository is available on github at https://github.com/tuhiproject/tuhi/.

The main core is a DBus session daemon written in Python. That daemon connects to the devices and exposes them over a custom DBus API. That API is relatively simple, it supports the methods to search for devices, pair devices, listen for data from devices and finally to fetch the data. It has some basic extras built in like temporary storage of the drawing data so they survive daemon restarts. But otherwise it's a three-way mapper from the Bluez device, the serial controller we talk to on the device and the Tuhi DBus API presented to the clients. One such client is the little commandline tool that comes with tuhi: tuhi-kete [2]. Here's a short example:

$> ./tools/tuhi-kete.py
Tuhi shell control
tuhi> search on
INFO: Pairable device: E2:43:03:67:0E:01 - Bamboo Spark
tuhi> pair E2:43:03:67:0E:01
INFO: E2:43:03:67:0E:01 - Bamboo Spark: Press button on device now
INFO: E2:43:03:67:0E:01 - Bamboo Spark: Pairing successful
tuhi> listen E2:43:03:67:0E:01
INFO: E2:43:03:67:0E:01 - Bamboo Spark: drawings available: 1516853586, 1516859506, [...]
tuhi> list
E2:43:03:67:0E:01 - Bamboo Spark
tuhi> info E2:43:03:67:0E:01
E2:43:03:67:0E:01 - Bamboo Spark
Available drawings:
* 1516853586: drawn on the 2018-01-25 at 14:13
* 1516859506: drawn on the 2018-01-25 at 15:51
* 1516860008: drawn on the 2018-01-25 at 16:00
* 1517189792: drawn on the 2018-01-29 at 11:36
tuhi> fetch E2:43:03:67:0E:01 1516853586
INFO: Bamboo Spark: saved file "Bamboo Spark-2018-01-25-14-13.svg"
I won't go into the details because most should be obvious and this is purely a debugging client, not a client we expect real users to use. Plus, everything is still changing quite quickly at this point.
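
If you'd rather poke at the daemon directly, it's all plain DBus. Here's a minimal sketch using Gio; the bus name, object path, interface and method below are assumptions for illustration, so check the repository for the real API:

from gi.repository import Gio

# connect to the (assumed) Tuhi manager object on the session bus
proxy = Gio.DBusProxy.new_for_bus_sync(
    Gio.BusType.SESSION,
    Gio.DBusProxyFlags.NONE,
    None,
    'org.freedesktop.tuhi1',          # assumed well-known bus name
    '/org/freedesktop/tuhi1',         # assumed manager object path
    'org.freedesktop.tuhi1.Manager',  # assumed interface name
    None)

# start searching for pairable devices (assumed method name)
proxy.call_sync('StartSearch', None, Gio.DBusCallFlags.NONE, -1, None)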

The next step is to get a proper GUI application working. As usual with any GUI-related matter, we'd really appreciate some help :)

The project is young and relying on reverse-engineered protocols means there are still a few rough edges. Right now, the Bamboo Spark and Slate are supported because we have access to those. The Folio should work; it looks like it's a re-packaged Slate. Intuos Pro Paper support is still pending; we don't have access to a device at this point. If you're interested in testing or helping out, come on over to the github site and get started!

[1] tuhi: Maori for "writing, script"
[2] kete: Maori for "kit"

GCab and CVE-2018-5345

Posted by Richard Hughes on January 23, 2018 01:22 PM

tl;dr: Update GCab from your distributor.

Longer version: Just before Christmas I found a likely exploitable bug in the libgcab library. Various security teams have been busy with slightly more important issues, and so it’s taken a lot longer than usual for the bug to be verified and assigned a CVE. The issue I found was that libgcab attempted to read a large chunk into a small buffer, overwriting lots of interesting things past the end of the buffer. ASLR and SELinux save us in nearly all cases, so it’s not the end of the world. Almost a textbook C buffer overflow (Rust, yada, whatever), so it was easy to fix.

Some key points:

  • This only affects libgcab, not cabarchive or libarchive
  • All gcab versions less than 0.8 are affected
  • Anything that links to gcab is affected, so gnome-software, appstream-glib and fwupd at least
  • Once you install the fixed gcab you need to restart anything that’s using it, e.g. fwupd
  • There is no silly branded name for this bug
  • The GCab project is incredibly well written, and I’ve been hugely impressed with the code quality
  • You can test if your GCab has been fixed by attempting to decompress this file: if the program crashes, you need to update (see the sketch after this list)
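
If you want to script that check, here’s a rough sketch using the GObject introspection bindings; the namespace and method names are from gcab 1.0 as I remember them, so treat them as assumptions and adjust for your version:

import sys
import gi
gi.require_version('GCab', '1.0')
from gi.repository import GCab, Gio

# load and extract a cabinet; with a vulnerable libgcab a malicious
# archive may crash this process rather than failing cleanly
cabinet = GCab.Cabinet.new()
cabinet.load(Gio.File.new_for_path(sys.argv[1]).read(None), None)
cabinet.extract_simple(Gio.File.new_for_path('/tmp/gcab-test'), None, None)
print('decompressed without crashing')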

With Marc-André’s blessing, I’ve released version 0.8 of gcab with this fix. I’ve also released v1.0, which has this fix (and many more nice API additions) and also switches the build system to Meson and cleans up a lot of leaks using g_autoptr(). If you’re choosing a version to update to, the answer is probably 1.0 unless you’re building for something more sedate like RHEL 5 or 6. You can get the Fedora 27 packages here or they’ll be on the mirrors tomorrow.

Privacy expectations and the connected home

Posted by Matthew Garrett on January 17, 2018 09:45 PM
Traditionally, devices that were tied to logins tended to indicate that in some way - turn on someone's Xbox and it'll show you their account name, run Netflix and it'll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it's a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that's possible. A visitor who uses an Amazon Echo is not necessarily going to know that it's tied to somebody's Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner's phone that they need to buy new eggs.

Things get even more complicated when there's multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains as the fallback, with unrecognised voices ending up logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.

There's some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they'll agree to the manufacturer's privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they're producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it's associated with? How do you explain to a teenager that there's a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent's activity log? Who's going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?

To be clear, I'm not arguing against the design choices involved in the implementation of these devices. In many cases it's hard to see how the desired functionality could be implemented without this sort of issue arising. But we're gradually shifting to a place where the data we generate is not only available to corporations who probably don't care about us as individuals, it's also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there'll be a record of every light that they turn on or off. This feels like it's going to end badly.

(Thanks to Nikki Everett for conversations that inspired this post)

(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post and my opinions are my own rather than those of my employer)


Phoning home after updating firmware?

Posted by Richard Hughes on January 10, 2018 03:42 PM

Somebody made a proposal on the fwupd mailing list that the machine running fwupd should “phone home” to the LVFS with success or failure after the firmware update has been attempted.

This would let the hardware vendor that uploaded the firmware know there are problems straight away, rather than waiting for thousands of frustrated users to file bugs. The report needs to contain something that identifies the machine and a boolean for success, and in the event of an error, enough debug information to actually be useful. It would obviously involve sending the user’s IP address to the server too.

I ran a poll on my Google+ page, and this was the result:

So, a significant minority of people felt like it stepped over the line of privacy vs. pragmatism. This told me I couldn’t just forge onward with automated collection, and this blog entry outlines what we’ve done for the 1.0.4 release. I hope this proposal is acceptable to even the most paranoid of users.

The fwupd daemon now stores the result of each attempted update in a local SQLite database. If a firmware update has been attempted, we now ask the user if they would like to upload this information to the LVFS. Using GNOME this would just be a slider in the control center Privacy panel, and I’ll leave it to the distros to decide if this slider should be on or off by default. If you’re using the fwupdmgr tool this is what it shows:

$ fwupdmgr report-history
Target:                  https://the-lvfs-server/lvfs/firmware/report
Payload:                 {
                           "ReportVersion" : 1,
                           "MachineId" : "9c43dd393922b7edc16cb4d9a36ac01e66abc532db4a4c081f911f43faa89337",
                           "DistroId" : "fedora",
                           "DistroVersion" : "27",
                           "DistroVariant" : "workstation",
                           "Reports" : [
                               "DeviceId" : "da145204b296610b0239a4a365f7f96a9423d513",
                               "Checksum" : "d0d33e760ab6eeed6f11b9f9bd7e83820b29e970",
                               "UpdateState" : 2,
                               "Guid" : "77d843f7-682c-57e8-8e29-584f5b4f52a1",
                               "FwupdVersion" : "1.0.4",
                               "Plugin" : "unifying",
                               "Version" : "RQR12.05_B0028",
                               "VersionNew" : "RQR12.07_B0029",
                               "Flags" : 674,
                               "Created" : 1515507267,
                               "Modified" : 1515507956
Proceed with upload? [Y|n]: 

Using this new information that the user volunteers, we can display a new line in the LVFS web-console for each firmware file, which expands out to a full report of the attempted update.

This means vendors using the LVFS know first of all how many downloads they have, and also the number of successes and failures. This allows us to offer the same kind of staged deployment that Microsoft Update does, where you can limit the number of updated machines to 10,000/day or automatically pause the specific firmware deployment if > 1% of the reports come back with failures.

Some key points:

  • We don’t share the IP address with the vendor, in fact it’s not even saved in the MySQL database
  • The MachineId is a salted hash of your actual /etc/machine-id (see the sketch after this list)
  • The LVFS doesn’t store reports for firmware that it did not sign itself, i.e. locally built firmware archives will be ignored and not logged
  • You can disable the reporting functionality in all applications by editing /etc/fwupd/remotes.d/*.conf
  • We have an official GDPR document too — we’ll probably link to that from the Privacy panel in GNOME
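
As a rough illustration of how a salted MachineId can be derived (the actual salt and construction fwupd uses may differ, and the salt below is hypothetical), the point being that the raw machine-id never leaves your machine:

import hashlib

SALT = 'fwupd'  # hypothetical salt, for illustration only

with open('/etc/machine-id') as f:
    machine_id = f.read().strip()

# only this one-way salted hash would ever be uploaded
print(hashlib.sha256((SALT + machine_id).encode()).hexdigest())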

Comments welcome.

More fun with fonts

Posted by Matthias Clasen on January 04, 2018 01:04 AM

Just before Christmas, I spent some time in New York to continue font work with Behdad that we had begun earlier this year.

As you may remember from my last post on fonts, our goal was to support OpenType font variations. The Linux text rendering stack has multiple components: freetype, fontconfig, harfbuzz, cairo, pango. Achieving our goal required a number of features and fixes in all these components.

Getting all the required changes in place is a bit time-consuming, but the results are finally starting to come together. If you use the master branches of freetype, fontconfig, harfbuzz, cairo, pango and GTK+, you can try this out today.


But beyond variations, we want to improve font support in general. To start off, we fixed a few bugs in the color Emoji support in cairo and GTK+.


Next came small improvements to the font chooser, such as a cleaner look for the font list, type-to-search and maintaining the sensitivity of the select button:

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1920-1" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-035205-PM.webm?_=1" type="video/webm">https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-035205-PM.webm</video>


I also spent some time on OpenType features, and on making them accessible to users. When I first added feature support to Pango, I wrote a GTK+ demo that shows them in action, but without a ready-made GTK+ dialog, basically no applications have picked this up.

Time to change this! After some experimentation, I came up with what I think is an acceptable UI for customizing features of a font:

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1920-2" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-063819-PM-3.webm?_=2" type="video/webm">https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-063819-PM-3.webm</video>

It is still somewhat limited since we only show features that are supported by the selected font and make sense for entire documents or paragraphs of text.  Many OpenType features can really only be selected for smaller ranges of text, such as fractions or subscripts. Support for those may come at a later time.

Part of the necessary plumbing for making this work nicely was to implement the font-feature-settings CSS property, which brings GTK+ closer to full support for level 3 of the CSS font module. For theme authors, this means that all OpenType font features are accessible from CSS.
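
As a sketch of what this enables, the snippet below turns on small caps and old-style figures for all labels from application code. It assumes a GTK+ build that already has the new property (i.e. master at the time of writing), and the feature tags are just an example:

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk

css = b'label { font-feature-settings: "smcp" 1, "onum" 1; }'

# install the stylesheet for the whole screen at application priority
provider = Gtk.CssProvider()
provider.load_from_data(css)
Gtk.StyleContext.add_provider_for_screen(
    Gdk.Screen.get_default(), provider,
    Gtk.STYLE_PROVIDER_PRIORITY_APPLICATION)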

One thing to point out here is that font feature settings are not part of the PangoFont object, but get specified via attributes (or markup, if you like). For the font chooser, this means that we’ve had to add new API to return the selected features: gtk_font_chooser_get_font_features(). Applications need to apply the returned features to their text by wrapping them in a PangoAttribute.
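
In Python, the application side could look roughly like this; the binding name for the new getter is assumed to follow the usual conventions:

from gi.repository import Pango

def apply_chooser_features(chooser, label):
    # the new font chooser API mentioned above (assumed binding name);
    # returns a features string such as '"smcp" 1, "onum" 1'
    features = chooser.get_font_features()
    attrs = Pango.AttrList()
    attrs.insert(Pango.attr_font_features_new(features))
    label.set_attributes(attrs)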


Once we had this ‘tweak page’ added to the font chooser, it was the natural place to expose variations as well, so this is what we did next. Remember that variations define a number of ‘axes’ for the font, along which the characteristics of the font can be continuously changed. In UI terms, this means that we add sliders similar to the one we already have for the font size:

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1920-3" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-065307-PM.webm?_=3" type="video/webm">https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-065307-PM.webm</video>

Again, fully supporting variations meant implementing the corresponding font-variation-settings CSS property (yes, there is a level 4 of the CSS fonts module). This will enable some fun experiments, such as animating font changes:

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1920-4" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-070645-PM.webm?_=4" type="video/webm">https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-070645-PM.webm</video>

All of this work would be hard to do without some debugging and exploration tools. gtk-demo already contained the Font Features example. During the week in New York, I made it handle variations as well, and polished it in various ways.

To reflect that it is no longer just about font features, it is now called Font Explorer. One fun thing I added is a combined weight-width plane, so you can now explore your fonts in 2 dimensions:

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1920-5" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-074638-PM.webm?_=5" type="video/webm">https://blogs.gnome.org/mclasen/files/2018/01/Screencast-from-01-03-2018-074638-PM.webm</video>

What’s next

As always, there is more work to do. Here is an unsorted list of ideas for next steps:

  • Backport the font chooser improvements to GTK+ 3. Some new API is involved, so we’ll have to see about it.
  • Add pango support for variable families. The current font chooser code uses freetype and harfbuzz APIs to find out about OpenType features and variations. It would be nice to have some API in pango for this.
  • Improve font filtering. It would be nice to support filtering by language or script in the font chooser. I have code for this, but it needs some more pango API to perform acceptably.
  • Better visualization for features. It would be nice to highlight the parts of a string that are affected by certain features. harfbuzz does not currently provide this information though.
  • More elaborate feature support. For example, it would be nice to have a way to enable character-level features such as fractions or superscripts.
  • Support for glyph selection. Several OpenType features provide (possibly multiple) alternative glyphs,  with the expectation that the user will be presented with a choice. harfbuzz does not have convenient API for implementing this.
  • Add useful font metadata to fontconfig, such as ‘Is this a serif, sans-serif or handwriting font?’ and use it to offer better filtering
  • Implement @font-face rules in CSS and use them to make customized fonts first-class objects.

Help with any of this is more than welcome!

When should behaviour outside a community have consequences inside it?

Posted by Matthew Garrett on December 21, 2017 10:09 AM
Free software communities don't exist in a vacuum. They're made up of people who are also members of other communities, people who have other interests and engage in other activities. Sometimes these people engage in behaviour outside the community that may be perceived as negatively impacting communities that they're a part of, but most communities have no guidelines for determining whether behaviour outside the community should have any consequences within the community. This post isn't an attempt to provide those guidelines, but aims to provide some things that community leaders should think about when the issue is raised.

Some things to consider

Did the behaviour violate the law?

This seems like an obvious bar, but it turns out to be a pretty bad one. For a start, many things that are commonly accepted behaviour in various communities may be illegal (eg, reverse engineering work may contravene a strict reading of US copyright law), and taking this to an extreme would result in expelling anyone who's ever broken a speed limit. On the flipside, refusing to act unless someone broke the law is also a bad threshold - much behaviour that communities consider unacceptable may be entirely legal.

There's also the problem of determining whether a law was actually broken. The criminal justice system is (correctly) biased to an extent in favour of the defendant - removing someone's rights in society should require meeting a high burden of proof. However, this is not the threshold that most communities hold themselves to in determining whether to continue permitting an individual to associate with them. An incident that does not result in a finding of criminal guilt (either through an explicit finding or a failure to prosecute the case in the first place) should not be ignored by communities for that reason.

Did the behaviour violate your community norms?

There's plenty of behaviour that may be acceptable within other segments of society but unacceptable within your community (eg, lobbying for the use of proprietary software is considered entirely reasonable in most places, but rather less so at an FSF event). If someone can be trusted to segregate their behaviour appropriately then this may not be a problem, but that's probably not sufficient in all cases. For instance, if someone acts entirely reasonably within your community but engages in lengthy anti-semitic screeds on 4chan, it's legitimate to question whether permitting them to continue being part of your community serves your community's best interests.

Did the behaviour violate the norms of the community in which it occurred?

Of course, the converse is also true - there's behaviour that may be acceptable within your community but unacceptable in another community. It's easy to write off someone acting in a way that contravenes the standards of another community but wouldn't violate your expected behavioural standards - after all, if it wouldn't breach your standards, what grounds do you have for taking action?

But you need to consider that if someone consciously contravenes the behavioural standards of a community they've chosen to participate in, they may be willing to do the same in your community. If pushing boundaries is a frequent trait then it may not be too long until you discover that they're also pushing your boundaries.

Why do you care?

A community's code of conduct can be looked at in two ways - as a list of behaviours that will be punished if they occur, or as a list of behaviours that are unlikely to occur within that community. The former is probably the primary consideration when a community adopts a CoC, but the latter is how many people considering joining a community will think about it.

If your community includes individuals that are known to have engaged in behaviour that would violate your community standards, potential members or contributors may not trust that your CoC will function as adequate protection. A community that contains people known to have engaged in sexual harassment in other settings is unlikely to be seen as hugely welcoming, even if they haven't (as far as you know!) done so within your community. The way your members behave outside your community is going to be seen as saying something about your community, and that needs to be taken into account.

A second (and perhaps less obvious) aspect is that membership of some higher profile communities may be seen as lending general legitimacy to someone, and they may play off that to legitimise behaviour or views that would be seen as abhorrent by the community as a whole. If someone's anti-semitic views (for example) are seen as having more relevance because of their membership of your community, it's reasonable to think about whether keeping them in your community serves the best interests of your community.


I've said things like "considered" or "taken into account" a bunch here, and that's for a good reason - I don't know what the thresholds should be for any of these things, and there doesn't seem to be even a rough consensus in the wider community. We've seen cases in which communities have acted based on behaviour outside their community (eg, Debian removing Jacob Appelbaum after it was revealed that he'd sexually assaulted multiple people), but there's been no real effort to build a meaningful decision making framework around that.

As a result, communities struggle to make consistent decisions. It's unreasonable to expect individual communities to solve these problems on their own, but that doesn't mean we can ignore them. It's time to start coming up with a real set of best practices.


More Bluetooth (and gaming) features

Posted by Bastien Nocera on December 15, 2017 03:57 PM
In the midst of post-release bug fixing, we've also added a fair number of new features to our stack. As usual, new features span a number of different components, so integrators will have to be careful picking up all the components when, well, integrating.

PS3 clone joypad support

Do you have a PlayStation 3 joypad that feels just a little bit "off"? You can't find the Sony logo anywhere on it? The figures on the face buttons look like barbed wire? And if it were a YouTube video, it would say "No copyright intended"?

Bingo. When plugged in via USB, those devices advertise themselves as SHANWAN or Gasia, and implement the bare minimum to work when plugged into a PlayStation 3 console. But as a Linux computer behaves slightly differently from a console, we needed to fix a couple of things.

The first fix was simple, but necessary to be able to do any work: disable the rumble motor that starts as soon as you plug the pad in via USB.

Once that was done, we could work around the fact that the device isn't Bluetooth compliant, and hard-code the HID service it's supposed to offer.

Bluetooth LE Battery reporting

Bluetooth Low Energy is the new-fangled (7-year old) protocol for low throughput devices, from a single coin-cell powered sensor, to input devices. What's great is that there's finally a standardised way for devices to export their battery statuses. I've added support for this in BlueZ, which UPower then picks up for desktop integration goodness.
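
For the curious, reading the resulting battery level over BlueZ's DBus API looks roughly like the sketch below; the interface name matches what landed around BlueZ 5.48, and the device path is an example:

from gi.repository import Gio, GLib

bus = Gio.bus_get_sync(Gio.BusType.SYSTEM, None)

# read the Percentage property from the device's Battery1 interface
ret = bus.call_sync(
    'org.bluez',
    '/org/bluez/hci0/dev_00_11_22_33_44_55',  # example device path
    'org.freedesktop.DBus.Properties', 'Get',
    GLib.Variant('(ss)', ('org.bluez.Battery1', 'Percentage')),
    GLib.VariantType.new('(v)'), Gio.DBusCallFlags.NONE, -1, None)
print('battery: %d%%' % ret.unpack()[0])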

There are a number of Bluetooth LE joypads available for pickup, including a few that should be firmware upgradeable. Look for "Bluetooth 4" as well as "Bluetooth LE" when doing your holiday shopping.

gnome-bluetooth work

Finally, this is the boring part. Benjamin and I reworked code that's internal to gnome-bluetooth, as used in the Settings panel as well as the Shell, to make it use modern facilities like GDBusObjectManager. The overall effect of this is less code that is less brittle and more reactive when Bluetooth adapters come and go, such as when using airplane mode.

Apart from the kernel patch mentioned above (you'll know if you need it :), those features have been integrated in UPower 0.99.7 and in the upcoming BlueZ 5.48. And they will of course be available in Fedora, both in rawhide and as updates to Fedora 27 as soon as the releases have been done and built.


The Intel ME vulnerabilities are a big deal for some people, harmless for most

Posted by Matthew Garrett on December 14, 2017 01:31 AM
(Note: all discussion here is based on publicly disclosed information, and I am not speaking on behalf of my employers)

I wrote about the potential impact of the most recent Intel ME vulnerabilities a couple of weeks ago. The details of the vulnerability were released last week, and it's not absolutely the worst case scenario but it's still pretty bad. The short version is that one of the (signed) pieces of early bringup code for the ME reads an unsigned file from flash and parses it. Providing a malformed file could result in a buffer overflow, and a moderately complicated exploit chain could be built that allowed the ME's exploit mitigation features to be bypassed, resulting in arbitrary code execution on the ME.

Getting this file into flash in the first place is the difficult bit. The ME region shouldn't be writable at OS runtime, so the most practical way for an attacker to achieve this is to physically disassemble the machine and directly reprogram it. The AMT management interface may provide a vector for a remote attacker to achieve this - for this to be possible, AMT must be enabled and provisioned and the attacker must have valid credentials[1]. Most systems don't have provisioned AMT, so most users don't have to worry about this.

Overall, for most end users there's little to worry about here. But the story changes for corporate users or high value targets who rely on TPM-backed disk encryption. The way the TPM protects access to the disk encryption key is to insist that a series of "measurements" are correct before giving the OS access to the disk encryption key. The first of these measurements is obtained through the ME hashing the first chunk of the system firmware and passing that to the TPM, with the firmware then hashing each component in turn and storing those in the TPM as well. If someone compromises a later point of the chain then the previous step will generate a different measurement, preventing the TPM from releasing the secret.
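
As a toy illustration of why the chain works this way (real TPMs use several PCR banks and a richer event log, so this only shows the shape of the mechanism):

import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # a PCR can only be folded forward, never set directly, so changing
    # any earlier component changes every value that follows
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = b'\x00' * 32  # PCRs start zeroed at reset
for component in (b'initial-boot-block', b'main-firmware', b'bootloader'):
    pcr = pcr_extend(pcr, component)
print(pcr.hex())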

However, if the first step in the chain can be compromised, all these guarantees vanish. And since the first step in the chain relies on the ME to be running uncompromised code, this vulnerability allows that to be circumvented. The attacker's malicious code can be used to pass the "good" hash to the TPM even if the rest of the firmware has been tampered with. This allows a sufficiently skilled attacker to extract the disk encryption key and read the contents of the disk[2].

In addition, TPMs can be used to perform something called "remote attestation". This allows the TPM to provide a signed copy of the recorded measurements to a remote service, allowing that service to make a policy decision around whether or not to grant access to a resource. Enterprises using remote attestation to verify that systems are appropriately patched (eg) before they allow them access to sensitive material can no longer depend on those results being accurate.

Things are even worse for people relying on Intel's Platform Trust Technology (PTT), which is an implementation of a TPM that runs on the ME itself. Since this vulnerability allows full access to the ME, an attacker can obtain all the private key material held in the PTT implementation and, effectively, adopt the machine's cryptographic identity. This allows them to impersonate the system with arbitrary measurements whenever they want to. This basically renders PTT worthless from an enterprise perspective - unless you've maintained physical control of a machine for its entire lifetime, you have no way of knowing whether it's had its private keys extracted and so you have no way of knowing whether the attestation attempt is coming from the machine or from an attacker pretending to be that machine.

Bootguard, the component of the ME that's responsible for measuring the firmware into the TPM, is also responsible for verifying that the firmware has an appropriate cryptographic signature. Since that can be bypassed, an attacker can reflash modified firmware that can do pretty much anything. Yes, that probably means you can use this vulnerability to install Coreboot on a system locked down using Bootguard.

(An aside: The Titan security chips used in Google Cloud Platform sit between the chipset and the flash and verify the flash before permitting anything to start reading from it. If an attacker tampers with the ME firmware, Titan should detect that and prevent the system from booting. However, I'm not involved in the Titan project and don't know exactly how this works, so don't take my word for this)

Intel have published an update that fixes the vulnerability, but it's pretty pointless - there's apparently no rollback protection in the affected 11.x MEs, so while the attacker is modifying your flash to insert the payload they can just downgrade your ME firmware to a vulnerable version. Version 12 will reportedly include optional rollback protection, which is little comfort to anyone who has current hardware. Basically, anyone whose threat model depends on the low-level security of their Intel system is probably going to have to buy new hardware.

This is a big deal for enterprises and any individuals who may be targeted by skilled attackers who have physical access to their hardware, and entirely irrelevant for almost anybody else. If you don't know that you should be worried, you shouldn't be.

[1] Although admins should bear in mind that any system that hasn't been patched against CVE-2017-5689 considers an empty authentication cookie to be a valid credential

[2] TPMs are not intended to be strongly tamper resistant, so an attacker could also just remove the TPM, decap it and (with some effort) extract the key that way. This is somewhat more time consuming than just reflashing the firmware, so the ME vulnerability still amounts to a change in attack practicality.


CSR devices now supported in fwupd

Posted by Richard Hughes on December 11, 2017 12:42 PM

On Friday I added support for yet another variant of DFU. This variant is called “driverless DFU” and is used only by BlueCore chips from Cambridge Silicon Radio (now owned by Qualcomm). “Driverless” just means that it’s DFU-like and routed over HID, but it’s otherwise an unremarkable protocol. CSR is a huge ODM that makes most of the Bluetooth audio chips in vendor hardware. The hardware vendor can enable or disable features on the CSR microcontroller depending on licensing options (for instance echo cancellation), and there’s even a little virtual machine to do simple vendor-specific things. All the CSR chips are updatable in-field, and most vendors issue updates to fix sound quality issues or to add support for new protocols or devices.

The BlueCore CSR chips are used everywhere. If you have a “wireless” speaker or set of headphones that uses Bluetooth, there is a high probability it has a CSR chip inside. This makes the addition of CSR support to fwupd a big deal, as it opens up access to a lot of vendors. It’s a lot easier to say “just upload firmware” rather than “you have to write code”, so I think it’s useful to have done this work.

The vendor working with me on this feature has been the awesome AIAIAI who make some very nice modular headphones. A few minutes ago we uploaded the H05 v1.5 firmware to the LVFS testing stream and v1.6 will be coming soon with even more bug fixes. To update the AIAIAI H05 firmware you just need to connect the USB cable and press and hold the top and bottom buttons on the headband until the LED goes out. You can then update the firmware using fwupdmgr update or just using GNOME Software. The big caveat is that you have to be running fwupd >= 1.0.3 which isn’t scheduled to be released until after Christmas.

I’ve contacted some more vendors I suspect are using the CSR chips. These include:

  • Jarre Technologies
  • RIVA Audio
  • Avantree
  • Zebra
  • Fugoo
  • Bowers&Wilkins
  • Plantronics
  • BeoPlay
  • JBL

If you know of any other “wireless speaker” companies that have issued at least one firmware update to users, please let me know in a comment here or in an email. I will follow up all suggestions and put the status on the Naughty&Nice vendorlist so please check that before suggesting a company. It would also be really useful to know the contact details (e.g. the web-form URL, or the email address) and also the model name of the device that might be updatable, although I’m happy to google myself if required. Thanks as always to Red Hat for allowing me to work on this stuff.

OARS Gets a New Home

Posted by Richard Hughes on December 07, 2017 09:50 AM

The Open Age Ratings Service is a simple website that lets you generate some content rating XML for your upstream AppData file.
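
The output is a content_rating block that you paste into the AppData file, along these lines (the attribute ids and values below are just an example):

<content_rating type="oars-1.0">
  <content_attribute id="violence-cartoon">mild</content_attribute>
  <content_attribute id="language-humor">moderate</content_attribute>
</content_rating>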

In the last few months it’s gone from being hardly used to being used multiple times an hour, probably due to the requirement that applications on Flathub need it as part of the review process. After some complaints, I’ve added a ton more explanation to each question and made it easier to use. In particular if you specify that you’re creating metadata for a “non-game” then 80% of the questions get hidden from view.

As part of the relaunch, we now have a proper issue tracker and we’ve already pushed out some minor (API-compatible) enhancements, which will become OARS v1.1. These include several cultural sensitivity questions such as:

  • Homosexuality
  • Prostitution
  • Adultery
  • Desecration
  • Slavery
  • Violence towards places of worship

The cultural sensitivity questions are work in progress. If you have any other ideas, or comments, please let me know. Also, before I get internetted-to-death, this is just for advisory purposes, not for filtering. Thanks.

UTC and Anywhere on Earth support

Posted by Bastien Nocera on December 06, 2017 04:06 PM
A quick post to tell you that we finally added UTC support to Clocks' and the Shell's World Clocks section. And if you're into it, there's also Anywhere on Earth support.

You will need to have git master versions of libgweather (our cities and timezones database), and gnome-clocks. This feature will land in GNOME 3.28.

Many thanks to Giovanni for coming up with an API he was happy with after I attempted a couple of iterations on one. Enjoy!

Update: As expected, a bug crept in. Thanks to Colin Guthrie for spotting the error in the "Anywhere on Earth" timezone. See this section for the fun we have to deal with.