Fedora People


Posted by Richard W.M. Jones on October 21, 2019 12:00 AM

How do you talk to a virtual machine from the host? How does the virtual machine talk to the host? In one sense the answer is obvious: virtual machines should be thought of just like regular machines so you use the network. However the connection between host and guest is a bit more special. Suppose you want to pass a host directory up to the guest? You could use NFS, but that’s sucky to set up and you’ll have to fiddle around with firewalls and ports. Suppose you run a guest agent reporting stats back to the hypervisor. How do they talk? Network, sure, but again that requires an extra network interface and the guest has to explicitly set up firewall rules.

A few years ago my colleague Stefan Hajnoczi ported VMware’s vsock to qemu. It’s a pure guest⟷host (and guest⟷guest) sockets API. It doesn’t use regular networks so no firewall issues or guest network configuration to worry about.

You can run NFS over vsock [PDF] if you want.

And now you can of course run NBD over vsock. nbdkit supports it, and libnbd is (currently the only!) client.
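To give a feel for how simple the API is, here is a minimal sketch (not taken from nbdkit or libnbd) of a guest-side client opening a vsock stream to the host from Python. The port number is an assumption for illustration (10809 is just the conventional NBD TCP port reused here); CID 2 is the well-known host address.

```python
import socket

# VMADDR_CID_HOST: the fixed CID that always addresses the host.
VMADDR_CID_HOST = 2

def connect_to_host(port=10809, cid=VMADDR_CID_HOST):
    # AF_VSOCK addresses are plain (CID, port) tuples -- no IP addresses,
    # no routing, no firewall or guest network configuration involved.
    sock = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
    sock.connect((cid, port))
    return sock
```

(AF_VSOCK is exposed by Python 3.7+ on Linux; the same two-integer addressing applies from C via `struct sockaddr_vm`.)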

Episode 166 - Every day should be cybersecurity awareness month!

Posted by Open Source Security Podcast on October 21, 2019 12:00 AM
Josh and Kurt talk about cybersecurity awareness month. What actionable advice can we give out? There isn't much, which is a fundamental part of the problem.


Show Notes

    Migrating from Docker to Podman

    Posted by Elliott Sales de Andrade on October 20, 2019 10:33 PM
    If you use Docker, you may or may not have already heard of Podman. It is an alternative container engine, and while I don’t have much knowledge of the details, there are a few reasons why I’m switching:

    1. Podman runs in rootless mode, i.e., it does not need a daemon running as root;
    2. Podman supports new things like cgroupsv2 (coming in Fedora 31);
    3. Docker (actually moby-engine) is difficult to keep up-to-date in Fedora (which may correlate with point 2), and people seem to complain about this (though I’ve not cared too much).

    Music with the Synthstrom Deluge

    Posted by Richard W.M. Jones on October 20, 2019 03:33 PM

    I bought a Deluge a while back, and I’ve owned synthesizers and kaossilators and all kinds of other things for years. The Deluge is several things: expensive, awkward to use, but (with practice) it can make some reasonable music. Here are some ambient tunes I’ve written with it:

    Soundscape (with Japanese TV)


    Cookie Sunday

    Sunday Bells

    I’m not going to pretend that any of this is good music, but it’s a lot of fun to make.

    Disney+ streaming uses draconian DRM, avoid

    Posted by Hans de Goede on October 20, 2019 01:23 PM
    First of all, as always my opinions are my own, not those of my employer.

    Since I have 2 children I was happy to learn that the Netherlands would be one of the first countries to get Disney+ streaming.

    So I subscribed for the testing period; the problem is that all devices in my home run Fedora. I started up Firefox and was greeted with an "Error Code 83"; next I tried Chrome, same thing.

    So I mailed the Disney helpdesk about this, explaining how Linux works fine with Netflix, AmazonPrime video and even the web-app from my local cable provider. They promised to get back to me in 24 hours; they eventually got back to me in about a week. They wrote: "We are familiar with Error 83. This often happens if you want to play Disney + via the web browser or certain devices. Our IT department working hard to solve this. In the meantime, I want to advise you to watch Disney + via the app on a phone or tablet. If this error code still occurs in a few days, you can check the help center ..." This was on September 23rd.

    So I thought, OK, they are working on this, let's give them a few days. It is almost a month later now and nothing has changed. Their so-called help center does not even know about "Error Code 83", even though the internet is full of people experiencing this. Note that this error also happens a lot on other platforms; it is not just Linux.

    Someone on tweakers.net has done some digging and this is a Widevine error: "the response is: {"errors":[{"code":"platform-verification-failed","description":"Platform verification status incompatible with security level"}]}". Widevine has 3 security levels, and many devices, including desktop Linux and many Android devices, only support level 1. In this case e.g. Netflix will not offer full HD or 4k resolutions, but otherwise everything works fine, which is a balance between DRM and usability that I can accept. Disney+ OTOH seems to have the DRM features cranked up to maximum draconian settings and simply will not work on a lot of Android devices, nor on Chromebooks, nor on desktop Linux.

    So if you care about Linux in any way, please do not subscribe to Disney+, instead send them a message letting them know that you are boycotting them until they get their Linux support in order.

    Started a newsletter

    Posted by Kushal Das on October 20, 2019 11:31 AM

    I started a newsletter, focusing on different stories I read about privacy, security, and programming in general. Following the advice from Martijn Grooten, I have been storing all the interesting links I read (for many months now). I used to share these only over Twitter, but, as I retweet many things, it was not easy to share a selected few.

    I also did not want to push them into my regular blog; I wanted a proper newsletter-over-email service. But keeping readers’ privacy was a significant factor in choosing the service. I finally decided to go with the Write.as Letters service. I am already using their open source project WriteFreely. This is an excellent excuse to use their tool more and also pay them for the fantastic tools + service.

    Feel free to subscribe to the newsletter and share the link with your friends.

    AdamW’s Debugging Adventures: “dnf is locked by another application”

    Posted by Adam Williamson on October 18, 2019 08:45 PM

    Gather round the fire, kids, it’s time for another Debugging Adventure! These are posts where I write up the process of diagnosing the root cause of a bug, where it turned out to be interesting (to me, anyway…)

    This case – Bugzilla #1750575 – involved dnfdragora, the package management tool used on Fedora Xfce, which is a release-blocking environment for the ARM architecture. It was a pretty easy bug to reproduce: any time you updated a package, the update would work, but then dnfdragora would show an error “DNF is locked by another process. dnfdragora will exit.”, and immediately exit.

    The bug sat around on the blocker list for a while; Daniel Mach (a DNF developer) looked into it a bit but didn’t have time to figure it out all the way. So I got tired of waiting for someone else to do it, and decided to work it out myself.

    Where’s the error coming from?

    As a starting point, I had a nice error message – so the obvious thing to do is figure out where that message comes from. The text appears in a couple of places in dnfdragora – in an exception handler and also in a method for setting up a connection to dnfdaemon. So, if we didn’t already know (I happened to) this would be the point at which we’d realize that dnfdragora is a frontend app to a backend – dnfdaemon – which does the heavy lifting.

    So, to figure out in more detail how we were getting to one of these two points, I hacked both the points where that error is logged. Both of them read logger.critical(errmsg). I changed this to logger.exception(errmsg). logger.exception is a very handy feature of Python’s logging module which logs whatever message you specify, plus a traceback to the current state, just like the traceback you get if the app actually crashes. So by doing that, the dnfdragora log (it logs to a file dnfdragora.log in the directory you run it from) gave us a traceback showing how we got to the error:
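    The difference between the two calls is easy to demonstrate in isolation; this is a generic sketch, not dnfdragora’s actual code:

```python
import io
import logging

# Capture log output in a string so we can inspect it.
stream = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(stream))

errmsg = "dnfdaemon client error"
try:
    raise RuntimeError("dnf is locked by another application")
except RuntimeError:
    logger.critical(errmsg)    # one line, no clue how we got here
    logger.exception(errmsg)   # same message, plus a full traceback

# stream.getvalue() now contains the message twice; the second copy is
# followed by "Traceback (most recent call last): ..." showing the call
# chain, exactly as if the program had crashed.
```

    Note that logger.exception() must be called from inside an exception handler; it logs at ERROR level and appends the active traceback automatically.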

    2019-10-14 17:53:29,436 ERROR dnfdragora dnfdaemon client error: g-io-error-quark: GDBus.Error:org.baseurl.DnfSystem.LockedError: dnf is locked by another application (36)
    Traceback (most recent call last):
    File "/usr/bin/dnfdragora", line 85, in <module>
    File "/usr/lib/python3.7/site-packages/dnfdragora/ui.py", line 1273, in handleevent
    if not self._searchPackages(filter, True) :
    File "/usr/lib/python3.7/site-packages/dnfdragora/ui.py", line 949, in _searchPackages
    packages = self.backend.search(fields, strings, self.match_all, self.newest_only, tags )
    File "/usr/lib/python3.7/site-packages/dnfdragora/misc.py", line 135, in newFunc
    rc = func(*args, **kwargs)
    File "/usr/lib/python3.7/site-packages/dnfdragora/dnf_backend.py", line 464, in search
    newest_only, tags)
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 508, in Search
    fields, keys, attrs, match_all, newest_only, tags))
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 293, in _run_dbus_async
    result = self._get_result(data)
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 277, in _get_result
    File "/usr/lib/python3.7/site-packages/dnfdaemon/client/__init__.py", line 250, in _handle_dbus_error
    raise DaemonError(str(err))
    dnfdaemon.client.DaemonError: g-io-error-quark: GDBus.Error:org.baseurl.DnfSystem.LockedError: dnf is locked by another application (36)

    So, this tells us quite a bit of stuff. We know we’re crashing in some sort of ‘search’ operation, and dbus seems to be involved. We can also see a bit more of the architecture here. Note how we have dnfdragora/dnf_backend.py and dnfdaemon/client/__init__.py included in the trace, even though we’re only in the dnfdragora executable here (dnfdaemon is a separate process). Looking at that and then looking at those files a bit, it’s quite easy to see that the dnfdaemon Python library provides a sort of framework for a client class called (oddly enough) DnfDaemonBase which the actual client – dnfdragora in our case – is expected to subclass and flesh out. dnfdragora does this in a class called DnfRootBackend, which inherits from both dnfdragora.backend.Backend (a sort of abstraction layer for dnfdragora to have multiple of these backends, though at present it only actually has this one) and dnfdaemon.client.Client, which is just a small extension to DnfDaemonBase that adds some dbus signal handling.

    So now we know more about the design we’re dealing with, and we can also see that we’re trying to do some sort of search operation which looks like it works by the client class communicating with the actual dnfdaemon server process via dbus, only we’re hitting some kind of error in that process, and interpreting it as ‘dnf is locked by another application’. If we dig a little deeper, we can figure out a bit more. We have to read through all of the backtrace frames and examine the functions, but ultimately we can figure out that DnfRootBackend.Search() is wrapped by dnfdragora.misc.ExceptionHandler, which handles dnfdaemon.client.DaemonError exceptions – like the one that’s ultimately getting raised here! – by calling the base class’s own exception_handler() on them…and for us, that’s BaseDragora.exception_handler, one of the two places we found earlier that ultimately produces this “DNF is locked by another process. dnfdragora will exit” text. We also now have two indications (the dbus error itself, and the code in exception_handler()) that the error we’re dealing with is “LockedError”.

    A misleading error…

    At this point, I went looking for the text LockedError, and found it in two files in dnfdaemon that are kinda variants on each other – daemon/dnfdaemon-session.py and daemon/dnfdaemon-system.py. I didn’t actually know offhand which of the two is used in our case, but it doesn’t really matter, because the codepath to LockedError is the same in both. There’s a function called check_lock() which checks that self._lock == sender, and if it doesn’t, raises LockedError. That sure looks like where we’re at.

    So at this point I did a bit of poking around into how self._lock gets set and unset in the daemon. It turns out to be pretty simple. The daemon is basically implemented as a class with a bunch of methods that are wrapped by @dbus.service.method, which makes them accessible as DBus methods. (One of them is Search(), and we can see that the client class’s own Search() basically just calls that). There are also methods called Lock() and Unlock(), which – not surprisingly – set and release this lock, by setting the daemon class’ self._lock to be either an identifier for the DBus client or None, respectively. And when the daemon is first initialized, the value is set to None.
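    The locking scheme just described can be boiled down to a few lines. This is a hedged simplification following the names in the post, not dnfdaemon’s actual code:

```python
class LockedError(Exception):
    pass

class DnfDaemonSketch:
    def __init__(self):
        self._lock = None              # nobody holds the lock at startup

    def Lock(self, sender):
        if self._lock is None:
            self._lock = sender        # sender: the DBus caller's unique name
            return True
        return False

    def Unlock(self, sender):
        if self._lock == sender:
            self._lock = None

    def check_lock(self, sender):
        # Passes only if *this* caller holds the lock; it fails both when
        # another client holds it and when nobody holds it at all.
        if self._lock != sender:
            raise LockedError("dnf is locked by another application")
        return True
```

    Note in particular that a freshly constructed instance rejects every caller until someone calls Lock() on it, which matters later in the story.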

    At this point, I realized that the error we’re dealing with here is actually a lie in two important ways:

    1. The message claims that the problem is the lock being held “by another application”, but that’s not what check_lock() checks, really. It passes only if the caller holds the lock. It does fail if the lock is held “by another application”, but it also fails if the lock is not held at all. Given all the code we looked at so far, we can’t actually trust the message’s assertion that something else is holding the lock. It is also possible that the lock is not held at all.
    2. The message suggests that the lock in question is a lock on dnf itself. I know dnf/libdnf do have locking, so up to now I’d been assuming we were actually dealing with the locking in dnf itself. But at this point I realized we weren’t. The dnfdaemon lock code we just looked at doesn’t actually call or wrap dnf’s own locking code in any way. This lock we’re dealing with is entirely internal to dnfdaemon. It’s really a lock on the dnfdaemon instance itself.

    So, at this point I started thinking of the error as being “dnfdaemon is either locked by another DBus client, or not locked at all”.

    So what’s going on with this lock anyway?

    My next step, now I understood the locking process we’re dealing with, was to stick some logging into it. I added log lines to the Lock() and Unlock() methods, and I also made check_lock() log what sender and self._lock were set to before returning. Since the daemon’s __init__ sets self._lock to None, I also added a log line there that just records that we’re in it. That got me some more useful information:

    2019-10-14 18:53:03.397784 XXX In DnfDaemon.init now!
    2019-10-14 18:53:03.402804 XXX LOCK: sender is :1.1835
    2019-10-14 18:53:03.407524 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    2019-10-14 18:53:07.556499 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    [...snip a bunch more calls to check_lock where the sender is the same...]
    2019-10-14 18:53:13.560828 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    2019-10-14 18:53:13.560941 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is :1.1835
    2019-10-14 18:53:16.513900 XXX In DnfDaemon.init now!
    2019-10-14 18:53:16.516724 XXX CHECK LOCK: sender is :1.1835
    XXX CHECK LOCK: self._lock is None

    So we could see that when we started dnfdragora, dnfdaemon started up and dnfdragora locked it almost immediately. Then, throughout the whole process of reproducing the bug – run dnfdragora, search for a package to be updated, mark it for updating, run the transaction, wait for the error – there were several instances of DBus method calls where everything worked fine: we see check_lock() being called and finding sender and self._lock set to the same value, the identifier for dnfdragora. But then suddenly we see the daemon’s __init__ running again for some reason, not being locked, and then a check_lock() call that fails because the daemon instance’s self._lock is None.

    After a couple of minutes, I guessed what was going on here, and the daemon’s service logs confirmed it – dnfdaemon was crashing and automatically restarting. The first attempt to invoke a DBus method after the crash and restart fails, because dnfdragora has not locked this new instance of the daemon (it has no idea it just crashed and restarted), so check_lock() fails. So as soon as a DBus method invocation is attempted after the dnfdaemon crash, dnfdragora errors out with the confusing “dnf is locked by another process” error.

    The crash was already mentioned in the bug report, but until now the exact interaction between the crash and the error had not been worked out – we just knew the daemon crashed and the app errored out, but we didn’t really know what order those things happened in or how they related to each other.

    OK then…why is dnfdaemon crashing?

    So, the question now became: why is dnfdaemon crashing? Well, the backtrace we had didn’t tell us a lot; really it only told us that something was going wrong in libdbus, which we could also tell from the dnfdaemon service log:

    Oct 14 18:53:15 adam.happyassassin.net dnfdaemon-system[226042]: dbus[226042]: arguments to dbus_connection_unref() were incorrect, assertion "connection->generation == _dbus_current_generation" failed in file ../../dbus/dbus-connection.c line 2823.
    Oct 14 18:53:15 adam.happyassassin.net dnfdaemon-system[226042]: This is normally a bug in some application using the D-Bus library.
    Oct 14 18:53:15 adam.happyassassin.net dnfdaemon-system[226042]:   D-Bus not built with -rdynamic so unable to print a backtrace

    That last line looked like a cue, so of course, off I went to figure out how to build DBus with -rdynamic. A bit of Googling told me – thanks “the3dfxdude”! – that the trick is to compile with --enable-asserts. So I did that and reproduced the bug again, and got a bit of a better backtrace. It’s a long one, but by picking through it carefully I could spot – in frame #17 – the actual point at which the problem happened, which was in dnfdaemon.server.DnfDaemonBase.run_transaction(). (Note, this is a different DnfDaemonBase class from dnfdaemon.client.DnfDaemonBase; I don’t know why they have the same name, that’s just confusing.)

    So, the daemon’s crashing on this self.TransactionEvent('end-run', NONE) call. I poked into what that does a bit, and found a design here that kinda mirrors what happens on the client side: this DnfDaemonBase, like the other one, is a framework for a full daemon implementation, and it’s subclassed by a DnfDaemon class here. That class defines a TransactionEvent method that emits a DBus signal. So…we’re crashing when trying to emit a dbus signal. That all adds up with the backtrace going through libdbus and all. But, why are we crashing?

    At this point I tried to make a small reproducer (which basically just set up a DnfDaemon instance and called self.TransactionEvent in the same way, I think) but that didn’t work – I didn’t know why at the time, but figured it out later. Continuing to trace it out through code wouldn’t be that easy because now we’re in DBus, which I know from experience is a big complex codebase that’s not that easy to just reason your way through. We had the actual DBus error to work from too – “arguments to dbus_connection_unref() were incorrect, assertion “connection->generation == _dbus_current_generation” failed” – and I looked into that a bit, but there were no really helpful leads there (I got a bit more understanding about what the error means exactly, but it didn’t help me understand *why it was happening* at all).

    Time for the old standby…

    So, being a bit stuck, I fell back on the most trusty standby: trial and error! Well, also a bit of logic. It did occur to me that the dbus broker is itself a long-running daemon that other things can talk to. So I started just wondering if something was interfering with dnfdaemon’s connection with the dbus broker, somehow. This was in my head as I poked around at stuff – wherever I wound up looking, I was looking for stuff that involved dbus.

    But to figure out where to look, I just started hacking up dnfdaemon a bit. Now this first part is probably pure intuition, but that self._reset_base() call on the line right before the self.TransactionEvent call that crashes bugged me. It’s probably just long experience telling me that anything with “reset” or “refresh” in the name is bad news. 😛 So I thought, hey, what happens if we move it?

    I stuck some logging lines into this run_transaction so I knew where we got to before we crashed – this is a great dumb trick, btw, just stick lines like self.logger('XXX HERE 1'), self.logger('XXX HERE 2') etc. between every significant line in the thing you’re debugging, and grep the logs for “XXX” – and moved the self._reset_base() call down under the self.TransactionEvent call…and found that when I did that, we got further, the self.TransactionEvent call worked and we crashed the next time something else tried to emit a DBus signal. I also tried commenting out the self._reset_base() call entirely, and found that now we would only crash the next time a DBus signal was emitted after a subsequent call to the Unlock() method, which is another method that calls self._reset_base(). So, at this point I was pretty confident in this description: “dnfdaemon is crashing on the first interaction with DBus after self._reset_base() is called”.
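    The breadcrumb trick is worth showing as a runnable sketch (step names invented for illustration): drop a marker between every significant line, then grep the log for “XXX” to see how far execution got before the crash.

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("breadcrumbs")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(stream))

def run_transaction():
    logger.info("XXX HERE 1")
    _ = "pretend to run the transaction"   # a step that works fine
    logger.info("XXX HERE 2")
    raise RuntimeError("boom")             # the step that crashes

try:
    run_transaction()
except RuntimeError:
    pass

# grep-style result: we reached "XXX HERE 2" but never a "XXX HERE 3",
# so the crash happened after marker 2.
markers = [line for line in stream.getvalue().splitlines() if "XXX" in line]
```

    Dumb, but it localizes a crash to a single line with no debugger needed.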

    So my next step was to break down what _reset_base() was actually doing. Turns out all of the detail is in the DnfDaemonBase skeleton server class: it has a self._base which is a dnf.base.Base() instance, and that method just calls that instance’s close() method and sets self._base to None. So off I went into dnf code to see what dnf.base.Base.close() does. Turns out it basically does two things: it calls self._finalize_base() and then calls self.reset(True, True, True).

    Looking at the code it wasn’t immediately obvious which of these would be the culprit, so it was all aboard the trial and error train again! I changed the call to self._reset_base() in the daemon to self._base.reset(True, True, True)…and the bug stopped happening! So that told me the problem was in the call to _finalize_base(), not the call to reset(). So I dug into what _finalize_base() does and kinda repeated this process – I kept drilling down through layers and splitting up what things did into individual pieces, and doing subsets of those pieces at a time to try and find the “smallest” thing I could which would cause the bug.

    To take a short aside…this is what I really like about these kinds of debugging odysseys. It’s like being a detective, only ultimately you know that there’s a definite reason for what’s happening and there’s always some way you can get closer to it. If you have enough patience there’s always a next step you can take that will get you a little bit closer to figuring out what’s going on. You just have to keep working through the little steps until you finally get there.

    Eventually I lit upon this bit of dnf.rpm.transaction.TransactionWrapper.close(). That was the key, as close as I could get to it: reducing the daemon’s self._reset_base() call to just self._base._priv_ts.ts = None (which is what that line does) was enough to cause the bug. That was the one thing out of all the things that self._reset_base() does which caused the problem.

    So, of course, I took a look at what this ts thing was. Turns out it’s an instance of rpm.TransactionSet, from RPM’s Python library. So, at some point, we’re setting up an instance of rpm.TransactionSet, and at this point we’re dropping our reference to it, which – point to ponder – might trigger some kind of cleanup on it.
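    That “point to ponder” is a general CPython property, illustrated here with a generic stand-in class (this is not rpm’s code): dropping the last reference to an object runs its cleanup immediately, which is how an innocent-looking assignment of None can set off C-level teardown deep inside a library.

```python
class NoisyTransactionSet:
    """Stand-in for rpm.TransactionSet with an observable destructor."""
    def __init__(self, events):
        self.events = events

    def __del__(self):
        self.events.append("cleanup ran")   # stand-in for rpm's C teardown

events = []
ts = NoisyTransactionSet(events)
ts = None   # drop the only reference, as `self._priv_ts.ts = None` does
# CPython's reference counting has already run __del__ by this point.
```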

    Remember how I was looking for things that deal with dbus? Well, that turned out to bear fruit at this point…because what I did next was simply to go to my git checkout of rpm and grep it for ‘dbus’. And lo and behold…this showed up.

    Turns out RPM has plugins (TIL!), and in particular, it has this one, which talks to dbus. (What it actually does is try to inhibit systemd from suspending or shutting down the system while a package transaction is happening). And this plugin has a cleanup function which calls something called dbus_shutdown() – aha!

    This was enough to get me pretty suspicious. So I checked my system and, indeed, I had a package rpm-plugin-systemd-inhibit installed. I poked at dependencies a bit and found that python3-dnf recommends that package, which means it’ll basically be installed on nearly all Fedora installs. Still looking like a prime suspect. So, it was easy enough to check: I put the code back to a state where the crash happened, uninstalled the package, and tried again…and bingo! The crash stopped happening.

    So at this point the case was more or less closed. I just had to do a bit of confirming and tidying up. I checked and it turned out that indeed this call to dbus_shutdown() had been added quite recently, which tied in with the bug not showing up earlier. I looked up the documentation for dbus_shutdown() which confirmed that it’s a bit of a big cannon which certainly could cause a problem like this:

    “Frees all memory allocated internally by libdbus and reverses the effects of dbus_threads_init().

    libdbus keeps internal global variables, for example caches and thread locks, and it can be useful to free these internal data structures.

    You can’t continue to use any D-Bus objects, such as connections, that were allocated prior to dbus_shutdown(). You can, however, start over; call dbus_threads_init() again, create new connections, and so forth.”

    and then I did a scratch build of rpm with the commit reverted, tested, and found that indeed, it solved the problem. So, we finally had our culprit: when the rpm.TransactionSet instance went out of scope, it got cleaned up, and that resulted in this plugin’s cleanup function getting called, and dbus_shutdown() happening. The RPM devs had intended that call to clean up the RPM plugin’s DBus handles, but this is all happening in a single process, so the call also cleaned up the DBus handles used by dnfdaemon itself, and that was enough (as the docs suggest) to cause any further attempts to communicate with DBus in dnfdaemon code to blow up and crash the daemon.

    So, that’s how you get from dnfdragora claiming that DNF is locked by another process to a stray RPM plugin crashing dnfdaemon on a DBus interaction!

    FPgM report: 2019-42

    Posted by Fedora Community Blog on October 18, 2019 07:35 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. Fedora 31 was declared No-Go. We are currently under the Final freeze.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


    Help wanted

    Upcoming meetings

    Fedora 31


    • 29 October — Final release target #1

    Blocker bugs

    Bug ID    Blocker status                Component       Bug status
    1747408   Accepted (previous release)   distribution    MODIFIED
    1728240   Accepted (Final)              sddm            POST
    1691430   Accepted (Final)              dnf             ON_QA
    1762689   Proposed (Final)              gnome-software  ON_QA
    1762751   Proposed (Final)              PackageKit      NEW

    Fedora 32



    Submitted to FESCo

    CPE update

    Community Application Handover & Retirement Updates

    • Nuancier: Maintainer(s) found. Changes discussion happening on the infrastructure mailing list
    • Fedocal: Maintainer found! The admin domain has been handed over, the CPE team are engaging with the new maintainer to fully transition, and a Taiga board has been created
    • Elections: Blocked; the PostgreSQL database is missing from the application catalogue
    • Badges: Discussion still ongoing for maintainers – please come forward if interested!
    • Pastebin: Updated to point to CentOS and updated in F30 & F31

    Other Project updates

    • Rawhide Gating: Still on track for early November release.
    • repoSpanner: An email from one of our team detailing their discoveries during a two-week performance sprint is on the infrastructure mailing list
    • JMS messaging plugin is working now; a PR to jms upstream has been submitted and is waiting for review from the upstream maintainer
    • CentOS mirror is migrated to CentOS 7 node (ansible managed), and now fully working for CentOS Stream
    • CentOS 7.7 aarch64 was retired from EPEL – it no longer works

    The post FPgM report: 2019-42 appeared first on Fedora Community Blog.

    rpminspect-0.8 released (and a new rpminspect-data-fedora)

    Posted by David Cantrell on October 18, 2019 03:25 PM
    Work on the test suite continues with rpminspect and it is finding a lot of corner-case type runtime scenarios.  Fixing those up in the code is nice.  I welcome contributions to the test suite.  You can look at the tests/test_*.py files to see what I'm doing and then work through one inspection and do the different types of checks.  Look in the lib/inspect_NAME.c file and for all of the add_result() calls to figure out what tests should exist in the test suite.  If this is confusing, feel free to reach out via email or another means and I can provide you with a list for an inspection.

    Changes in rpminspect-0.8:

    • Integration test suite continues to grow and fix problems.

    • The javabytecode inspection will report the JAR relative path as well as the path to the embedded class file when a problem is found. (#56)

    • libmandoc 1.14.5 API support. rpminspect will continue to work with 1.14.4 and previous releases and will detect which one to use at build time. The mandoc API changed completely between the 1.14.4 and 1.14.5 release. This is not entirely their fault as we are using it built as a shared library and the upstream project does not officially do that.

    • rpminspect now exits with code 2 when there is a program error. Exit code 0 means inspections passed and exit code 1 means there was at least one inspection failure. (#57)

    • If there is a Python json module exception raised in the test suite, print the inspection name, captured stdout, and captured stderr. This is meant to help debug the integration test suite.

    • Fix the Icon file check in the desktop inspection. Look at all possible icon path trees (set in rpminspect.conf). Also honor the extensionless syntax in the desktop file.

    • Fix the Exec file check in the desktop inspection so it honors arguments specified after the program name.

    • Fix a SIGSEGV when the before and/or after arguments on the command line contain ".." in the pathspec.

    • [MAJOR] Fix fundamental problems with the peer detection code. The integration test suite caught this and was leading to false results.

    • Add the IPv6 function blacklist check. The configuration file can carry a list of forbidden IPv6 functions and raise a failure if it finds any of those used.

    Changes in rpminspect-data-fedora-0.6:
    • Change bytecode version to be JDK 8
    • Add desktop_icon_paths to rpminspect.conf

    Many thanks to the contributors, reporters, and testers.  I am continuing on with the test suite work and new inspections.  Keep the reports coming in.

    New badge: Fedora 32 Change Accepted !

    Posted by Fedora Badges on October 18, 2019 01:10 PM
    Fedora 32 Change Accepted: You got a “Change” accepted into the Fedora 32 Change list.

    Letting Birds scooters fly free

    Posted by Matthew Garrett on October 18, 2019 11:44 AM
    (Note: These issues were disclosed to Bird, and they tell me that fixes have rolled out. I haven't independently verified)

    Bird produce a range of rental scooters that are available in multiple markets. With the exception of the Bird Zero[1], all their scooters share a common control board described in FCC filings. The board contains three primary components - a Nordic NRF52 Bluetooth controller, an STM32 SoC and a Quectel EC21-V modem. The Bluetooth and modem are both attached to the STM32 over serial and have no direct control over the rest of the scooter. The STM32 is tied to the scooter's engine control unit and lights, and also receives input from the throttle (and, on some scooters, the brakes).

    The pads labeled TP7-TP11 near the underside of the STM32 and the pads labeled TP1-TP5 near the underside of the NRF52 provide Serial Wire Debug, although confusingly the data and clock pins are the opposite way around between the STM and the NRF. Hooking this up via an STLink and using OpenOCD allows dumping of the firmware from both chips, which is where the fun begins. Running strings over the firmware from the STM32 revealed "Set mode to Free Drive Mode". Challenge accepted.
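    The dumping step described above can be sketched with OpenOCD roughly as follows. This is a sketch, not the author's exact procedure: the interface/target config file names, the flash size, and the output file name are assumptions that depend on the exact ST-Link adapter and STM32 part in use.

    ```shell
    # Attach to the STM32 over SWD via an ST-Link and dump internal flash.
    # target/stm32f1x.cfg and the 256K size are assumptions; adjust for the real part.
    openocd -f interface/stlink.cfg -f target/stm32f1x.cfg \
            -c "init" \
            -c "halt" \
            -c "dump_image stm32-firmware.bin 0x08000000 0x40000" \
            -c "shutdown"

    # Then go looking for interesting strings in the dump:
    strings stm32-firmware.bin | grep -i "free drive"
    ```

    0x08000000 is the standard internal-flash base address on STM32 parts, which is why the dump starts there.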

    Working back from the code that printed that, it was clear that commands could be delivered to the STM from the Bluetooth controller. The Nordic NRF52 parts are an interesting design - like the STM, they have an ARM Cortex-M microcontroller core. Their firmware is split into two halves, one the low level Bluetooth code and the other application code. They provide an SDK for writing the application code, and working through Ghidra made it clear that the majority of the application firmware on this chip was just SDK code. That made it easier to find the actual functionality, which was just listening for writes to a specific BLE attribute and then hitting a switch statement depending on what was sent. Most of these commands just got passed over the wire to the STM, so it seemed simple enough to just send the "Free drive mode" command to the Bluetooth controller, have it pass that on to the STM and win. Obviously, though, things weren't so easy.

    It turned out that passing most of the interesting commands on to the STM was conditional on a variable being set, and the code path that hit that variable had some impressively complicated looking code. Fortunately, I got lucky - the code referenced a bunch of data, and searching for some of the values in that data revealed that they were the AES S-box values. Enabling the full set of commands required you to send an encrypted command to the scooter, which would then decrypt it and verify that the cleartext contained a specific value. Implementing this would be straightforward as long as I knew the key.

    Most AES keys are 128 bits, or 16 bytes. Digging through the code revealed 8 bytes worth of key fairly quickly, but the other 8 bytes were less obvious. I finally figured out that 4 more bytes were the value of another Bluetooth variable which could be simply read out by a client. The final 4 bytes were more confusing, because all the evidence made no sense. It looked like it came from passing the scooter serial number to atoi(), which converts an ASCII representation of a number to an integer. But this seemed wrong, because atoi() stops at the first non-numeric value and the scooter serial numbers all started with a letter[2]. It turned out that I was overthinking it and for the vast majority of scooters in the fleet, this section of the key was always "0".

    At that point I had everything I needed to write a simple app to unlock the scooters, and it worked! For about 2 minutes, at which point the network would notice that the scooter was unlocked when it should be locked and send a lock command to force disable the scooter again. Ah well.

    So, what else could I do? The next thing I tried was just modifying some STM firmware and flashing it onto a board. It still booted, indicating that there was no sort of verified boot process. Remember what I mentioned about the throttle being hooked through the STM32's analogue to digital converters[3]? A bit of hacking later and I had a board that would appear to work normally, but about a minute after starting the ride would cut the throttle. Alternative options are left as an exercise for the reader.

    Finally, there was the component I hadn't really looked at yet. The Quectel modem actually contains its own application processor that runs Linux, making it significantly more powerful than any of the chips actually running the scooter application[4]. The STM communicates with the modem over serial, sending it an AT command asking it to make an SSL connection to a remote endpoint. It then uses further AT commands to send data over this SSL connection, allowing it to talk to the internet without having any sort of IP stack. Figuring out just what was going over this connection was made slightly difficult by virtue of all the debug functionality having been ripped out of the STM's firmware, so in the end I took a more brute force approach - I identified the address of the function that sends data to the modem, hooked up OpenOCD to the SWD pins on the STM, ran OpenOCD's gdb stub, attached gdb, set a breakpoint for that function and then dumped the arguments being passed to that function. A couple of minutes later and I had a full transaction between the scooter and the remote.

    The scooter authenticates against the remote endpoint by sending its serial number and IMEI. You need to send both, but the IMEI didn't seem to need to be associated with the serial number at all. New connections seemed to take precedence over existing connections, so it would be simple to just pretend to be every scooter and hijack all the connections, resulting in scooter unlock commands being sent to you rather than to the scooter or allowing someone to send fake GPS data and make it impossible for users to find scooters.

    In summary: secrets that are stored on hardware that attackers can run arbitrary code on probably aren't secret; not having verified boot on safety-critical components isn't ideal; and devices should have meaningful cryptographic identity when authenticating against a remote endpoint.

    Bird responded quickly to my reports, accepted my 90 day disclosure period and didn't threaten to sue me at any point in the process, so good work Bird.

    (Hey scooter companies I will absolutely accept gifts of interesting hardware in return for a cursory security audit)

    [1] And some very early M365 scooters
    [2] The M365 scooters that Bird originally deployed did have numeric serial numbers, but they were 6 characters of type code followed by a / followed by the actual serial number - the number of type codes was very constrained and atoi() would terminate at the / so this was still not a large keyspace
    [3] Interestingly, Lime made a different design choice here and plumb the controls directly through to the engine control unit without the application processor having any involvement
    [4] Lime run their entire software stack on the modem's application processor, but because of [3] they don't have any realtime requirements so this is more straightforward


    Managing user accounts with Cockpit

    Posted by Fedora Magazine on October 18, 2019 08:00 AM

    This is the latest in a series of articles on Cockpit, the easy-to-use, integrated, glanceable, and open web-based interface for your servers. In the first article, we introduced the web user interface. The second and third articles focused on how to perform storage and network tasks respectively.

    This article demonstrates how to create and modify local accounts. It also shows you how to install the 389 Directory Server add-on (or plugin). Finally, you’ll see how 389 DS integrates into the Cockpit web service.

    Managing local accounts

    To start, click the Accounts option in the left column. The main screen provides an overview of local accounts. From here, you can create a new user account, or modify an existing account.

    <figure class="wp-block-image">Accounts screen overview in Cockpit<figcaption>Accounts screen overview in Cockpit</figcaption></figure>

    Creating a new account in Cockpit

    Cockpit gives sysadmins the ability to easily create a basic user account. To begin, click the Create New Account button. A box appears, requesting basic information such as the full name, username, and password. It also provides the option to lock the account. Click Create to complete the process. The example below creates a new user named Demo User.

    <figure class="wp-block-image">Creating a local account in Cockpit<figcaption>Creating a local account in Cockpit</figcaption></figure>

    Managing accounts in Cockpit

    Cockpit also provides basic management of local accounts. Some of the features include elevating the user’s permissions, password expiration, and resetting or changing the password.

    Modifying an account

    To modify an account, go back to the Accounts page and select the user you wish to modify. Here, we can change the full name and elevate the user's role to Server Administrator, which adds the user to the wheel group. It also includes options for access and passwords.

    The Access options allow admins to lock the account. Clicking Never lock account will open the “Account Expiration” box. From here we can choose to Never lock the account, or to lock it on a scheduled date.

    Password management

    Admins can choose to Set password and Force Change. The first option prompts you to enter a new password. The second option forces users to create a new password the next time they log in.

    Selecting the Never change password option opens a box with two options. The first is Never expire the password. This allows the user to keep their password without the need to change it. The second option is Require Password change every … days. This determines the number of days a password can be used before it must be changed.

    Adding public keys

    We can also add public SSH keys from remote computers for password-less authentication. This is equivalent to the ssh-copy-id command. To start, click the Add Public Key (+) button. Then copy the public key from a remote machine and paste it into the box.
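    For reference, generating a key pair and printing the public half that you would paste into Cockpit looks like this (the file name below is just an example, not anything Cockpit requires):

    ```shell
    # Generate an ed25519 key pair with no passphrase (example file name)
    ssh-keygen -t ed25519 -f ./cockpit_demo_key -N ""

    # The contents of the .pub file are what gets pasted into Cockpit's box
    cat ./cockpit_demo_key.pub
    ```

    Outside of Cockpit, `ssh-copy-id user@host` would append that same public key to the remote account's authorized_keys.
    
    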

    To remove the key, click the remove (-) button to the right of the key.

    Terminating the session and deleting an account

    Near the top right-corner are two buttons: Terminate Session, and Delete. Clicking the Terminate Session button immediately disconnects the user. Clicking the Delete button removes the user and offers to delete the user’s files with the account.

    <figure class="wp-block-image">Modifying and deleting a local account with Cockpit<figcaption>Modifying and deleting a local account with Cockpit</figcaption></figure>

    Managing 389 Directory Server

    Cockpit has a plugin for managing 389 Directory Server. To add the 389 Directory Server UI, run the following command using sudo:

    $ sudo dnf install cockpit-389-ds

    Cockpit provides detailed control over 389 Directory Server's enormous number of settings. Some of these settings include:

    • Server Settings: Options for server configuration, tuning & limits, SASL, password policy, LDAPI & autobind, and logging.
    • Security: Enable/disable security, certificate management, and cipher preferences.
    • Database: Configure the global database, chaining, backups, and suffixes.
    • Replication: Pertains to agreements, Winsync agreements, and replication tasks.
    • Schema: Object classes, attributes, and matching rules.
    • Plugins: Provides a list of plugins associated with 389 Directory Server. Also gives admins the opportunity to enable/disable, and edit the plugin.
    • Monitoring: Shows database performance stats, such as the DB cache hit ratio and the normalized DN cache. Admins can also configure the number of tries and hits. Furthermore, it provides server stats and SNMP counters.

    Due to the abundance of options, going through the details for 389 Directory Server is beyond the scope of this article. For more information regarding 389 Directory Server, visit their documentation site.

    <figure class="wp-block-image">Managing 389 DS with Cockpit<figcaption>Managing 389 Directory Server with Cockpit</figcaption></figure>

    As you can see, admins can perform quick and basic user management tasks. The most noteworthy feature, however, is the in-depth functionality of the 389 Directory Server add-on.

    The next article will explore how Cockpit handles software and services.

    Photo by Daniil Vnoutchkov on Unsplash.

    Foliate - A simple and modern ebook viewer for linux

    Posted by Robbi Nespu on October 18, 2019 12:31 AM

    Looking for the best e-book viewer on Linux? Then use Foliate! This is my favourite e-book viewer!!

    Foliate supports .epub, .mobi, .azw, and .azw3 files. It also has a few theme modes for you, such as light, dark, sepia and inverted.

    How to install? Luckily, they also release distribution packages for Fedora (sudo dnf install foliate) and for Arch and Void Linux (xbps-install -S foliate). DEB packages for Debian-based distributions such as Ubuntu can be downloaded from the latest release page. For other distributions, just download the source code and build it yourself, or simply install it from Flatpak.
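    If you go the Flatpak route, the commands would look roughly like this; the application ID below is what I believe Foliate uses on Flathub, so verify it there first:

    ```shell
    # Install Foliate from Flathub, then launch it
    flatpak install flathub com.github.johnfactotum.Foliate
    flatpak run com.github.johnfactotum.Foliate
    ```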

    I really like the stylish interface and have a more fun experience compared to other viewers. Two-page view, scrolled view, a metadata viewer and reading progress are the features that make me happy using this software.

    There, I hope you will like this software too. Adios!

    libinput and tablet pad keys

    Posted by Peter Hutterer on October 17, 2019 11:23 PM

    Upcoming in libinput 1.15 is a small feature to support Wacom tablets a tiny bit better. If you look at the higher-end devices in Wacom's range, e.g. the Cintiq 27QHD, you'll notice that at the top right of the device are three hardware buttons with icons. Those buttons are intended to open the config panel, the on-screen display or the virtual keyboard. They've been around for a few years and supported in the kernel for a few releases. But in userspace, the events from those keys were ignored, cast out into the wild before eventually running out of electrons and succumbing to misery. Well, that's all changing now with a new interface being added to libinput to forward those events.

    Step back a second and let's look at the tablet interfaces. We have one for tablet tools (styli) and one for tablet pads. For the latter, we have events for rings, strips and buttons. The buttons are simply numerically ordered, so button 1 is simply button 1 with no special meaning. Anything more specific needs to be handled by the compositor/client side, which is responsible for assigning e.g. keyboard shortcuts to those buttons.

    The special keys however are different, they have a specific function indicated by the icon on the key itself. So libinput 1.15 adds a new event type for tablet pad keys. The events look quite similar to the button events but they have a linux/input-event-codes.h specific button code that indicates what they are. So the compositor can start the OSD, or control panel, or whatever directly without any further configuration required.

    This interface hasn't been merged yet, it's waiting for the linux kernel 5.4 release which has a few kernel-level fixes for those keys.

    libinput and button scrolling locks

    Posted by Peter Hutterer on October 17, 2019 10:56 PM

    For a few years now, libinput has provided button scrolling. Holding a designated button down and moving the device up/down or left/right creates the matching scroll events. We enable this behaviour by default on some devices (e.g. trackpoints) but it's available on mice and some other devices. Users can change the button that triggers it, e.g. assign it to the right button. There are of course a couple of special corner cases to make sure you can still click that button normally but as I said, all this has been available for quite some time now.

    New in libinput 1.15 is the button lock feature. The button lock removes the need to hold the button down while scrolling. When the button lock is enabled, a single button click (i.e. press and release) of that button holds that button logically down for scrolling and any subsequent movement by the device is translated to scroll events. A second button click releases that button lock and the device goes back to normal movement. That's basically it, though there are some extra checks to make sure the button can still be used for normal clicking (you will need to double-click for a single logical click now though).

    This is primarily an accessibility feature and is likely to find its way into the GUI tools under the accessibility headers.

    Riddle me this

    Posted by Benjamin Otte on October 17, 2019 10:46 PM

    Found this today while playing around, thought people might enjoy this riddle.

    $> cat test.c
    typedef int foo;
    int main()
    {
      foo foo = 1;
      return (foo) +0;
    }
    $> gcc -Wall -o test test.c && ./test && echo $?

    What does this print?

    1. 0
    2. 1
    3. Some compilation warnings, then 0.
    4. Some compilation warnings, then 1.
    5. It doesn’t compile.

    I’ll put an answer in the comments.

    IBus 1.5.21 is released

    Posted by Takao Fujiwara on October 17, 2019 08:47 AM

    IBus 1.5.21 is now released and available in Fedora 31.

    # dnf update ibus

    This release enhances the IBus compose features. Previously, the maximum length of a compose key sequence was 7. Also, the output was limited to a single 16-bit character, so the latest emoji characters and custom long compose sequences were not supported.
    The following is a demo.

    <iframe allowfullscreen="true" class="youtube-player" height="349" src="https://www.youtube.com/embed/S0iFQTrBles?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" type="text/html" width="425"></iframe>

    IBus can load $HOME/.config/ibus/Compose, $HOME/.config/gtk-3.0/Compose, or $HOME/.XCompose, and saves the cache files in $HOME/.cache/ibus/compose/
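    As a sketch of what such a user compose file can contain, here are a couple of made-up example entries in the standard X Compose syntax (the key sequence and output are hypothetical, not anything IBus ships):

    ```
    # Pull in the system default compose table first
    include "%L"

    # Multi_key followed by h then i outputs a waving-hand emoji
    <Multi_key> <h> <i> : "👋"
    ```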

    You can customize the compose key with gnome-tweaks on the GNOME desktop, with setxkbmap -option on Xorg desktops, or with any similar utility on other desktops.

    Currently the IBus XKB engines and a few language engines support the IBus compose features. To enable IBus in text applications on the GNOME desktop, you need to enable at least one IBus engine, such as ibus-typing-booster or ibus-m17n, using the Region & Language panel of gnome-control-center. Otherwise GtkIMContextSimple is used and the compose feature is not available. On non-GNOME desktops, you can use any IBus engine by default or customize them with the `ibus-setup` command.

    Also, ibus-daemon now exits when its parent program dies.

    IBus also provides an ibus.its file, which lets translation tools extract the "longname" and "description" tags from the IBus component files in /usr/share/ibus/component/ for i18n.

    ibus-anthy 1.5.11 and anthy-unicode are released

    Posted by Takao Fujiwara on October 17, 2019 08:06 AM

    ibus-anthy 1.5.11 is released and available in Fedora 30 or later.
    # dnf update ibus-anthy

    The default input mode is now Eisu (direct) mode instead of Hiragana mode.

    Eisu mode can now load a user compose file, either $HOME/.config/ibus/Compose or $HOME/.XCompose, in addition to the system compose files, which are already loaded.

    The emoji dictionary is updated for emoji 12.0 beta.

    The ibus-anthy build now uses gettext instead of intltool.

    This release now supports anthy-unicode, which converts the internal EUC-JP data to UTF-8 and enhances some functions. The ibus-anthy build detects /usr/lib*/pkgconfig/anthy-unicode.pc for anthy-unicode, or /usr/lib*/pkgconfig/anthy.pc for anthy. Note that anthy-unicode is still an unofficial, testing release.

    Rclone to GDrive

    Posted by Paul Mellors [MooDoo] on October 17, 2019 07:54 AM
    I have 2 TB of storage space with Google, so I wanted to sync the files from my Fedora 30 installation to GDrive. I didn't want to have to drag and drop on a Chrome window or click the upload button; I wanted to fire and forget.

    With this in mind I discovered rclone [https://rclone.org/], it's basically rsync for cloud storage.  I set mine up like this.

    dnf install rclone

    rclone config
    This will allow you to setup the cloud connection

    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config
    n/s/q> n
    name> GDrive
    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / 1Fichier
       \ "fichier"
     2 / Alias for an existing remote
       \ "alias"
     3 / Amazon Drive
       \ "amazon cloud drive"
     4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
       \ "s3"
     5 / Backblaze B2
       \ "b2"
     6 / Box
       \ "box"
     7 / Cache a remote
       \ "cache"
     8 / Dropbox
       \ "dropbox"
     9 / Encrypt/Decrypt a remote
       \ "crypt"
    10 / FTP Connection
       \ "ftp"
    11 / Google Cloud Storage (this is not Google Drive)
       \ "google cloud storage"
    12 / Google Drive
       \ "drive"
    13 / Google Photos
       \ "google photos"
    14 / Hubic
       \ "hubic"
    15 / JottaCloud
       \ "jottacloud"
    16 / Koofr
       \ "koofr"
    17 / Local Disk
       \ "local"
    18 / Mega
       \ "mega"
    19 / Microsoft Azure Blob Storage
       \ "azureblob"
    20 / Microsoft OneDrive
       \ "onedrive"
    21 / OpenDrive
       \ "opendrive"
    22 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
       \ "swift"
    23 / Pcloud
       \ "pcloud"
    24 / Put.io
       \ "putio"
    25 / QingCloud Object Storage
       \ "qingstor"
    26 / SSH/SFTP Connection
       \ "sftp"
    27 / Union merges the contents of several remotes
       \ "union"
    28 / Webdav
       \ "webdav"
    29 / Yandex Disk
       \ "yandex"
    30 / http Connection
       \ "http"
    31 / premiumize.me
       \ "premiumizeme"
    Storage> 12
    ** See help for drive backend at: https://rclone.org/drive/ **

    Google Application Client Id
    Setting your own is recommended.
    See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
    If you leave this blank, it will use an internal key which is low performance.
    Enter a string value. Press Enter for the default ("").
    Google Application Client Secret
    Setting your own is recommended.
    Enter a string value. Press Enter for the default ("").
    Scope that rclone should use when requesting access from drive.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
     1 / Full access all files, excluding Application Data Folder.
       \ "drive"
     2 / Read-only access to file metadata and file contents.
       \ "drive.readonly"
       / Access to files created by rclone only.
     3 | These are visible in the drive website.
       | File authorization is revoked when the user deauthorizes the app.
       \ "drive.file"
       / Allows read and write access to the Application Data folder.
     4 | This is not visible in the drive website.
       \ "drive.appfolder"
       / Allows read-only access to file metadata but
     5 | does not allow any access to read or download file content.
       \ "drive.metadata.readonly"
    scope> 1
    ID of the root folder
    Leave blank normally.
    Fill in to access "Computers" folders. (see docs).
    Enter a string value. Press Enter for the default ("").
    Service Account Credentials JSON file path
    Leave blank normally.
    Needed only if you want use SA instead of interactive login.
    Enter a string value. Press Enter for the default ("").
    Edit advanced config? (y/n)
    y) Yes
    n) No
    y/n> n
    Remote config
    Use auto config?
     * Say Y if not sure
     * Say N if you are working on a remote or headless machine
    y) Yes
    n) No
    y/n> y
    If your browser doesn't open automatically go to the following link: <redacted>

    Log in and authorize rclone for access
    Waiting for code...

    At this point a web browser should open and you need to sign in to Google and authorise the app

    Got code
    Configure this as a team drive?
    y) Yes
    n) No

    You then get token information and a request to confirm everything is OK; you can then quit the config and all should be ok.

    I then use the command below; there are a large number of options, but these worked for me.

    rclone sync /home/paulmellors/Pictures GDrive:Pictures --progress --tpslimit 10 --bwlimit 900K

    rclone sync <what you want to sync> <the connection>:<remote folder> --progress (show progress) --tpslimit (limit HTTP transactions per second to this) --bwlimit (limit the bandwidth to this)

    Seems to be working so far with my 400GB of photos :) 
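    If you're nervous about letting a big sync loose, rclone's --dry-run flag shows what would be transferred or deleted without actually touching anything; a sketch using the same paths as above:

    ```shell
    # Preview the sync without transferring or deleting anything
    rclone sync /home/paulmellors/Pictures GDrive:Pictures --dry-run --progress
    ```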

    Contribute to Fedora Magazine

    Posted by Fedora Magazine on October 16, 2019 08:00 AM

    Do you love Linux and open source? Do you have ideas to share, enjoy writing, or want to help run a blog with over 60k visits every week? Then you’re at the right place! Fedora Magazine is looking for contributors. This article walks you through various options of contributing and guides you through the process of becoming a contributor.

    There are three main areas of contribution:

    1. Proposing ideas
    2. Writing articles
    3. Keeping it all running

    Proposing ideas

    Everything starts with an idea. We discuss ideas and how to turn them into articles that are interesting and useful to the Magazine’s audience.

    Everyone is welcome to submit an idea. It can be a very specific article proposal, or really just an idea. The Editorial Board discusses each proposal and decides about the next step.

    Many ideas are turned into a so-called Article Spec, which is a specific description of an article to be written for the Magazine. It usually describes the desired structure and other aspects of the article.

    By submitting a proposal you’re not automatically committing to write it. It’s a separate step by design. But, of course, you’re very welcome to do both!

    Submit an idea by opening an issue in our issue tracker. To do that, you’ll need a FAS (Fedora Account System) account.

    See the docs on proposing articles for more info.

    Writing articles

    If you enjoy writing, you’re welcome to write for the Magazine! Being a good writer doesn’t necessarily mean that you also need to come up with the topic — we have a list of article specs ready to be written.

    The Editorial Board maintains a Kanban board with cards representing specific articles. Each article starts as an Article Spec, and goes through various states to the very end when it’s published. Each column on the board represents a state.

    If you want to write an article, just pick any card in the Article Spec column you like. First, assign yourself to the card of your choice, and move it to the In Progress column. That’s how you indicate to the rest of the community you’re working on it. Writing itself is done in the Magazine WordPress — log in, click new/post at the very top, and start writing.

    We strongly encourage writers to read the Tips for Writers page in the docs.

    Once you’re done writing, paste the preview URL from WordPress into the card. (You can get it using the Preview button at the top-right in the WordPress editor.) Then move the card to the Review column. An editor then reviews and moves it forward.

    In some cases, an editor might ask for certain changes. When that happens, the card is moved back to In Progress. All you need to do is to make those changes and move it to the Review column again.

    If you’re a first-time contributor, you’ll need to get access to WordPress and Taiga first. Start by introducing yourself on the Fedora Magazine mailing list, and an editor will set everything up for you.

    See what article specs are ready to be written in the Article Spec column and you can just start writing.

    Also, you can see the docs on writing articles for more info.

    Becoming an editor

    Looking for a longer-term contribution to the Magazine? Perhaps by setting the publishing schedule every week, reviewing ideas, editing articles, and attending the regular meeting? Become a member of the Editorial Board!

    There are a few ways to start:

    Help review ideas

    The easiest way to start might be reviewing ideas and turning them into article specs. Provide feedback, suggest what should be included, and help decide what the article should look like overall. To do that, simply go to the issue tracker and start commenting.

    Sometimes, we also receive ideas on the mailing list. Engaging with people on the mailing list is also a good way to contribute.

    Attend the Editorial meeting

    The Fedora Magazine editorial meeting is where we set the publishing schedule for the next week, and discuss various ideas regarding the Magazine.

    You are very welcome to just attend one of the Editorial meetings we have. Just say hi, and maybe volunteer to edit an article (read below), create an image (read below), or even write something when we’re short on content. 

    Article reviews

    When there is any card in the Review column on the board, that means the writer is asking for a final review. You can read their article and put a comment in the card with what you think about it. You might say it looks great, or point out specific things you believe should be changed (although that is rare).

    Design a cover image

    Every article published on the Magazine has a cover image. If you enjoy making graphics, you can contribute some. See the cover image guidelines in our docs for more info, and either ask on the list or come to one of our editorial meetings to get assigned one. 


    Fedora Magazine is a place to share useful and interesting content with people who love Linux by people who love Fedora. And it all happens thanks to people contributing ideas, writing articles, and helping to keep the Magazine running. If you like the idea of Fedora being popular, or people using open source, Fedora Magazine is a great place for anyone to discover and learn about all of that. Join us and be a part of Fedora’s success!

    Planet Fedora

    Posted by Paul Mellors [MooDoo] on October 16, 2019 07:35 AM
    I just wanted to post this onto the planet, it's a test really to see if I've edited my .planet correctly.  Wasn't sure if the last post worked.

    Nothing else here yet, move along :)

    libinput's bus factor is 1

    Posted by Peter Hutterer on October 16, 2019 05:56 AM

    A few weeks back, I was at XDC and gave a talk about various current and past input stack developments (well, a subset thereof anyway). One of the slides pointed out libinput's bus factor and I'll use this blog to make this a bit more widely known.

    If you don't know what the bus factor is, Wikipedia defines it as:

    The "bus factor" is the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel.
    libinput has a bus factor of 1.

    Let's arbitrarily pick the 1.9.0 release (roughly 2 years ago) and look at the numbers: of the ~1200 commits since 1.9.0, just under 990 were done by me. In those 2 years we had 76 contributors in total, of whom only 24 have more than one commit and only 6 have more than 5 commits. The numbers don't really change much even if we go all the way back to 1.0.0 in 2015. These numbers do not include the non-development work: release maintenance for new releases and point releases, reviewing CI failures [1], writing documentation (including the stuff on this blog), testing and bug triage. Right now, this is effectively all done by one person.

    This is... less than ideal. At this point libinput is more-or-less the only input stack we have [2] and all major distributions rely on it. It drives mice, touchpads, tablets, keyboards, touchscreens, trackballs, etc. so basically everything except joysticks.

    Anyway, I'm largely writing this blog post in the hope that someone gets motivated enough to dive into this. Right now, if you get 50 patches into libinput you get the coveted second-from-the-top spot, with all the fame and fortune that entails (i.e. little to none, but hey, underdogs are big in popular culture). Short of that, any help with building an actual community would be appreciated too.

    Either way, lest it be said that no-one saw it coming, let's ring the alarm bells now before it's too late. Ding ding!

    [1] Only as of a few days ago can we run the test suite as part of the CI infrastructure, thanks to Benjamin Tissoires. Previously it was run on my laptop and virtually nowhere else.
    [2] fyi, xf86-input-evdev: 5 patches in the same timeframe, xf86-input-synaptics: 6 patches (but only 3 actual changes) so let's not pretend those drivers are well-maintained.

    Τι κάνεις FOSSCOMM 2019

    Posted by Julita Inca Chiroque on October 16, 2019 02:29 AM

    Thanks to the sponsorship of Fedora, I was able to travel to Lamia, Greece from October 10 to October 14 to attend FOSSCOMM (Free and Open Source Software Communities Meeting), the pan-Hellenic conference of free and open source software communities.

    Things I did in the event:

    1.- Set up a Fedora booth

    I arranged the booth during the first hours after I arrived in Lamia. The event registration started at 4:00 p.m., and thanks to the help of enthusiastic volunteers and Alex Angelo (whom I met at GUADEC 2019), the booth was ready to go from the first day of the event.

    The Fedora project sent swag directly to the University of Central Greece, and I created my own handmade decoration. I used Fedora and GNOME balloons to make a nice booth 🙂 Thanks to the tools provided by the university, I was able to finish what I had in mind:

    2.- Spread the Fedora word

    When students visited our Fedora booth, they were excited to take some Fedora gifts, especially the tattoo sticker. I asked how many of them used Fedora; most were using Ubuntu, Linux Mint, Kali Linux, or Elementary OS. It was an opportunity to share the Fedora 30 edition and hand out the beginner’s guide that the Fedora community wrote as a little book. Most of them enjoyed taking photos with the Linux frame I made in Edinburgh 💙 Alex also shared his Linux knowledge at our Fedora booth.

    3.- Give a keynote about Linux on supercomputers

    I was invited to the conference to give a talk about Linux on supercomputers. Only 9 out of 42 attendees were non-Linux users, and I am glad they all came to learn what is going on in the supercomputing world, which runs on Linux. I started by asking questions about Linux in general, and some Linux users were able to answer part of the questions, but not all of them. Professor Thanos told me that Greece has a supercomputer called Aris, and the students were aware of GPU technologies. When I asked a question about GPUs, a female student answered correctly about their use, and so she won the event t-shirt I offered as a prize to the audience. You can see my entire talk in the live streaming video.

    4.- Run a workshop on GTK in C

    I was planning to teach the GTK library with C, Python, and Vala. However, because of the time and the attendees’ preference, we only worked with C. The workshop was supported by Alex Angelo, who also translated some of my expressions into Greek. I was flexible about attendees using different operating systems such as Linux Mint, Ubuntu, and Kubuntu, among other distros. Only two attendees used Fedora. Almost half of the audience did not bring a laptop, so I arranged people into groups to work together. I enjoyed seeing young students eager to learn; they took their own notes and asked questions. You can see the video of the workshop, which was recorded by the organizers.

    My feelings about the event:

    The agenda of the event was so interesting that I was quite sad I could not attend most talks; I had to take care of the booth, and most of the talks were in Greek. As you can see in the pictures, a variety of technical talks were given by women. I was impressed by the Greek women: they are well prepared, and most of them are self-taught in Linux and in trending technologies such as IoT, security, programming, and bio-science.

    The authorities supported this kind of Linux event, and I think that was an important factor in its success. Miss Catherine and Mister Thanos were pictured with minorities; women and kids were very excited to be part of FOSSCOMM 2019. Additionally, the local government also supported the event. Here is a post in the magazine.

    Greek people are warm and happy.  Thank you so much to everyone for the kindness!

    Food for everyone

    I was surprised by the schedules: each day started at 8:00 a.m. and the talks finished at 8:00 p.m. The lunch break was set at 2:30 p.m., and a local told me that for breakfast they usually just have a cup of coffee. We had a very delicious and hearty dinner on the first day of the event with the professors of the Informatics and Biology departments of the University of Central Greece. Free lunch and coffee breaks were carefully served to everyone. I enjoyed Greek food; we had a variety of salads and sweets.

    Tourist places I visited

    I only had a few hours before leaving Lamia, but I had time to visit the castle and the museum, where I learned more about the different ancient eras and legends of Greece.

    Special Thanks

    Thanks to Alex for being my local guide during the whole event! Thanks to Iris for the welcome, to Argiris for the invitation and the t-shirt he promised me, and to Kath for being so nice in the thousand pictures we took, for the tourist guidance, and for her help.

    Thanks to Stathis, who encouraged me to apply to FOSSCOMM, and to each volunteer for their help and all the effort they put in; I know most of them live an hour and a half from the university. Thanks again to Fedora for the travel sponsorship!

    Libosinfo (Part I)

    Posted by Fabiano Fidêncio on October 16, 2019 12:00 AM

    This is the first blog post of a series which will cover Libosinfo, what it is, who uses it, how it is used, how to manage it, and, finally, how to contribute to it.

    A quick overview

    Libosinfo is the operating system information database. As a project, it consists of three different parts, with the goal of providing a single place containing all the required information about an operating system in order to provision and manage it in a virtualized environment.

    The project allows management applications to:

    • Automatically identify which operating system an ISO image or an installation tree is intended for;

    • Find the download location of installable ISO and LiveCD images;

    • Find the location of installation trees;

    • Query the minimum, recommended, and maximum CPU / memory / disk resources for an operating system;

    • Query the hardware supported by an operating system;

    • Generate scripts suitable for automating “Server” and “Workstation” installations;

    The library (libosinfo)

    The library API is written in C, taking advantage of GLib and GObject. Thanks to GObject Introspection, the API is automatically available in all dynamic programming languages with bindings for GObject (JavaScript, Perl, Python, and Ruby). Auto-generated bindings for Vala are also provided.

    As part of libosinfo, three tools are provided:

    • osinfo-detect: Used to detect an Operating System from a given ISO or installation tree.

    • osinfo-install-script: Used to generate a “Server” or “Workstation” install-script to perform automated installation of an Operating System;

    • osinfo-query: Used to query information from the database;

    The database (osinfo-db)

    The database is written in XML and it can either be consumed via libosinfo APIs or directly via management applications’ own code.

    It contains information about the operating systems, devices, installation scripts, platforms, and datamaps (keyboard and language mappings for Windows and Linux OSes).
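
    As a quick illustration of how a consumer might read such XML directly, here is a minimal Python sketch using only the standard library. The database fragment below is made up to resemble an osinfo-db entry; the element names are simplified and are not the exact osinfo schema:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only -- loosely modeled on an osinfo-db entry,
# NOT the exact osinfo schema (real entries use XML namespaces).
OS_XML = """
<os id="http://example.org/os/demos/1.0">
  <name>DemOS</name>
  <resources arch="x86_64">
    <minimum><ram>1073741824</ram><storage>10737418240</storage></minimum>
    <recommended><ram>2147483648</ram><storage>21474836480</storage></recommended>
  </resources>
</os>
"""

def min_ram_bytes(xml_text):
    """Return the minimum RAM, in bytes, declared for an OS entry."""
    root = ET.fromstring(xml_text)
    return int(root.findtext("resources/minimum/ram"))

print(min_ram_bytes(OS_XML))  # 1073741824, i.e. 1 GiB
```

    In practice, management applications would normally go through the libosinfo API (or its GObject Introspection bindings) rather than parsing the XML themselves.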

    The database tools (osinfo-db-tools)

    These are tools that can be used to manage the database, which is distributed as a tarball archive.

    • osinfo-db-import: Used to import an osinfo database archive;

    • osinfo-db-export: Used to export an osinfo database archive;

    • osinfo-db-validate: Used to validate the XML files in one of the osinfo database locations for compliance with the RNG schema;

    • osinfo-db-path: Used to report the paths associated with the standard database locations;

    The consumers …

    Libosinfo and osinfo-db have management applications as their target audience. Currently the libosinfo project is consumed by big players in the virtual machine management environment such as OpenStack Nova, virt-manager, GNOME Boxes, and Cockpit Machines.

    … a little bit about them …

    • OpenStack Nova: An OpenStack project that provides a way to provision virtual machines, bare-metal servers, and (with limited support) system containers.

    • virt-manager: An application for managing virtual machines through libvirt.

    • GNOME Boxes: A simple application to view, access, and manage remote and virtual systems.

    • Cockpit Machines: A Cockpit extension to manage virtual machines running on the host.

    … and why they use it

    • Download ISOs: As libosinfo provides the ISO URLs, management applications can offer the user the option to download a specific operating system;

    • Automatically detect the ISO being used: As libosinfo can detect the operating system of an ISO, management applications can use this info to set reasonable default values for resources, to select the hardware supported, and to perform unattended installations.

    • Start tree installation: As libosinfo provides the tree installation URLs, management applications can use it to start a network-based installation without having to download the whole operating system ISO;

    • Set reasonable default values for RAM, CPU, and disk resources: As libosinfo knows the values that are recommended by the operating system’s vendors, management applications can rely on that when setting the default resources for an installation.

    • Automatically set the hardware supported: As libosinfo provides the list of hardware supported by an operating system, management applications can choose the best defaults based on this information, without taking the risk of ending up with a non-bootable guest.

    • Unattended install: as libosinfo provides unattended installations scripts for CentOS, Debian, Fedora, Fedora Silverblue, Microsoft Windows, OpenSUSE, Red Hat Enterprise Linux, and Ubuntu, management applications can perform unattended installations for both “Workstation” and “Server” profiles.

    What’s next?

    The next blog post will provide a “demo” of an unattended installation using both GNOME Boxes and virt-install and, based on that, explain how libosinfo is internally used by these projects.

    By doing that, we’ll both cover how libosinfo can be used and also demonstrate how it can ease the usage of those management applications.

    Cockpit 205

    Posted by Cockpit Project on October 16, 2019 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 205.

    Firewall: UI restructuring

    The firewall page was redesigned. Instead of having separate listings for services and zones, the services are now listed per zone. This aims to make the relationship between zones and services clearer.

    Firewall Redesign

    Machines: Refactor Create VM dialog and introduce a download option

    A guest operating system can now be downloaded automatically by only selecting its name. Memory and storage size will default to recommended values for the selected OS.

    Create VM dialog

    Adjust menu to PatternFly’s current navigation design

    The pages menu now has a dark theme, the recommended current design from PatternFly after a user study.

    Searching with keywords

    Enable searching by page names and keywords. This also works with translated page names and translated keywords. Searching by page content is not available yet.

    Dark navigation

    Software Updates: Use notifications for available updates info

    Cockpit will notify you about available updates in the navigation menu.

    Notify about available updates

    Web server security hardening

    The cockpit-tls proxy and the cockpit-ws instances now run as different system users, and the instances are controlled by systemd. This provides better isolation and robustness.

    Try it out

    Cockpit 205 is available now:

    Fedora 30 : News about python 3.8.0 and install on Linux.

    Posted by mythcat on October 15, 2019 09:09 PM
    New Python releases came out this week.
    You can see the new versions on the official webpage: Python 3.7.5 (Oct. 15, 2019) and Python 3.8.0 (Oct. 14, 2019).
    I wrote about how to install version 3.8.0 on Fedora 30.
    See the full tutorial here.

    Extending the Minimization objective

    Posted by Fedora Community Blog on October 15, 2019 02:48 PM
    Fedora community elections

    Earlier this summer, the Fedora Council approved the first phase of the Minimization objective. Minimization looks at package dependencies and tries to minimize the footprint for a variety of use cases. The first phase resulted in the development of a feedback pipeline, a better understanding of the problem space, and some initial ideas for policy improvements.

    Phase two is now submitted to the Council for approval. In this phase, the team will select specific use cases to target and work to develop a minimized set of packages for them. You can read the updated objective in pull request #64. Please provide feedback there or on the council-discuss mailing list. The Council will vote on this in two weeks.

    The post Extending the Minimization objective appeared first on Fedora Community Blog.

    Building GDB on a freshly installed machine FAQ

    Posted by Gary Benson on October 15, 2019 01:35 PM

    So you just installed Fedora, RHEL or CentOS and now you want to build GDB from source.

    1. How do you make sure everything you need to build it is installed?
      # dnf builddep gdb
    2. Did it say, No such command: builddep? Do this, then try again:
      # dnf install dnf-plugins-core
    3. Did it say, dnf: command not found…? You’re using yum, try this:
      # yum-builddep gdb
    4. Did it say, yum-builddep: command not found…? Do this, then try again:
      # yum install yum-utils

    Thank you, you’re welcome.

    syslog-ng in two words at One Identity UNITE: reduce and simplify

    Posted by Peter Czanik on October 15, 2019 10:44 AM

    UNITE is the partner and user conference of One Identity, the company behind syslog-ng. This time the conference took place in Phoenix, Arizona, where I talked to a number of American business customers and partners about syslog-ng. They were really enthusiastic about syslog-ng and emphasized two major reasons why they use it or plan to introduce it to their infrastructure: syslog-ng allows them to reduce the log data volume, and it greatly simplifies their infrastructure by introducing a separate log management layer.


    Log messages are very important for both the operation and the security of a company. This is why you do not simply store them, but also feed them to SIEM and other log analysis systems that create reports and actionable alerts from your messages.

    Applications can produce a tremendous amount of log data. This is a problem for SIEM and other log analysis systems for two major reasons:

    • hardware costs, as the more data you have, the more storage and processing power you need to analyze it

    • licensing costs, as most analysis platforms are priced on data volume

    You can easily reduce message volume by parsing and filtering your log messages and forwarding only the logs that are really necessary for analysis. Many people started using syslog-ng just for this use case, as it is really easy to create complex filters with it.

    This is why I was surprised to learn about another approach: sending all log messages, but not the whole messages, only the necessary parts. This needs a bit of extra work, as you have to figure out which parts of a log message are used by your log analysis application. But once your research is done, you can easily halve your log volume, or in some special cases even reduce it by 90%.

    Some examples are:

    • Reading the name-value pairs from the systemd journal, but forwarding only selected name-value pairs.

    • Parsing HTTP access logs and forwarding only those columns which are actually analyzed by your software.

    The syslog-ng application has powerful parsers that segment log messages into name-value pairs, after which you can use syslog-ng’s templates and template functions for such selective log delivery.
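
    The idea is independent of any particular tool. As a rough sketch in plain Python (not syslog-ng configuration, and with illustrative field names), this is what keeping only the analyzed fields of an Apache-style access log line does to the data volume:

```python
# Rough illustration of selective forwarding: parse an Apache-style
# access log line, keep only the fields the analysis tool actually
# uses, and compare sizes. Not syslog-ng code -- just the idea.
import re

LINE = ('192.0.2.10 - alice [15/Oct/2019:10:00:00 +0000] '
        '"GET /index.html HTTP/1.1" 200 5120 "-" '
        '"Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/69.0"')

PATTERN = re.compile(
    r'(?P<client>\S+) \S+ (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<size>\d+)')

def reduce_line(line, keep=("client", "time", "status")):
    """Keep only selected name-value pairs of one access log line."""
    fields = PATTERN.match(line).groupdict()
    return " ".join(f"{k}={fields[k]}" for k in keep)

small = reduce_line(LINE)
print(small)
print(f"{len(small)} bytes forwarded instead of {len(LINE)}")
```

    In syslog-ng itself, the same effect comes from a parser plus a destination template that lists only the wanted macros.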

    If your log analysis infrastructure is already in place, it is still worth making the switch to syslog-ng and reducing your log volume using these techniques. You can use your current log analysis infrastructure for a lot longer without having to expand it with further storage and processing power.


    Most SIEM and log analysis solutions come with their own client applications to collect log messages. So, why bother installing a separate application from yet another vendor to collect your log messages? Installing syslog-ng as a separate log management layer does not actually complicate your infrastructure, but rather simplifies it:

    • No vendor lock-in: replacing your SIEM is pain free and quick, as you do not have to replace all the agents as well

    • Operations, security, and other teams of the company use different software solutions to analyze log messages: instead of installing 3-4 or even more agents, you install only one, which can deliver the required log messages to the different solutions.

    When you collect log messages to a central location using syslog-ng, you can archive all of the messages there. If you add a new log analysis application to your infrastructure, you can just point syslog-ng at it and forward the necessary subset of log data there.

    Life becomes easier for both security and operations in your environment, as there is only a single piece of software to check for security problems and to distribute on your systems, instead of many.

    What is next?

    If you are on the technical side, I recommend reading two chapters from the syslog-ng documentation:

    These explain how you can reformat your log messages using syslog-ng, giving you a way to reduce your data volume significantly by including only the necessary name-value pairs.

    If you want to learn more about this topic, our Optimize SIEM white paper explains it in more depth.

    The open source version of syslog-ng is part of most Linux distributions, but the packages might be outdated. For up-to-date packages, check the 3rd party binaries page.

    If you need commercial level support and help in integrating syslog-ng to your environment, start an evaluation of syslog-ng Premium Edition.

    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

    Unoon, a tool to monitor network connections from my system

    Posted by Kushal Das on October 14, 2019 01:46 PM

    I always wanted to have a tool to monitor the network connections from my laptop/desktop. I wanted to have alerts for random processes making network connections, and a way to block those (if I want to).

    Such a tool can provide peace of mind in a few cases. A reverse shell is the big one: in case I manage to open some random malware (read: downloads) on my regular Linux system, I want to be notified about the connections it makes. The same goes for trying out any new application. I prefer to use Qubes OS based VMs for testing random binaries and applications, and Qubes OS is also my daily driver. But the search for a proper tool continued for some time.

    Introducing unoon

    Unoon main screen

    Unoon is a desktop tool that I started writing for monitoring network connections on my system. It has two parts: the backend is written in Go, and it monitors connections and adds details to a local Redis instance (which should be password protected).

    I started writing this backend in Rust, but then I rewrote it in Go, as I wanted to reuse parts of my code from another project so that I could track all DNS queries from the system. This helps to make sense of the data; otherwise, we would see some random IP addresses in the UI.
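
    The backend is in Go, but the raw data available on Linux is generic. As a hedged sketch (Python here for brevity; this is not Unoon's actual code), this is how a connection endpoint from /proc/net/tcp can be decoded, since the kernel lists IPv4 addresses as little-endian hex:

```python
import socket
import struct

def decode_proc_addr(hex_addr):
    """Decode an IPv4 'ADDR:PORT' entry from /proc/net/tcp.

    The address is 8 hex digits in little-endian byte order;
    the port is 4 hex digits.
    """
    addr, port = hex_addr.split(":")
    ip = socket.inet_ntoa(struct.pack("<I", int(addr, 16)))
    return ip, int(port, 16)

# '0100007F:0050' is 127.0.0.1:80
print(decode_proc_addr("0100007F:0050"))
```

    Mapping such sockets back to processes then means matching the inode column of /proc/net/tcp against the file descriptors under /proc/<pid>/fd, which is where a monitoring daemon spends most of its effort.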

    The frontend is written using PyQt5. Around 14 years ago, I released my first ever tool using PyQt, and it is still my favorite library for creating desktop applications.

    Using the development version of unoon

    The README has the build steps. You have to start the backend as a daemon; the easiest option is to run it inside a tmux shell. At first, it will show all the currently running processes in the “Current processes” tab. If you add any executable (via its absolute path) in the Edit->whitelists dialog and then save (and then restart the UI app), those will show up as whitelisted processes.

    Unoon alert

    For any new process making network calls, you will get an alert dialog. In the future, we will have the option to block hosts/IPs via this alert dialog.

    Unoon history

    The history tab shows all alert history for the current run. We still have to save this information in a local database, so that we can show better statistics to users.

    You can move between different tabs/tables via Alt+1 or Alt+2 and Alt+3 key combinations.

    I will add more options for creating better whitelists. There is also ongoing work to mark any normal process as whitelisted from the UI (by right-clicking).

    Last week, Micah and I managed to spend some late-night hotel room hacking on this tool.

    How can you help?

    You can start by testing the code base, and provide suggestions on how to improve the tool. Help in UX (major concern) and patches are always welcome.

    A small funny story

    A few weeks back, on a Sunday late night, I was demoing the very initial version of the tool to Saptak. While we were talking about the tool, suddenly an entry popped up in the UI: /usr/bin/ssh, connecting to a random host. A little bit of searching showed that the IP belonged to an EC2 instance. For the next 40 minutes, we both tried to figure out what had happened and whether the system was already compromised. Luckily, to demo something earlier (we totally forgot that topic), I had been running Wireshark on the system. From there, we figured out that the IP belonged to github.com. It took some more time to figure out that one of my VS Code extensions was updating git over ssh. This is when I understood that I need to show real domain names in the UI rather than random IP addresses.

    Build in epel8 branch with fedpkg

    Posted by Ding-Yi Chen on October 14, 2019 08:07 AM

    This article assumes you have commit rights to the package, but you don’t yet have the epel8 branch in https://src.fedoraproject.org/rpms/YourPackage

    1. PKG=<YourPackage>
    2. fedpkg clone $PKG # if you have not done this
    3. cd $PKG # if you have not done this
    4. git branch epel8
    5. # Make the branch work with epel8
    6. git commit
    7. git push --set-upstream origin epel8
    8. fedpkg build
    9. Go to bodhi and create -> New update


    Use sshuttle to build a poor man’s VPN

    Posted by Fedora Magazine on October 14, 2019 08:00 AM

    Nowadays, business networks often use a VPN (virtual private network) for secure communications with workers. However, the protocols used can sometimes make performance slow. If you can reach a host on the remote network with SSH, you could set up port forwarding. But this can be painful, especially if you need to work with many hosts on that network. Enter sshuttle — which lets you set up a quick and dirty VPN with just SSH access. Read on for more information on how to use it.

    The sshuttle application was designed for exactly the kind of scenario described above. The only requirement on the remote side is that the host must have Python available. This is because sshuttle constructs and runs some Python source code to help transmit data.

    Installing sshuttle

    The sshuttle application is packaged in the official repositories, so it’s easy to install. Open a terminal and use the following command with sudo:

    $ sudo dnf install sshuttle

    Once installed, you may find the manual page interesting:

    $ man sshuttle

    Setting up the VPN

    The simplest case is just to forward all traffic to the remote network. This isn’t necessarily a crazy idea, especially if you’re not on a trusted local network like your own home. Use the -r switch with the SSH username and the remote host name, followed by 0.0.0.0/0 to cover all IPv4 traffic:

    $ sshuttle -r username@remotehost 0.0.0.0/0

    However, you may want to restrict the VPN to specific subnets rather than all network traffic. (A complete discussion of subnets is outside the scope of this article, but you can read more here on Wikipedia.) Let’s say your office internally uses the reserved Class A subnet 10.0.0.0/8 and the reserved Class B subnet 172.16.0.0/12. The command above becomes:

    $ sshuttle -r username@remotehost 10.0.0.0/8 172.16.0.0/12
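
    To sanity-check which destinations a given set of subnet arguments will capture, Python's standard ipaddress module is handy. The office ranges below are just illustrative RFC 1918 private networks:

```python
import ipaddress

# Illustrative RFC 1918 private ranges, the kind of values you would
# pass to sshuttle as subnet arguments.
office_nets = [ipaddress.ip_network("10.0.0.0/8"),
               ipaddress.ip_network("172.16.0.0/12")]

def routed_via_vpn(addr):
    """Would traffic to this address be captured by the VPN?"""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in office_nets)

print(routed_via_vpn("10.1.2.3"))    # True
print(routed_via_vpn("172.20.0.1"))  # True  (inside 172.16.0.0/12)
print(routed_via_vpn("192.0.2.1"))   # False (public TEST-NET address)
```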

    This works great for working with hosts on the remote network by IP address. But what if your office is a large network with lots of hosts? Names are probably much more convenient — maybe even required. Never fear, sshuttle can also forward DNS queries to the office with the --dns switch:

    $ sshuttle --dns -r username@remotehost

    To run sshuttle like a daemon, add the -D switch. This will also send log information to the systemd journal via its syslog compatibility.

    Depending on the capabilities of your system and the remote system, you can use sshuttle for an IPv6 based VPN. You can also set up configuration files and integrate it with your system startup if desired. If you want to read even more about sshuttle and how it works, check out the official documentation. For a look at the code, head over to the GitHub page.

    Photo by Kurt Cotoaga on Unsplash.

    Episode 165 - Grab Bag of Microsoft Security News

    Posted by Open Source Security Podcast on October 13, 2019 11:56 PM
    Josh and Kurt talk about a number of Microsoft security news items. They've changed how they are handling encrypted disks and are now forcing cloud logins on Windows users.

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/11626439/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes

      DNSSEC on subdomains

      Posted by Casper on October 12, 2019 04:58 PM

      Hello, dear Reader, it has been a while.

      Today we are going to talk about something we have heard a lot about, something that inspires fear and hope, pride and dread. DNSSEC is a system that makes it possible to defeat lying DNS servers. Lying DNS servers are the resolvers used by Internet users: those of the ISPs for the general public, those of companies within large enterprises, and those of OpenDNS or Google for the geeks.

      These resolvers are not much to be feared and there is little abuse, except for the ISP resolvers that follow state censorship, OpenDNS which applies a non-neutral policy, Google which has its own interests to defend, and so on...

      And then, apart from resolvers, there are other mechanisms for resolving names into IP addresses built directly into each machine's system (/etc/hosts), plus the browser cache on top of that.

      As you can see, dear Reader, there are plenty of places to find answers to DNS queries, but nothing says that the answer is the right one!

      DNSSEC was invented to solve this simple problem. It is very complicated to set up and requires a lot of manual work. Worst of all, all of these actions have to be repeated every month to compensate for the small size of the cryptographic keys, in order to maintain a good level of security on the DNS zone.

      Needless to say, setting it up under those conditions seemed unthinkable. That was my first observation.

      There are plenty of articles on the Internet explaining how to set up DNSSEC. I will set aside the fact that they look complicated; that is not the problem. There are actually two problems.

      The first is that all of these articles explain how to set up DNSSEC with a registrar, without explaining the basic principle. To me this is too obscure, and I think it must have put many people off.

      The second problem is that, if you cannot understand the basic principle, how can you apply it to your subdomains? I am not complaining about the fact that there is almost no documentation (really none at all) on DNSSEC for subdomains. I am not a consumer, and I do not like it when everything is handed to me pre-cooked.

      So I scoured the web and found this article, which explains the basic principle perfectly and makes the technique applicable to subdomains. Getting warmer...

      And there, dear Reader, you say to yourself:

      Ouch, the work has to be multiplied by the number of subdomain DNS zones! /o\

      Well, yes and no. Thanks to that article, I understood the technique well enough to write a script that automates *everything*.

      Absolutely "everything"?

      It has to: DNSSEC is a chain, and you cannot pick up the chain halfway along. It does not work like that.

      That is why the script cannot work if you do not point it at the main domain. If you give it only the main domain, it will do the job. But the idea of this script is to give it as many subdomain DNS zones as you want: it will do the work of integrating the subdomains into the main domain's zone. The time savings are enormous.

      You can re-run it every month or every week to change the salt. You can run it every 3 months to change all the cryptographic keys. It prints to the console all the information needed to update the records at your registrar.
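
      The link in that chain between a parent zone and a signed child zone is the DS record: a digest of the child's KSK DNSKEY, published in the parent. As a hedged sketch (this is not the script from this post), here is how a SHA-256 DS-style digest is computed per RFC 4034/4509: the owner name in canonical wire format is concatenated with the DNSKEY RDATA and hashed. The key bytes below are made up:

```python
import hashlib

def name_to_wire(name):
    """Encode a DNS name in canonical (lowercase) wire format."""
    wire = b""
    for label in name.rstrip(".").lower().split("."):
        wire += bytes([len(label)]) + label.encode("ascii")
    return wire + b"\x00"  # root label terminates the name

def ds_sha256_digest(owner, dnskey_rdata):
    """DS digest (type 2): SHA-256 over owner name + DNSKEY RDATA."""
    data = name_to_wire(owner) + dnskey_rdata
    return hashlib.sha256(data).hexdigest().upper()

# Made-up DNSKEY RDATA: flags=257 (KSK), protocol=3, algorithm=13
# (ECDSAP256SHA256), followed by fake public key bytes.
rdata = bytes([0x01, 0x01, 3, 13]) + b"\x00" * 32
print(ds_sha256_digest("Example.COM.", rdata))
```

      This digest is exactly the kind of information the script prints so it can be uploaded as a DS record at the registrar.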

      This post is certainly not yet another DNSSEC how-to, but I wanted to share my script. I have been running it intensively for two weeks now, and the results are good...

      Here are the diagrams produced by an online validator; a picture is worth a thousand words:

      Capture d’écran du 2019-09-28 23-04-09.png

      An idea for improvement: being able to set up DNSSEC on the DNS zone of a sub-subdomain.

      Capture d’écran du 2019-09-28 23-04-54.png

      libnbd + FUSE = nbdfuse

      Posted by Richard W.M. Jones on October 12, 2019 04:48 PM

      I’ve talked before about libnbd, our NBD client library. New in libnbd 1.2 is a tool called nbdfuse which lets you turn NBD servers into virtual files.

      A couple of weeks ago I mentioned you can use libnbd as a C library to edit qcow2 files. Now you can turn qcow2 files into virtual raw files:

      $ mkdir dir
      $ nbdfuse dir/file.raw \
            --socket-activation qemu-nbd -f qcow2 file.qcow2
      $ ls -l dir/
      total 0
      -rw-rw-rw-. 1 nbd nbd 1073741824 Jan  1 10:10 file.raw

      Reads and writes to file.raw are backed by the original qcow2 file which is updated in real time.

      Another fun thing to do is to use nbdkit's curl plugin and xz filter to turn xz-compressed remote disk images into uncompressed local files:

      $ mkdir dir
      $ nbdfuse dir/disk.img \
            --command nbdkit -s curl --filter=xz https://example.com/disk.img.xz
      $ ls -l dir/
      total 0
      -rw-rw-rw-. 1 nbd nbd 6442450944 Jan  1 10:10 disk.img
      $ file dir/disk.img
      dir/disk.img: DOS/MBR boot sector
      $ qemu-system-x86_64 -m 4G \
            -drive file=dir/disk.img,format=raw,if=virtio,snapshot=on

      Major service disruption

      Posted by Fedora Infrastructure Status on October 11, 2019 09:43 PM
      Service 'The Koji Buildsystem' now has status: major: We are performing some database maintenance on Koji.

      FPgM update: 2019-41

      Posted by Fedora Community Blog on October 11, 2019 08:49 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora Program Management this week. The Go/No-Go meeting is next week. We are currently under the Final freeze.

      No office hours next week, but normally I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


      Help wanted

      • The QA team wants to know when to run live onboarding sessions. Let them know what works best for you.

      Upcoming meetings & test days

      Fedora 31


      • 22 October — Final release preferred target


      Blocker bugs

      Bug ID  | Blocker status   | Component            | Bug status
      1749433 | Accepted (final) | mutter               | POST
      1747408 | Accepted (final) | distribution         | NEW
      1760474 | Accepted (final) | fedora-release       | MODIFIED
      1760415 | Accepted (final) | fedora-repos         | MODIFIED
      1756553 | Accepted (final) | gnome-control-center | ON_QA
      1728240 | Accepted (final) | sddm                 | NEW
      1755813 | Accepted (final) | blivet-gui           | POST
      1759358 | Accepted (final) | uboot-tools          | ON_QA
      1760937 | Proposed (final) | dnf                  | NEW
      1758588 | Proposed (final) | dnf-plugins-extra    | POST
      1703700 | Proposed (final) | grub2                | ON_QA
      1750575 | Proposed (final) | libdnf               | NEW
      1759644 | Proposed (final) | mutter               | POST
      1759193 | Proposed (final) | gnome-software       | MODIFIED

      Fedora 32



      Submitted to FESCo

      CPE update

      Community Application Handover & Retirement Updates

      • Nuancier: New maintainer has been found & discussion is happening on the infrastructure mailing list
      • Fedocal: A possible maintainer has been found & the team is engaging in conversation. App will be retired on 15th October if there is no commitment.
      • Packagedb-cli: being retired
      • Elections: blocked because the PostgreSQL database is missing from the application catalogue
      • Badges: Discussion happening here for maintainers
      • Pastebin: The new maintainer and CPE team are currently working on moving this application to CentOS.

      In addition, the team is creating comprehensive documentation for Communishift.

      Other Project updates

      • Rawhide Gating: Still on track for early November release.
      • repoSpanner: Push request performance increased by 51%!
      • CentOS CI MS messaging plugin now in testing. Ack of messages is still missing but the fix is in progress. Both publishing and consuming are working.
      • Mirrorlist code change for CentOS 8

      The post FPgM update: 2019-41 appeared first on Fedora Community Blog.

      All systems go

      Posted by Fedora Infrastructure Status on October 11, 2019 04:07 PM
      Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

      Minor service disruption

      Posted by Fedora Infrastructure Status on October 11, 2019 01:25 PM
      Service 'The Koji Buildsystem' now has status: minor: The Koji database is in a degraded speed state. We are investigating

      Python hacking

      Posted by Gary Benson on October 11, 2019 12:06 PM

      Python‘s had this handy logging module since July 2003. A lot of things use it, so if you’re trying to understand or debug some Python code then a handy snippet to insert somewhere is:

      import logging
      logging.basicConfig(level=logging.DEBUG)

      Those two lines cause all loggers to log everything to the console. Check out the logging.basicConfig docs to see what else you could do.

      Anaconda debugging and testing – part 1.

      Posted by rhinstaller on October 11, 2019 11:00 AM

      Anaconda is quite a complex project with a variety of dependencies on system tools and libraries. Additionally, there are installation modes (graphical, text and non-interactive) and these can be controlled manually, partially (with a preset default configuration) or fully automatically. Users can also run the Anaconda application locally to create installation media.
      Because of this complexity it is not an easy task to just make the required changes and run Anaconda to test them. To test Anaconda properly it has to be run many times with all the modifications. In addition, Anaconda is supported on a variety of platforms and it behaves differently on each. To be really confident that everything works, testing should be done on all the supported platforms, including IBM Z systems and PowerPC. And yes, not even the developers are able to handle that. Luckily, for most common changes a simple x86_64 virtual machine is a good enough solution.

      To address these issues, we (the Anaconda developers) are investing quite a lot of our time in making this situation better. For example, we are working on the Anaconda modularization effort. Thanks to this effort, developers can connect to the modules and play with them. Also thanks to modularization, we have much greater test coverage than before. We have automatic installation tests called kickstart tests. Aside from that, we have unit tests, and we test that RPM files are created successfully.

      In this article I want to start a short tutorial on tools that help with debugging and developing Anaconda. I will describe useful methods for getting your changes into the installation environment and for testing them easily with all your required use cases. Because this topic is quite big, I will split it into a few articles.

      Updates image

      Probably every developer who has contributed complex code to Anaconda has heard about updates images. This is the beginning of our journey and the first part of this blog post series. Updates images are a basic tool used on an everyday basis to test changes targeted at Anaconda. With an updates image, you can replace any file in the installation environment. It is possible to replace, for example, the sshd configuration file in the /etc directory or any source file of Anaconda. Because Anaconda is written almost entirely in Python, it is easy to just replace specific source files before Anaconda is launched.

      At its core, an updates image is just an image containing files, created by the cpio and pigz utilities. There is no magic at all. But why is there even a need to create these images? The reason lies in how the main Anaconda use case works. There is an ISO containing everything used for installation, including Anaconda. This environment is called stage 2. There is also stage 1, but that will be described later in a different post. An updates image is created to rewrite any file in stage 2 before Anaconda and the other tools are started.

      To apply an updates image, you need to add inst.updates to the kernel boot parameters, or you can add the link to the kickstart file. Both solutions should have the same result. Please follow the links to find out more.
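      For example, if the image is served over HTTP, the addition to the boot command line might look like this (the URL is a placeholder for wherever you host updates.img):

      ```
      inst.updates=http://example.com/path/to/updates.img
      ```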


      You can easily create an updates image from the command line:

      find . | cpio -c -o | pigz -9cv > ../updates.img

      When this is run, all the content of the current directory will be placed into the updates image. File paths relative to this directory will be preserved in the installation environment, e.g. ./etc/ssh/sshd_config will replace /etc/ssh/sshd_config in the installation environment. This is an easy way to create an updates image, but it is not very convenient for applying changes from the Anaconda repository or for other advanced use cases.
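      The path-preservation rule above can be sketched end to end. This is a minimal example (gzip stands in for pigz here; both produce gzip-format output, pigz is simply the parallel implementation):

      ```shell
      # Build a tiny updates image that replaces /etc/ssh/sshd_config.
      mkdir -p updates/etc/ssh
      echo "PermitRootLogin yes" > updates/etc/ssh/sshd_config

      # Archive everything under updates/ with relative paths preserved.
      cd updates
      find . | cpio -c -o | gzip -9c > ../updates.img
      cd ..

      # List the archive contents to confirm the relative path was kept:
      zcat updates.img | cpio -it
      ```

      The listing shows ./etc/ssh/sshd_config, which is exactly the path that will be overwritten in the stage 2 environment.
      
      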

      Using makeupdates script

      Because we recreate updates images many, many times a day, we have created tools to make our lives easier. The most important and oldest one is the makeupdates script in the Anaconda repository. This script adds to the image every Anaconda file that has changed since a given point in the git history. The script has additional features, like adding RPM package content to the updates image. I'll now describe the most interesting features of this script. To find out more about the makeupdates script, see its --help output.

      You can combine any of the parameters described below.

      Basic usage

      ./scripts/makeupdates --tag HEAD

      Add all files which have not been committed yet to the updates image.

      ./scripts/makeupdates --tag anaconda-29.1.1-1

      Add all files changed since the commit tagged anaconda-29.1.1-1. In other words, add all the changes needed to turn anaconda version 29.1.1-1 in stage 2 into the current anaconda from the git repository.

      Readers familiar with git have probably already noticed that the --tag parameter accepts anything which can be used to specify a git commit, and adds all the files changed from that point to the current state. That means you can specify a version tag, e.g. anaconda-31.25.2-1, as well as HEAD~.

      Add custom RPM

      From time to time there is a need to add more than just parts of Anaconda to an updates image, for example, custom files or libraries. Even for this use case the makeupdates script can be used.

      ./scripts/makeupdates --add /path/to/my/first.rpm --add /path/to/my/second.rpm

      This is an easy way to add a custom RPM file to the installation environment. However, there is a drawback to this solution.

      RPM scriptlets are not executed in this case.

      RPM scriptlets are scripts in the RPM file which are executed when some action occurs (for example, installation of the RPM could adjust the configuration of the system). The reason they are skipped is that these RPM files are not installed into the updates image but only unpacked. For most cases that is sufficient, but in some situations you can get unexpected results such as missing files, bad dependencies or bad configuration. This can be fixed by adding more files to the updates image manually; see the section below.

      Add custom files

      The makeupdates script also makes it possible to add or replace almost any content in the installation environment.

      ./scripts/makeupdates --keep

      The --keep parameter prevents the script from removing the updates folder after each call. After running this command you will see the updates directory in the root Anaconda directory. You can then place any file in the updates directory and call the above command again to create a new updates image with the desired content. The same rule applies here as in the manual creation section: the directory structure created in the updates directory will be preserved in the installation environment. This command can be called as many times as needed, and the updates directory won't be erased as long as the --keep parameter is used. An example of this use case:

      $ ./scripts/makeupdates --keep
      $ mkdir -p ./updates/etc/ssh
      $ cp /my/custom/config ./updates/etc/ssh/sshd_config
      $ ./scripts/makeupdates --keep

      The above is an example of how to change the ssh configuration of the booted installation environment.

      Using wrapper around makeupdates

      Aside from the official makeupdates script in the Anaconda git repository, there is the option of using the anaconda-updates wrapper script. This script was created mainly to simplify working with different Anaconda versions, but it can do more than that. The wrapper was designed to avoid the need for custom user scripts: in general, almost every user of the makeupdates script has also written their own scripts to, for example, upload the resulting updates image for use in a VM.

      The following steps are required to prepare an environment for using anaconda-updates.

      Create a configuration file

      The configuration file is part of the anaconda-updates git repository, see the link. Copy this configuration file to ~/.config/anaconda-updates/updates.cfg and fill in the values there. The configuration file specifies where the projects folder is, containing projects like Anaconda, Pykickstart and Blivet (only Anaconda is required, however). It also defines where the show version script is (see below) and how to access a server where the updates image should be uploaded. The last part is really helpful for easing application of the updates image to the test machine; the drawback is that you need an available server with ssh access.

      Adapt the show version script

      In the git repository there is also a script which is responsible for retrieving information about the current anaconda RPM file. This script has to be updated (or written from scratch if desired) to match your environment. See the script and make the required changes.

      When the steps above are completed you can start using the anaconda-updates wrapper. The main benefit is that you do not need to use tags; instead, the updates image is created for the Anaconda RPM version (the --master parameter). This also works for Fedora releases (--fedoraXX for a given Fedora version). The automatic upload to a server is also a nice feature, since it speeds up the development and debugging cycles. Use of this wrapper script is pretty easy, and I add support for each new Fedora when it is released.

      Future improvements

      We are planning to promote the makeupdates script to a standalone script which will provide most of the current features but with a better user experience. The main change will be to move the functionality of the makeupdates script out of the Anaconda source code. We are trying to solve the problem that the script's usage is not compatible across all versions, and it can be problematic to run an old Anaconda version's (RHEL 6/7) scripts on newer Fedora systems. Instead of having this script in different versions of the Anaconda code base, we will provide one script and the Anaconda code base will provide a configuration file for it. This will probably deprecate the current anaconda-updates wrapper script, because there won't be any need for it. The new script will have the benefit of being easily extended, so it can absorb all the current features of anaconda-updates.

      Aside from that, we also want to provide users with a default working configuration and separation from the Anaconda code base. That means the script will be usable immediately with no configuration required. It could even be packaged for Fedora if users would like to have it there. We will try to find the best defaults for users. The separation will make it possible to add custom content without any need to clone the Anaconda source code. That will make the script much easier to use for people who want to debug something not related to Anaconda in the installation environment (for example, why some RPM can't be installed).

      Thanks everyone for your attention, and if you have an idea for further improvement please write a comment. Your ideas are really valuable to us.

      Make your Python code look good with Black on Fedora

      Posted by Fedora Magazine on October 11, 2019 08:00 AM

      The Python programming language is often praised for its simple syntax. In fact the language recognizes that code is read much more often than it is written. Black is a tool that automatically formats your Python source code, making it uniform and compliant with the PEP 8 style guide.

      How to install Black on Fedora

      Installing Black on Fedora is quite simple. Black is maintained in the official repositories.

      $ sudo dnf install python3-black

      Black is a command line tool and therefore it is run from the terminal.

      $ black --help

      Format your Python code with Black

      Using Black to format a Python code base is straightforward.

      $ black myfile.py
      All done! ✨ 🍰 ✨ 1 file left unchanged.
      $ black path_to_my_python_project/
      All done! ✨ 🍰 ✨
      165 files reformatted, 24 files left unchanged.

      By default Black allows 88 characters per line, meaning that the code will be reformatted to fit within 88 characters per line. It is possible to change this to a custom value, for example:

      $ black --line-length 100 my_python_file.py

      This will set the line length to allow 100 characters.

      Run Black as part of a CI pipeline

      Black really shines when it is integrated with other tools, like a continuous integration pipeline.

      The --check option lets you verify whether any files need to be reformatted. This is useful as a CI test to ensure all your code is formatted in a consistent manner.

      $ black --check myfile.py
      would reformat myfile.py
      All done! 💥 💔 💥
      1 file would be reformatted.
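      The non-zero exit code from --check makes this easy to wire into a pipeline. A hypothetical CI job might look like this (GitLab CI syntax; the job name, image and paths are placeholders to adapt to your project):

      ```
      # Hypothetical CI job: fail the build if any file needs reformatting.
      check-format:
        image: fedora:latest
        script:
          - dnf install -y python3-black
          - black --check .
      ```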

      Integrate Black with your code editor

      Running Black during the continuous integration tests is a great way to keep the code base correctly formatted. But developers really want to forget about formatting and have the tool manage it for them.

      Most of the popular code editors support Black. This allows developers to run the format tool every time a file is saved. The official documentation details the configuration needed for each editor.

      Black is a must-have tool in the Python developer toolbox and is easily available on Fedora.

      Fedora localization platform migrates to Weblate

      Posted by Fedora Community Blog on October 10, 2019 02:46 PM
      Fedora Localization Project

      The Fedora Project provides an operating system that is used across a wide variety of languages and cultures. To make it easy for non-native English speakers to use Fedora, significant effort is made to translate the user interfaces, websites and other materials.

      Part of this work is done in the Fedora translation platform, which will migrate to Weblate in the coming months.

      This migration was mandatory as development and maintenance of Zanata — the previous translation platform — ceased in 2018.

      There are a number of translation platforms available, but having a platform that is open source, meets the Fedora Project's needs, and is likely to be long-lived were key considerations in choosing Weblate; most other translation platforms are either closed source or lacking features.

      The translation teams will be testing out Weblate over the next few months and expect it to enter production use in early 2020. More details on the transition to using Weblate are available in the wiki.

      Contributions both to the transition process and to translating are welcome, and can start by joining the Translation mailing list or by following updates on the Translation platform migration project on Teams.

      Would you like to try it out?

      1. Visit the weblate site
      2. Log in with your FAS account
      3. Start translating (the three available projects are related to our documentation and contain general-purpose content)
      4. You’ll see the result on the Docs staging site (it is updated once a day)
      5. Send your feedback and questions on our mailing list so we can do our best to find the right configuration of the tool for our community.

      The post Fedora localization platform migrates to Weblate appeared first on Fedora Community Blog.

      F30-20191009 updated Live Isos released

      Posted by Ben Williams on October 10, 2019 01:27 PM

      The Fedora Respins SIG is pleased to announce the latest release of updated F30-20191009 Live ISOs, carrying the 5.2.18-200 kernel.

      This set of updated ISOs will save a considerable amount of post-install updating for new installs. (New installs of Workstation otherwise have 1.2GB of updates.)

      A huge thank you goes out to irc nicks dowdle and Southern-Gentleman for testing these ISOs.

      We would also like to thank Fedora QA for running the following tests on our ISOs:



      As always our ISOs can be found at http://tinyurl.com/Live-respins.

      PHP version 7.2.24RC1 and 7.3.11RC1

      Posted by Remi Collet on October 10, 2019 10:33 AM

      Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests, and also as base packages.

      RPM of PHP version 7.3.11RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30-31 or remi-php73-test repository for Fedora 29 and Enterprise Linux.

      RPM of PHP version 7.2.24RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 29 or remi-php72-test repository for Enterprise Linux.


      PHP version 7.1 is now in security mode only, so no more RC will be released.

      Installation: read the Repository configuration and choose your version.

      Parallel installation of version 7.3 as Software Collection:

      yum --enablerepo=remi-test install php73

      Parallel installation of version 7.2 as Software Collection:

      yum --enablerepo=remi-test install php72

      Update of system version 7.3:

      yum --enablerepo=remi-php73,remi-php73-test update php\*

      or, the modular way (Fedora and RHEL 8):

      dnf module enable php:remi-7.3
      dnf --enablerepo=remi-modular-test update php\*

      Update of system version 7.2:

      yum --enablerepo=remi-php72,remi-php72-test update php\*

      or, the modular way (Fedora and RHEL 8):

      dnf module enable php:remi-7.2
      dnf --enablerepo=remi-modular-test update php\*

      Notice: version 7.4.0RC3 in Fedora rawhide for QA.

      EL-7 packages are built using RHEL-7.7.

      Packages of 7.4.0RC3 are also available.

      RC version is usually the same as the final version (no change accepted after RC, except for security fixes).

      Software Collections (php72, php73)

      Base packages (php)

      EPEL8 packages

      Posted by Kevin Fenzi on October 09, 2019 06:31 PM

      With the opening up of EPEL8, there’s a lot of folks looking and seeing packages they formerly used in EPEL6/7 not being available and wondering why. The reason is simple: EPEL is not a fixed exact list of packages, it’s a framework that allows interested parties to build and provide the packages they are interested in providing to the community.

      This means for a package to be in EPEL8, it requires a maintainer to step forward and explicitly ask “I’d like to maintain this in EPEL8” and then build, test and do all the other things needed to provide that package.

      The reason for this is simple: we want a high quality, maintained collection of packages. Simply building things once and never again doesn't allow for someone fixing bugs, updating the package or adjusting it for other changes. We need an active maintainer willing and able to do the work.

      So, if you do see some package missing that you would really like, how do you get it added to the collection? First, open a bug in bugzilla.redhat.com against the package. If it has a Fedora EPEL product version, use that, otherwise use Fedora. Explain that you would really like the current Fedora/EPEL6/7 maintainers to also maintain it for EPEL8. If they are willing, they will answer in the bug. If no answer after a few weeks, you could consider maintaining the package yourself. Consult with the epel-devel list or #epel-devel on IRC for further options.

      Do note that mailing the maintainer(s) directly isn't nearly as good as just filing a bug. They get the bug info in email anyhow; other users might see the bug and add that they too want the package; the maintainer might hand the package off, and the new packager would see the bug request but have no idea about private emails; some other packager might see the bug and offer to maintain it. All wins for a bug over private emails.

      As the collection grows, these sorts of questions will likely die down, but it’s important to remember that every package needs (at least) one maintainer.

      Software does not, by itself, change the world

      Posted by Mark J. Wielaard on October 09, 2019 01:31 PM

      Andy Wingo wrote some thoughts on rms and gnu. Although I don’t agree with the description of RMS as doing nothing for GNU, the part describing GNU itself is spot on:

      Software does not, by itself, change the world; it lacks agency. It is the people that maintain, grow, adapt, and build the software that are the heart of the GNU project — the maintainers of and contributors to the GNU packages. They are the GNU of whom I speak and of whom I form a part.

      Using American Fuzzy Lop on network clients

      Posted by Richard W.M. Jones on October 09, 2019 12:53 PM

      Previously I’ve fuzzed hivex and nbdkit using my favourite fuzzing tool, Michał Zalewski’s American Fuzzy Lop (AFL).

      AFL works by creating test cases which are files on disk, and then feeding those to programs which have been specially compiled so that AFL can trace into them and find out which parts of the code are run by the test case. It then adjusts the test cases and repeats, aiming to run more parts of the code and find ways to crash the program.

      This works well for programs that parse files (like hivex, but also binary parsers of all sorts and XML parsers and similar). It can also be used to fuzz some servers where you can feed a file to the server and discard anything the server sends back. In nbdkit we can use the nbdkit -s option to do exactly this, making it easy to fuzz.

      However it’s not obvious how you could use this to fuzz network clients. As readers will know we’ve been writing a new NBD client library called libnbd. But can we fuzz this? And find bugs? As it happens yes, and ooops — yes — AFL found a remote code execution bug allowing complete takeover of the client by a malicious server.

      The trick to fuzzing a network client is to do the server thing in reverse. We set up a phony server which feeds the test case back to the client socket, while discarding anything that the client writes:


      This is wrapped up into a single wrapper program which takes the test case on the command line and forks itself to make the client and server sides connected by a socket. This allows easy integration into an AFL workflow.

      We found our Very Serious Bug within 3 days of fuzzing.

      “Reformat the filesystem to enable support”

      Posted by Gary Benson on October 09, 2019 12:32 PM

      Apparently it’s been a while since I ran containers on my office computer—and by a while, I mean, since November 2016—because if your initial install was RHEL or CentOS 7.2 or older then neither Docker nor Podman will work:

      # yum -q -y install podman skopeo buildah
      # podman pull registry.access.redhat.com/ubi7/ubi
      Error: could not get runtime: kernel does not support overlay fs: overlay: the backing xfs filesystem is formatted without d_type support, which leads to incorrect behavior. Reformat the filesystem with ftype=1 to enable d_type support. Running without d_type is not supported.: driver not supported

      So… ugh. I didn’t have any disks it’d work on either:

      # for i in $(awk '{ if ($3 == "xfs") print $2 }' /etc/mtab); do xfs_info $i; done | grep ftype
      naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
      naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
      naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
      naming   =version 2              bsize=4096   ascii-ci=0 ftype=0

      I didn’t reformat anything though. podman pull wants overlayFS on /var/run/containers/storage, and buildah bud wants it on /var/lib/containers/storage. I made loopback disks for them both:

      1. Find/make space somewhere, then create a directory to put the images in:
        # mkdir -p /store/containers
      2. Create a big file, whatever size you want, for the disk image. I made mine 20GiB. It took a couple minutes, my disks are slow:
        # dd if=/dev/zero of=/store/containers/var_lib_containers.img bs=1M count=20K
      3. Find a free loop device and associate the file to it:
        # losetup -f
        # losetup /dev/loop1 /store/containers/var_lib_containers.img 
      4. Format the “device”, then detach it from the file:
        # mkfs -t xfs -n ftype=1 /dev/loop1
        # losetup -d /dev/loop1
      5. Mount the “disk”, and see if it worked:
        # mount -oloop /store/containers/var_lib_containers.img /var/lib/containers
        # df -h /var/lib/containers
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/loop1       20G   33M   20G   1% /var/lib/containers
      6. It worked? Make it permanent:
        # echo "/store/containers/var_lib_containers.img /var/lib/containers xfs defaults,loop 1 2" >> /etc/fstab

      Rinse and repeat for the other drive it needed. Then try again:

      # podman pull registry.access.redhat.com/ubi7/ubi
      Trying to pull registry.access.redhat.com/ubi7/ubi...Getting image
      source signatures
      Copying blob bff3b73cbcc4 done
      Copying blob 7b1c937e0f67 done
      Copying config 6fecccc91c done
      Writing manifest to image destination
      Storing signatures


      Command line quick tips: Locate and process files with find and xargs

      Posted by Fedora Magazine on October 09, 2019 08:00 AM

      find is one of the more powerful and flexible command-line programs in the daily toolbox. It does what the name suggests: it finds files and directories that match the conditions you specify. And with arguments like -exec or -delete, you can have find take action on what it… finds.

      In this installment of the Command Line Quick Tips series, you’ll get an introduction to the find command and learn how to use it to process files with built-in commands or the xargs command.

      Finding files

      At a minimum, find takes a path to find things in. For example, this command will find (and print) every file on the system:

      find /

      And since everything is a file, you will get a lot of output to sort through. This probably doesn’t help you locate what you’re looking for. You can change the path argument to narrow things down a bit, but it’s still not really any more helpful than using the ls command. So you need to think about what you’re trying to locate.

      Perhaps you want to find all the JPEG files in your home directory. The -name argument allows you to restrict your results to files that match the given pattern.

      find ~ -name '*jpg'

      But wait! What if some of them have an uppercase extension? -iname is like -name, but it is case-insensitive:

      find ~ -iname '*jpg'

      Great! But the 8.3 name scheme is so 1985. Some of the pictures might have a .jpeg extension. Fortunately, we can combine patterns with an “or,” represented by -o. The parentheses are escaped so that the shell doesn’t try to interpret them instead of the find command.

      find ~ \( -iname '*jpeg' -o -iname '*jpg' \)

      We’re getting closer. But what if you have some directories that end in jpg? (Why you named a directory bucketofjpg instead of pictures is beyond me.) We can modify our command with the -type argument to look only for files:

      find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f

      Or maybe you’d like to find those oddly named directories so you can rename them later:

      find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d

      It turns out you’ve been taking a lot of pictures lately, so narrow this down to files that have changed in the last week with -mtime (modification time). The -7 means all files modified in 7 days or fewer.

      find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
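A quick way to see how -mtime behaves, using hypothetical throwaway files (touch -d, from GNU coreutils, sets the modification time directly):

```shell
# Create one file modified recently and one modified long ago.
tmp=$(mktemp -d)
touch -d '2 days ago' "$tmp/recent.jpg"
touch -d '30 days ago' "$tmp/old.jpg"

# Only the file modified within the last 7 days matches -mtime -7.
find "$tmp" -type f -mtime -7
```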

      Taking action with xargs

      The xargs command takes arguments from the standard input stream and executes a command based on them. Sticking with the example in the previous section, let’s say you want to copy all of the JPEG files in your home directory that have been modified in the last week to a thumb drive that you’ll attach to a digital photo display. Assume you already have the thumb drive mounted as /media/photo_display.

      find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7 -print0 | xargs -0 cp -t /media/photo_display

      The find command is slightly modified from the previous version. The -print0 option makes a subtle change to how the output is written: instead of separating results with a newline, it adds a null character. The -0 (zero) option to xargs adjusts its parsing to expect this. This is important because otherwise actions on file names that contain spaces, quotes, or other special characters may not work as expected. You should use these options whenever you’re taking action on files.
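You can see the problem these options solve with a hypothetical throwaway directory containing a file name with a space in it:

```shell
# Set up a demo directory: two JPEGs, one with a space in its name.
demo=$(mktemp -d)
mkdir -p "$demo/photos" "$demo/backup"
touch "$demo/photos/beach day.jpg" "$demo/photos/cat.jpg"

# Without -print0 and -0, "beach day.jpg" would be split into two
# arguments ("beach" and "day.jpg") and the copy would fail.
find "$demo/photos" -iname '*.jpg' -print0 | xargs -0 cp -t "$demo/backup"

ls "$demo/backup"
```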

      The -t argument to cp is important because cp normally expects the destination to come last. You can do this without xargs using find's -exec command, but the xargs method will be faster, especially with a large number of files, because it will run as a single invocation of cp.
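For comparison, the -exec route mentioned above might look like the following sketch (hypothetical throwaway directories; the `\;` terminator runs one cp per file, while GNU find's `+` terminator batches arguments much like xargs does):

```shell
# Demo directories standing in for your home directory and the thumb drive.
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/a.jpg" "$src/b.jpeg"

# One cp invocation per matching file -- correct, but slow for many files.
find "$src" \( -iname '*.jpeg' -o -iname '*.jpg' \) -type f -exec cp -t "$dst" {} \;

# The "+" terminator packs file names into as few cp invocations as possible.
find "$src" \( -iname '*.jpeg' -o -iname '*.jpg' \) -type f -exec cp -t "$dst" {} +
```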

      Find out more

      This post only scratches the surface of what find can do. find supports testing based on permissions, ownership, access time, and much more. It can even compare the files in the search path to other files. Combining tests with Boolean logic gives you incredible flexibility to find exactly the files you’re looking for. With built-in commands or piping to xargs, you can quickly process a large set of files.
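As a small taste of those other tests, here is a sketch exercising permission and ownership matching on hypothetical throwaway files:

```shell
# Two files: one private (0600), one world-readable (0644).
tmp=$(mktemp -d)
touch "$tmp/secret" "$tmp/shared"
chmod 600 "$tmp/secret"
chmod 644 "$tmp/shared"

# Files readable by "other" users:
find "$tmp" -type f -perm -o=r

# Boolean logic: files NOT other-readable, owned by the current user:
find "$tmp" -type f ! -perm -o=r -user "$(id -un)"
```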

      Portions of this article were previously published on Opensource.com. Photo by Warren Wong on Unsplash.