Fedora People

How to run virtual machines with virt-manager

Posted by Fedora Magazine on July 22, 2019 08:00 AM

In the beginning there was dual boot: it was the only way to have more than one operating system on the same laptop. At the time, it was difficult for those operating systems to run simultaneously or interact with each other. Many years passed before it became possible, on common PCs, to run one operating system inside another through virtualization.

Recent PCs and laptops, including moderately-priced ones, have the hardware features to run virtual machines with performance close to that of the physical host machine.
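You can check whether your machine exposes these hardware features before going any further. This quick check is not part of the setup steps below, just a common sanity test: the vmx (Intel VT-x) and svm (AMD-V) CPU flags indicate hardware virtualization support.

```shell
# Count CPU threads advertising hardware virtualization
# (vmx = Intel VT-x, svm = AMD-V). A result of 0 means KVM
# will have to fall back to much slower software emulation.
grep -Ec '(vmx|svm)' /proc/cpuinfo || echo "no virtualization extensions found"
```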

Virtualization has therefore become commonplace: for testing operating systems, as a playground for learning new techniques, for creating your own home cloud or your own test environment, and much more. This article walks you through using Virt Manager on Fedora to set up virtual machines.

Introducing QEMU/KVM and Libvirt

Fedora, like all other Linux systems, comes with native support for virtualization extensions. This support is provided by KVM (Kernel-based Virtual Machine), currently available as a kernel module.

QEMU is a complete system emulator that works together with KVM and allows you to create virtual machines with hardware and peripherals.

Finally, libvirt is the API layer that allows you to administer the infrastructure, i.e. create and run virtual machines.

Together these three technologies, all open source, are what we’re going to install on our Fedora Workstation.


Step 1: install packages

Installation is a fairly simple operation. The Fedora repository provides the “virtualization” package group that contains everything you need.

sudo dnf install @virtualization

Step 2: edit the libvirtd configuration

By default, administration of libvirt is limited to the root user. If you want to enable a regular user, proceed as follows.

Open the /etc/libvirt/libvirtd.conf file for editing:

sudo vi /etc/libvirt/libvirtd.conf

Set the domain socket group ownership to libvirt:

unix_sock_group = "libvirt"

Adjust the UNIX socket permissions for the R/W socket:

unix_sock_rw_perms = "0770"
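If you prefer not to edit the file by hand, the same two changes can be scripted with sed. The snippet below is just a sketch: it demonstrates the substitutions on a temporary copy so you can see the effect; on a real system you would run the same expressions with sudo against /etc/libvirt/libvirtd.conf.

```shell
# Demonstrate the two libvirtd.conf edits on a temporary copy.
conf=$(mktemp)
printf '#unix_sock_group = "libvirt"\n#unix_sock_rw_perms = "0770"\n' > "$conf"

# Uncomment (or rewrite) the two settings in place.
sed -i -e 's/^#\?unix_sock_group = .*/unix_sock_group = "libvirt"/' \
       -e 's/^#\?unix_sock_rw_perms = .*/unix_sock_rw_perms = "0770"/' "$conf"

grep '^unix_sock' "$conf"
```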

Step 3: start and enable the libvirtd service

sudo systemctl start libvirtd
sudo systemctl enable libvirtd

Step 4: add user to group

In order to administer libvirt as a regular user, you must add that user to the libvirt group; otherwise, every time you start virt-manager you will be asked for the sudo password.

sudo usermod -a -G libvirt $(whoami)

This adds the current user to the group. You must log out and log in to apply the changes.
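After logging back in, you can verify that the membership took effect (the new group only shows up in a fresh login session):

```shell
# List the groups of the current session; "libvirt" should appear.
id -nG "$(whoami)" | tr ' ' '\n' | sort
```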

Getting started with virt-manager

The libvirt system can be managed either from the command line (virsh) or via the virt-manager graphical interface. The command line can be very useful if you want to do automated provisioning of virtual machines, for example with Ansible, but in this article we will concentrate on the user-friendly graphical interface.
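To give a flavor of that scripted route, here is a sketch of a small provisioning helper. The VM name, sizes and ISO path are made-up examples, and the function only prints the virt-install command it would run; drop the echo to actually execute it.

```shell
# Sketch: assemble a virt-install invocation from a few parameters.
# Printing instead of executing lets you inspect the command first.
provision_vm() {
    name=$1; mem_mib=$2; disk_gib=$3; iso=$4
    echo virt-install --name "$name" --memory "$mem_mib" \
        --vcpus 2 --disk size="$disk_gib" --cdrom "$iso"
}

provision_vm fedora-test 2048 20 /var/lib/libvirt/images/Fedora-Workstation.iso
```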

The virt-manager interface is simple. The main form shows the list of connections including the local system connection.

The connection settings include the virtual network and storage definitions. It is possible to define multiple virtual networks, and these networks can be used for communication between guest systems as well as between the guests and the host.

Creating your first virtual machine

To start creating a new virtual machine, press the button at the top left of the main form:

<figure class="wp-block-image"></figure>

The first step of the wizard asks for the installation mode. You can choose between local installation media, network boot/installation, or importing an existing virtual disk:

<figure class="wp-block-image"></figure>

If you choose local installation media, the next step will ask for the path to the ISO image:

<figure class="wp-block-image"></figure>

The next two steps let you size the CPU, memory and disk of the new virtual machine. The last step asks for your network preferences: choose the default network if you want the virtual machine to be separated from the outside world by NAT, or bridged if you want it to be reachable from the outside. Note that if you choose bridged, the virtual machine cannot communicate with the host machine.

Check “Customize configuration before install” if you want to review or change the configuration before starting the setup:

<figure class="wp-block-image"></figure>

The virtual machine configuration form allows you to review and modify the hardware configuration. You can add disks, network interfaces, change boot options and so on. Press “Begin installation” when satisfied:

<figure class="wp-block-image"></figure>

At this point you will be redirected to the console, where you can proceed with installing the operating system. Once the installation is complete, you will have a working virtual machine that you can access from the console:

<figure class="wp-block-image"></figure>

The virtual machine you just created will appear in the list in the main form, along with a graph of its CPU and memory usage:

<figure class="wp-block-image"></figure>

libvirt and virt-manager are powerful tools that allow great customization of your virtual machines, with enterprise-level management. If you’d like something even simpler, note that Fedora Workstation comes with GNOME Boxes pre-installed, which can be sufficient for basic virtualization needs.

Contribute at the Fedora Test Week for kernel 5.2

Posted by Fedora Magazine on July 22, 2019 06:51 AM

The kernel team is working on final integration for kernel 5.2. This version was just recently released, and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test week from Monday, July 22, 2019 through Monday, July 29, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Episode 155 - Stealing cars and ransomware

Posted by Open Source Security Podcast on July 22, 2019 12:01 AM
Josh and Kurt talk about a new way to steal cars because a service didn't do proper background checks. We also discuss how this relates to working with criminals, such as ransomware, and what it means for the future of the ransomware industry.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/10588175/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

    Westcoast hackfest; GTK updates

    Posted by Matthias Clasen on July 21, 2019 11:51 PM

    After Behdad left, Christian and I turned our attention to GtkTextView, and made some progress.


    GtkTextView is a very old widget. It started out as a port of the Tk text widget, and it has not seen a lot of architectural updates over the years. A few years ago, we added a pixel cache to it to improve its scrolling, but on a high-resolution display, it’s still a lot of pixels to shovel around.

    As we’ve moved widgets to GTK4’s rendering models, everybody avoided GtkTextView, so it was using the fallback cairo rendering path, even as we ported other text rendering in GTK to a new pango renderer which produces render nodes.

    Until yesterday. We decided to just have a look at how hard it would be to switch the text view over to the new pango renderer. This went much more smoothly than we expected, and the new code is in master today.

    <iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="267" src="https://www.youtube.com/embed/zDLCJCX1kL0?feature=oembed" title="Gtk 4 smooth scrolling with GPU backed textview" width="474"></iframe>

    So far, this is just a straight port with no optimizations (we want to look at smarter caching of render nodes for the visible range). But it is already noticeably smoother to scroll text.

    The video does not really do it justice. If you want to try for yourself, the commit is here.


    After this unexpected success, we looked for another small thing we could do to make text editing in GTK feel more modern: better blinking cursors.

    <video class="wp-video-shortcode" controls="controls" height="267" id="video-3230-1" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2019/07/cursor-blinks.webm?_=1" type="video/webm">https://blogs.gnome.org/mclasen/files/2019/07/cursor-blinks.webm</video>

    For the last 20 years, our cursor blinking was very simple: We turn it off, and then we turn it on again. With GTK4, it is very straightforward to do a little better, and fade the cursor in and out smoothly.

    A subtle change, but it improves the experience.

    Changing how we work

    Posted by Kevin Fenzi on July 21, 2019 08:59 PM

    As those of you who read the https://communityblog.fedoraproject.org/state-of-the-community-platform-engineering-team/ blog know, we are looking at changing workflows and organization in the Community Platform Engineering team (of which I am a member). So, I thought I would share a few thoughts from my perspective and hopefully enlighten the community more on why we are changing things and what that might look like.

    First, let me preface my remarks with a disclaimer: I am speaking for myself, not our entire team or anyone else in it.

    So what are the reasons we are looking for change? Well, there are a number of them, some of them inter-related, but:

    • I know I spend more time on my job than any ‘normal’ person would. That’s great, but we don’t want burnout or heroic efforts all the time. It’s just not sustainable. We want to get things done more efficiently, but also have time to relax and not have tons of stress.
    • We maintain/run too many things for the number of people we have. Some of our services don’t need much attention, but even so, we have added lots of things over the years and retired very few.
    • Humans suck at multitasking. Study after study shows that for the vast majority of people, it is MUCH more efficient to do one task at a time, finish it, and then move on. Our team gets constant interruptions, and we currently handle them poorly.
    • It’s unclear where big projects are in our backlog. When other teams approach us with big items to do, it’s hard to show them when we might work on the thing they want, or what’s ahead of it, or what priority things have.
    • We have a lot of ‘silos’. Just the way the team has worked, one person usually takes lead on each specific application or area and knows it quite well. This however means no one else does, no one else can help, they can never win the lottery, etc.
    • Things without a ‘driver’ sometimes just languish. If there is not someone (one of our team or even a requestor) pressing a work item forward, sometimes it just never gets done. Look at some of the old tickets in the fedora-infrastructure tracker. We totally want to do many of those, but they never get someone scheduling them and doing them.
    • There’s likely more…

    So, what have we done lately to help with these issues? We have been looking a lot at other similar teams and how they became more efficient. We have been looking at various ‘agile’ processes, although I personally do not want to cargo-cult anything: if there’s a ceremony some process calls for that makes no sense for us, we should not do it.

    • We set up an ‘oncall’ person (switched weekly). This person listens for pings on IRC, tickets or emails to anyone on the team and tries to intercept and triage them. This allows the rest of the team to focus on whatever they are working on (unless the oncall person deems something serious enough to bother them). Even if you only stop to tell someone you don’t have time and are busy on something else, the amount of time needed to swap that context out and back in already makes things much worse. We of course will still be happy to work with people on IRC, just schedule time in advance in the associated ticket.
    • Ticket or it doesn’t exist. We are still somewhat bad about this, but the idea is that every work item should be a ticket. Why? So we can keep track of the things we do, so oncall can triage them and assign priority, so people can pick up tickets when they have finished a task rather than being interrupted in the middle of one, so we can hand off items that are still being worked on and coordinate, and so we know who is doing what. And on and on.
    • We are moving our ‘big project’ items to be handled by teams that assemble for that project. This includes a gathering info phase, priority, who does what, estimated schedule, etc. This ensures that there’s no silo (multiple people working on it), that it has a driver so it gets done and so on. Setting expectations is key.
    • We are looking to retire, outsource or hand off to community members some of the things we ‘maintain’ today. There’s a few things that just make sense to drop because they aren’t used much, or we can just point at some better one. There’s also a group of things that we could run, but we could just outsource to another company that focuses on that application and have them do it. Finally there are things we really like and want to grow, but we just don’t have any time to work on them. If we hand them off to people who are passionate about them, hopefully they will grow much better than if we were still the bottleneck.

    Finally, where are we looking at getting to?

    • We will probably be setting up a new tracker for work (which may not mean anything changes for our existing trackers; we may just sync from those to the new one). This is to allow us to get lots more metrics and have a better way of tracking all this stuff. This is all still handwavy, but we will of course take input on it as we go and adjust.
    • Have an ability to look and see what everyone is working on right at a point in time.
    • Much more ‘planning ahead’ and seeing all the big projects on the list.
    • Have an ability for stakeholders to see where their thing is and who is higher priority and be able to negotiate to move things around.
    • Be able to work on single tasks to completion, then grab the next one from the backlog.
    • Be able to work “normal” amounts of time… no heroics!

    I hope everyone will be patient with us as we do these things, provide honest feedback to us so we can adjust and help us get to a point where everyone is happier.

    Pango updates

    Posted by Matthias Clasen on July 19, 2019 07:35 PM
    I have recently spent some time on Pango again, in preparation for the Westcoast hackfest. Behdad is here, and we’ve made great progress on the first day.


    My last Pango update laid out our plans for Pango. Today I’ll summarize the major changes that will be in the next Pango release, 1.44.

    Unicode APIs

    I had planned to replace PangoScript with GUnicodeScript outright, but doing so caused breakage in introspection and elsewhere. So, for now, we’ve just deprecated it and recommend that everybody use GUnicodeScript instead. We did get a registered GType for this (and other) enumerations into GObject, so the lack of a type is no longer an obstacle.

    Harfbuzz passthrough

    We have added an API to get a Harfbuzz font object from a PangoFont:

    hb_font_t *pango_font_get_hb_font (PangoFont *f)

    This makes technologies such as OpenType features or variations available to applications without adding more Pango APIs in the future.

    Reduced freetype dependency

    Pango now uses harfbuzz for getting font and glyph metrics, glyph IDs and other kinds of font information, so we don’t need an FT_Face anymore, and pango_fc_font_lock_face() has been deprecated.

    Unified shaping

    We are using harfbuzz for shaping on all platforms now.  This has allowed us to drop the remaining internal uses of shape and language engines.

    Unhinted rendering

    Pango no longer forces glyph positions and sizes to be on integral pixel positions. This allows renderers to place glyphs on a subpixel grid. cairo master has the necessary changes to make this work.

    Modifying Windows local accounts with Fedora and chntpw

    Posted by Fedora Magazine on July 19, 2019 08:00 AM

    I recently encountered a problem at work where a client’s Windows 10 PC lost its trust relationship with the domain. The user is an executive, and any downtime on his computer can affect real-time mission-critical tasks. He gave me 30 minutes to resolve the issue while he attended a meeting.

    Needless to say, I’ve encountered this issue many times in my career. It’s an easy fix using the Windows 7/8/10 installation media to reset the Administrator password, remove the PC from the domain and rejoin it. Unfortunately it didn’t work this time. After 20 minutes of scouring the net and scanning through the Microsoft Docs with no success, I turned to my development machine running Fedora with hopes of finding a solution.

    With dnf search I found a utility called chntpw:

    $ dnf search windows | grep password

    According to the summary, chntpw will “change passwords in Windows SAM files.”

    Little did I know at the time there was more to this utility than explained in the summary. Hence, this article will go through the steps I used to successfully reset a Windows local user password using chntpw and a Fedora Workstation Live boot USB. The article will also cover some of the features of chntpw used for basic user administration.

    Installation and setup

    If the PC can connect to the internet after booting the live media, install chntpw from the official Fedora repository with:

    $ sudo dnf install chntpw

    If you’re unable to access the internet, no sweat! Fedora Workstation Live boot media has all the dependencies installed out-of-the-box, so all we need is the package. You can find the builds for your Fedora version from the Fedora Project’s Koji site. You can use another computer to download the utility and use a USB thumb drive, or other form of media to copy the package.

    First and foremost we need to create the Fedora Live USB stick. If you need instructions, the article on How to make a Fedora USB stick is a great reference.

    Once the key is created, shut down the Windows PC, insert the thumb drive if the USB key was created on another computer, and turn on the PC — be sure to boot from the USB drive. Once the live media boots, select “Try Fedora” and open the Terminal application.

    Also, we need to mount the Windows drive to access the files. Enter the following command to view all drive partitions with an NTFS filesystem:

    $ sudo blkid | grep ntfs

    Most hard drives are assigned to /dev/sdaX where X is the partition number — virtual drives may be assigned to /dev/vdX, and some newer drives (like SSDs) use /dev/nvmeX. For this example the Windows C drive is assigned to /dev/sda2. To mount the drive enter:

    $ sudo mount /dev/sda2 /mnt

    Fedora Workstation contains the ntfs-3g and ntfsprogs packages out-of-the-box. If you’re using a spin that does not have NTFS working out of the box, you can install these two packages from the official Fedora repository with:

    $ sudo dnf install ntfs-3g ntfsprogs

    Once the drive is mounted, navigate to the location of the SAM file and verify that it’s there:

    $ cd /mnt/Windows/System32/config
    $ ls | grep SAM
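    Before changing anything, it’s wise to keep a backup copy of the hive so you can restore it if an edit goes wrong. This step isn’t in chntpw’s instructions, just ordinary caution; the snippet demonstrates the idea on a stand-in file in a temporary directory, and on the mounted drive you would run the same cp with sudo inside /mnt/Windows/System32/config.

```shell
# Demonstrate backing up a registry hive before editing,
# using a stand-in file in a temporary directory.
workdir=$(mktemp -d)
touch "$workdir/SAM"                   # stand-in for the real SAM hive
cp "$workdir/SAM" "$workdir/SAM.bak"   # pristine copy to restore from
ls "$workdir"
```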

    Clearing or resetting a password

    Now it’s time to get to work. The help flag -h provides everything we need to know about this utility and how to use it:

    $ chntpw -h
    chntpw: change password of a user in a Windows SAM file,
    or invoke registry editor. Should handle both 32 and 64 bit windows and
    all version from NT3.x to Win8.1
    chntpw [OPTIONS] [systemfile] [securityfile] [otherreghive] […]
    -h This message
    -u Username or RID (0x3e9 for example) to interactively edit
    -l list all users in SAM file and exit
    -i Interactive Menu system
    -e Registry editor. Now with full write support!
    -d Enter buffer debugger instead (hex editor),
    -v Be a little more verbose (for debuging)
    -L For scripts, write names of changed files to /tmp/changed
    -N No allocation mode. Only same length overwrites possible (very safe mode)
    -E No expand mode, do not expand hive file (safe mode)

    Usernames can be given as name or RID (in hex with 0x first)
    See readme file on how to get to the registry files, and what they are.
    Source/binary freely distributable under GPL v2 license. See README for details.
    NOTE: This program is somewhat hackish! You are on your own!

    Use the -l parameter to display a list of users it reads from the SAM file:

    $ sudo chntpw -l SAM
    chntpw version 1.00 140201, (c) Petter N Hagen
    Hive name (from header): <\SystemRoot\System32\Config\SAM>
    ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
    File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
    Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

    | RID -|---------- Username ------------| Admin? |- Lock? --|
    | 01f4 | Administrator | ADMIN | dis/lock |
    | 01f7 | DefaultAccount | | dis/lock |
    | 03e8 | defaultuser0 | | dis/lock |
    | 01f5 | Guest | | dis/lock |
    | 03ea | sysadm | ADMIN | |
    | 01f8 | WDAGUtilityAccount | | dis/lock |
    | 03e9 | WinUser | | |

    Now that we have a list of Windows users we can edit the account. Use the -u parameter followed by the username and the name of the SAM file. For this example, edit the sysadm account:

    $ sudo chntpw -u sysadm SAM
    chntpw version 1.00 140201, (c) Petter N Hagen
    Hive name (from header): <\SystemRoot\System32\Config\SAM>
    ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
    File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
    Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

    ================= USER EDIT ====================

    RID : 1002 [03ea]
    Username: sysadm
    fullname: SysADM
    comment :
    homedir :

    00000220 = Administrators (which has 2 members)

    Account bits: 0x0010 =
    [ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
    [ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
    [ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
    [ ] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
    [ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |

    Failed login count: 0, while max tries is: 0
    Total login count: 0

    - - - User Edit Menu:
    1 - Clear (blank) user password
    (2 - Unlock and enable user account) [seems unlocked already]
    3 - Promote user (make user an administrator)
    4 - Add user to a group
    5 - Remove user from a group
    q - Quit editing user, back to user select
    Select: [q] >

    To clear the password press 1 and ENTER. If successful you will see the following message:

    Select: [q] > 1
    Password cleared!
    ================= USER EDIT ====================

    RID : 1002 [03ea]
    Username: sysadm
    fullname: SysADM
    comment :
    homedir :

    00000220 = Administrators (which has 2 members)

    Account bits: 0x0010 =
    [ ] Disabled | [ ] Homedir req. | [ ] Passwd not req. |
    [ ] Temp. duplicate | [X] Normal account | [ ] NMS account |
    [ ] Domain trust ac | [ ] Wks trust act. | [ ] Srv trust act |
    [ ] Pwd don't expir | [ ] Auto lockout | [ ] (unknown 0x08) |
    [ ] (unknown 0x10) | [ ] (unknown 0x20) | [ ] (unknown 0x40) |

    Failed login count: 0, while max tries is: 0
    Total login count: 0
    ** No NT MD4 hash found. This user probably has a BLANK password!
    ** No LANMAN hash found either. Try login with no password!

    Verify the change by repeating:

    $ sudo chntpw -l SAM
    chntpw version 1.00 140201, (c) Petter N Hagen
    Hive name (from header): <\SystemRoot\System32\Config\SAM>
    ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
    File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
    Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

    | RID -|---------- Username ------------| Admin? |- Lock? --|
    | 01f4 | Administrator | ADMIN | dis/lock |
    | 01f7 | DefaultAccount | | dis/lock |
    | 03e8 | defaultuser0 | | dis/lock |
    | 01f5 | Guest | | dis/lock |
    | 03ea | sysadm | ADMIN | *BLANK* |
    | 01f8 | WDAGUtilityAccount | | dis/lock |
    | 03e9 | WinUser | | |


    The “Lock?” column now shows BLANK for the sysadm user. Type q to exit and y to write the changes to the SAM file. Reboot the machine into Windows and login using the account (in this case sysadm) without a password.


    Furthermore, chntpw can perform basic Windows user administrative tasks. It has the ability to promote the user to the administrators group, unlock accounts, view and modify group memberships, and edit the registry.

    The interactive menu

    chntpw has an easy-to-use interactive menu to guide you through the process. Use the -i parameter to launch the interactive menu:

    $ sudo chntpw -i SAM
    chntpw version 1.00 140201, (c) Petter N Hagen
    Hive name (from header): <\SystemRoot\System32\Config\SAM>
    ROOT KEY at offset: 0x001020 * Subkey indexing type is: 686c
    File size 65536 [10000] bytes, containing 7 pages (+ 1 headerpage)
    Used for data: 346/37816 blocks/bytes, unused: 23/7016 blocks/bytes.

    <>========<> chntpw Main Interactive Menu <>========<>
    Loaded hives:
    1 - Edit user data and passwords
    2 - List groups
    - - -
    9 - Registry editor, now with full write support!
    q - Quit (you will be asked if there is something to save)

    Groups and account membership

    To display a list of groups and view its members, select option 2 from the interactive menu:

    What to do? [1] -> 2
    Also list group members? [n] y
    === Group # 220 : Administrators
    0 | 01f4 | Administrator |
    1 | 03ea | sysadm |
    === Group # 221 : Users
    1 | 000b | NT AUTHORITY\Authenticated Users |
    2 | 03e8 | defaultuser0 |
    3 | 03e9 | WinUser |
    === Group # 222 : Guests
    0 | 01f5 | Guest |
    === Group # 223 : Power Users
    === Group # 247 : Device Owners

    Adding the user to the administrators group

    To elevate the user with administrative privileges press 1 to edit the account, then 3 to promote the user:

    Select: [q] > 3

    Will add the user to the administrator group (0x220)
    and to the users group (0x221). That should usually be
    what is needed to log in and get administrator rights.
    Also, remove the user from the guest group (0x222), since
    it may forbid logins.

    (To add or remove user from other groups, please other menu selections)

    Note: You may get some errors if the user is already member of some
    of these groups, but that is no problem.

    Do it? (y/n) [n] : y

    Adding to 0x220 (Administrators) …
    sam_put_user_grpids: success exit
    Adding to 0x221 (Users) …
    sam_put_user_grpids: success exit
    Removing from 0x222 (Guests) …
    remove_user_from_grp: NOTE: group not in users list of groups, may mean user not member at all. Safe. Continuing.
    remove_user_from_grp: NOTE: user not in groups list of users, may mean user was not member at all. Does not matter, continuing.
    sam_put_user_grpids: success exit

    Promotion DONE!

    Editing the Windows registry

    Certainly the most noteworthy, as well as the most powerful, feature of chntpw is the ability to edit the registry and write to it. Select 9 from the interactive menu:

    What to do? [1] -> 9
    Simple registry editor. ? for help.

    > ?
    Simple registry editor:
    hive [] - list loaded hives or switch to hive number
    cd - change current key
    ls | dir [] - show subkeys & values,
    cat | type - show key value
    dpi - show decoded DigitalProductId value
    hex - hexdump of value data
    ck [] - Show keys class data, if it has any
    nk - add key
    dk - delete key (must be empty)
    ed - Edit value
    nv - Add value
    dv - Delete value
    delallv - Delete all values in current key
    rdel - Recursively delete key & subkeys
    ek - export key to (Windows .reg file format)
    debug - enter buffer hexeditor
    st [] - debug function: show struct info
    q - quit

    Finding help

    As we saw earlier, the -h parameter allows us to quickly access a reference guide to the options available with chntpw. The man page contains detailed information and can be accessed with:

    $ man chntpw

    Also, if you’re interested in a more hands-on approach, spin up a virtual machine. Windows Server 2019 has an evaluation period of 180 days, and Windows Hyper-V Server 2019 is unlimited. Creating a Windows guest VM will provide the basics to modify the Administrator account for testing and learning. For help with quickly creating a guest VM refer to the article Getting started with virtualization in Gnome Boxes.


    chntpw is a hidden gem for Linux administrators and IT professionals alike. While a nifty tool to quickly reset Windows account passwords, it can also be used to troubleshoot and modify local Windows accounts with a no-nonsense feel that delivers. This is perhaps only one such tool for solving the problem, though. If you’ve experienced this issue and have an alternative solution, feel free to put it in the comments below.

    This tool, like many other “hacking” tools, holds with it an ethical responsibility. Even chntpw states:

    NOTE: This program is somewhat hackish! You are on your own!

    When using such programs, we should remember the three edicts outlined in the message displayed when running sudo for the first time:

    1. Respect the privacy of others.
    2. Think before you type.
    3. With great power comes great responsibility.

    Photo by Silas Köhler on Unsplash.

    Codes of Conduct and Hypocrisy

    Posted by Daniel Pocock on July 19, 2019 07:20 AM

    In recent times, there has been increasing attention on all forms of abuse and violence against women.

    Many types of abuse are hidden from public scrutiny. Yet there is one that is easily visible: the acid attack.

    Reshma Qureshi, pictured above, was attacked by an estranged brother-in-law. He had aimed to attack her sister, his ex-wife. This reveals one of the key attributes of these attacks: they are often perpetrated by somebody who the victim trusted.

    When so many other forms of abuse are hidden, why is the acid attack so visible? This is another common theme: the perpetrator is often motivated to leave lasting damage, to limit the future opportunities available to the victim. It is not about hurting the victim, it is about making sure they will be rejected by others.

    It is disturbing then that we find similar characteristics in online communities. Debian and Wikimedia (beware: scandal) have both recently decided to experiment with publicly shaming, humiliating and denouncing people. In the world of technology, trust is critical. People in positions of leadership have found that a simple email to the press can be used to undermine trust in a rival, leaving a smear that will linger, like the scars intended by Qureshi's estranged brother-in-law. Here is an example:

    Jackson's virtual acid attack was picked up by at least one journalist and used to create a news story.

    Some people spend endless hours talking (or writing) about safety and codes of conduct, yet they seem to completely miss the point. Personally, I don't object to codes of conduct, but we have to remember that not all codes of conduct are equal. In practice, the use of codes of conduct in many free software communities today looks like this:

    If you search for sample codes of conduct online, you may well find some organizations use alternative titles, such as a statement of members' rights and obligations. This reminds us that you need to have both.

    When we see organizations like FSFE and Debian trying to make up excuses to explain why members can't be members of their respective legal bodies, what they are really saying is that they want the members to have less rights.

    When you have obligations without rights, you end up with slavery and cult-like phenomena.

    History lessons

    One of the first codes of conduct may be the Magna Carta from the year 1215. Lord Denning described it as the greatest constitutional document of all times – the foundation of the freedom of the individual against the arbitrary authority of the despot.

    In other words, 800 years ago in medieval England they came to the conclusion that members of a community couldn't be punished arbitrarily.

    What is significant about this document is that the king himself chose to be subjected to this early code of conduct.

    An example of rights

    In 2016, when serious accusations of sexual misconduct were made against a volunteer who participates in multiple online communities, the Debian Account Managers sent him a threat of expulsion and gave him two days to respond.

    Yet in 2018, when Chris Lamb decided to indulge in removing members from the Debian keyring, he simply did it spontaneously, using the Debian Account Managers as puppets to do his bidding. Members targeted by these politically-motivated assassinations weren't given the same two day notice period as the person facing allegations of sexual assault.

    Two days hardly seems like sufficient time to respond to such allegations, especially for the member who was ambushed the week before Christmas. What if such a message was sent when he was already on vacation and he didn't even receive it until January? Nonetheless, however crude, a two day response period is a process. Chris Lamb threw that process out the window. There is something incredibly arrogant about that: a leader who doesn't need to listen to people before making such a serious decision, as if he thinks being Debian Project Leader is equivalent to being God.

    The Universal Declaration of Human Rights, Article 10 tells us that Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations. They were probably thinking about more than a two day response period when they wrote that.

    Any organization seeking to have a credible code of conduct seeks to have a clause equivalent to article 10. Yet the recent scandals in Debian and Wikimedia demonstrate what happens in the absence of such clauses. As Lord Denning put it, without any process or hearing, members are faced with the arbitrary authority of the despot.

    The trauma of incarceration

    In her FOSDEM 2019 talk about Enforcement, Molly de Blanc chose pictures of a cat behind bars and a cat being squashed in a sofa.

    It is abhorrent that de Blanc chose to use this imagery just three days after another member of the Debian community passed away. Locking up people (or animals) is highly abusive and not something to joke about. For example, we wouldn't joke with a photo of an animal being raped, so why is it OK to display an image of a cat behind bars?

    Deaths in custody are a phenomenon that is both disturbing and far too common. Debian's founder had taken his life immediately after a period of incarceration.

    Virtual incarceration

    The system of secretly shaming people, censoring people, demoting people and running huge lynching threads on the debian-private mailing list has many psychological similarities to incarceration.

    Here is a snapshot of what happens on debian-private:

    It resembles the medieval practice of locking people in the pillory or stocks and inviting the rest of the community to throw rocks and garbage at them.

    How would we feel if somebody either responded to this virtual lynching with physical means, or if they took their own life or the lives of other people? In my earlier blog about secret punishments, I referred to the research published in Social Psychology of Education which found that psychological impacts of online bullying, which includes shaming, are just as harmful as the psychological impact from child abuse.

    Would you want to holiday in a village that re-introduced this type of cruel punishment? It turns out, studies have also shown that witnesses to the bullying, which could include any subscribers to the debian-private mailing list, may be suffering as much or more harm than the victims.

    If Debian's new leader took bullying seriously, he would roll back all decisions made through such vile processes, delete all evidence of the bullying from public mailing list archives and give a public statement to confirm that the organization failed. Instead, we see people continuing to try and justify a kangaroo court, using grievance procedures sketched on the back of a napkin.

    What is leadership for?

    It is generally accepted that leaders of modern organizations should act to prevent lynchings and mobbings in their organizations. Yet in recent cases in both Debian and Wikimedia, it appears that the leaders have been the instigators, using the lynching to turn opinion against their victims before there is any time to analyse evidence or give people a fair hearing.

    What's more, many people have formed the impression that Molly de Blanc's talks on this subject are not only encouraging these practices but also trolling the victims. She is becoming a trauma trigger for anybody who has ever been bullied.

    Looking over the debian-project mailing list since December 2018, it appears all the most abusive messages, such as the call for dirt on another member, or the public announcement that a member is on probation, have been written by people in a position of leadership or authority, past or present. These people control the infrastructure, they know the messages will reach a lot of people and they intend to preserve them publicly for eternity. That is remarkably similar to the mindset of the men who perpetrate acid attacks on women they can't control.

    Therefore, if the leader of an organization repeatedly indulges himself, telling volunteers they are not real developers, has he really made them less of a developer, or has he simply become less of a leader, demoting himself to become one of the despots Lord Denning refers to?

    Setting up WKD

    Posted by Kushal Das on July 19, 2019 05:35 AM

    We usually fetch a GPG public key from the keyservers using its fingerprint (or part of it). This step is still problematic for most of us: the servers may not respond, or the key may be missing (never pushed) from the server. Also, if we only have the email address, there is no easy way to download the corresponding GPG key.

    Web Key Directory to the rescue

    This is where the Web Key Directory comes into the picture. We use WKD to enable others to fetch our GPG keys for our email addresses very easily. In simple terms:

    The Web Key Directory is the HTTPS directory from which keys can be fetched.

    Let us first see this in action:

    gpg --auto-key-locate clear,wkd --locate-key mail@kushaldas.in

    The above will fetch you the key for the email address, and you can also assume that the person who owns the key has access to the https://kushaldas.in server.

    There are many email clients which will do this for you, for example Thunderbird with Enigmail 2.0, or KMail from version 5.6 onwards.
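Under the hood, the client maps the email address to a well-known HTTPS URL: the local part is lowercased, hashed with SHA-1, and the hash is encoded with z-base-32. The Python sketch below (the helper names are mine) builds the “direct method” URL served by the Makefile shown in the next section; recent drafts of the spec also append the local part as an l= query parameter.

```python
import hashlib

# z-base-32 alphabet, as used by the Web Key Directory draft
ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    """Encode bytes with z-base-32 (SHA-1's 160 bits split evenly into 5-bit groups)."""
    bits = int.from_bytes(data, "big")
    return "".join(ZBASE32[(bits >> s) & 31]
                   for s in range(len(data) * 8 - 5, -1, -5))

def wkd_direct_url(email: str) -> str:
    """Build the WKD 'direct method' URL for an email address."""
    local, domain = email.lower().split("@")
    wkd_hash = zbase32(hashlib.sha1(local.encode()).digest())
    return f"https://{domain}/.well-known/openpgpkey/hu/{wkd_hash}?l={local}"
```

For mail@kushaldas.in this yields a 32-character hash under https://kushaldas.in/.well-known/openpgpkey/hu/, which is exactly the directory the setup below publishes.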

    Setting up WKD for your domain

    I was going through the steps mentioned in the GnuPG wiki, when weasel pointed me to a Makefile that keeps things even more straightforward.

    all: update install

    update:
            rm -rfv openpgpkey
            mkdir -v openpgpkey
            echo 'A85FF376759C994A8A1168D8D8219C8C43F6C5E1 mail@kushaldas.in' | /usr/lib/gnupg/gpg-wks-client -v --install-key
            chmod -v 0711 openpgpkey/kushaldas.in
            chmod -v 0711 openpgpkey/kushaldas.in/hu
            chmod -v 0644 openpgpkey/kushaldas.in/hu/*
            touch openpgpkey/kushaldas.in/policy
            ln -s kushaldas.in/hu openpgpkey/
            ln -s kushaldas.in/policy openpgpkey/

    install: update
            rsync -Pravz --delete ./openpgpkey root@kushaldas.in:/usr/local/www/kushaldas.in/.well-known/

    .PHONY: all update install

    The above Makefile uses the gpg-wks-client executable and also pushes the changes to the right directory on the server.

    Email providers like Protonmail already allow users to publish similar information. I hope this small Makefile will help you set up WKD for your own domain.

    Untitled Post

    Posted by Carlos Jara Alva on July 18, 2019 07:44 PM
    FLISOL 2019
    In April, FLISOL was held in different parts of Latin America. In Peru, it took place at only a few venues:
    In Lima, the only participating venue was the Municipality of Pueblo Libre, to which the Fedora Peru community was invited: https://www.facebook.com/proyectofedoraperu/

    Thank you very much for the support.

    Contribute at the Fedora Test Week for kernel 5.2

    Posted by Fedora Community Blog on July 18, 2019 05:22 PM
    Fedora 30 Kernel 5.2 Test Day

    The kernel team is working on final integration for kernel 5.2. This version was just recently released and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test week from Monday, July 22, 2019 through Monday, July 29, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

    How does a test week work?

    A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

    To contribute, you only need to be able to do the following things:

    • Download test materials, which include some large files
    • Read and follow directions step by step

    The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

    Happy testing, and we hope to see you on test day.

    The post Contribute at the Fedora Test Week for kernel 5.2 appeared first on Fedora Community Blog.

    Text install's Revenge!

    Posted by Luigi Votta on July 18, 2019 03:16 PM
    To install Fedora 30 in text mode,
    selecting what you need and prefer, try this!

    Download a net-install ISO.
    When GRUB loads, add inst.text at the end of the kernel line.

    Building blocks of syslog-ng

    Posted by Peter Czanik on July 18, 2019 09:20 AM

    Recently I gave a syslog-ng introductory workshop at Pass the SALT conference in Lille, France. I got a lot of positive feedback, so I decided to turn all that feedback into a blog post. Naturally, I shortened and simplified it, but still managed to get enough material for multiple blog posts.

    This one gives you an overview of syslog-ng, its major features and an introduction to its configuration.

    What is logging & syslog-ng?

    Let’s start from the very beginning. Logging is the recording of events on a computer. And what is syslog-ng? It’s an enhanced logging daemon with a focus on portability and high-performance central log collection. It was originally developed in C.

    Why is central logging so important? There are three major reasons:

    • Ease of use: you have only one location to check for your log messages instead of many.

    • Availability: logs are available even when the sender machine is unreachable.

    • Security: logs are often deleted or modified once a computer is breached. Logs collected on the central syslog-ng server, on the other hand, can be used to reconstruct how the machine was compromised.

    There are four major roles of syslog-ng: collecting, processing, filtering, and storing (or forwarding) log messages.

    The first role is collecting, where syslog-ng can collect system and application logs together. These two can provide useful contextual information for either side. Many platform-specific log sources are supported (for example, collecting system logs from /dev/log, the Systemd Journal or Sun Streams). As a central log collector, syslog-ng supports both the legacy/BSD (RFC 3164) and the new (RFC 5424) syslog protocols over UDP, TCP and encrypted connections. It can also collect logs or any kind of text data through files, sockets, pipes and even application output. The Python source serves as a Jolly Joker: you can implement an HTTP server (similar to Splunk HEC), fetch logs from Amazon CloudWatch, or implement a Kafka source, to mention only a few possibilities.

    The second role is processing, which covers many different possibilities. For example, syslog-ng can classify, normalize, and structure logs with built-in parsers. It can rewrite log messages (we aren’t talking about falsifying log messages here, but about anonymization as required by compliance regulations, for example). It can also enrich log messages using GeoIP, or create additional name-value pairs based on message content. You can use templates to reformat log messages as required by a specific destination (for example, you can use the JSON template function with Elasticsearch). Using the Python parser, you can do any of the above, and even filtering.

    The third role is filtering, which has two main uses. The first one is discarding surplus log messages, like debug level messages. The second one is message routing: making sure that a given set of logs reaches the right destination (for example, that authentication-related messages reach the SIEM). There are many possibilities, as message routing can be based on message parameters or content, using many different filter functions. Best of all: any of these can be combined using Boolean operators.

    The fourth role is storage. Traditionally, syslog-ng stored log messages to flat files, or forwarded them to a central syslog-ng server using one of the syslog protocols and stored them there to flat files. Over the years, an SQL destination, then different big-data destinations (Hadoop, Kafka, Elasticsearch), message queuing (like AMQP or STOMP), different logging as a service providers, and many other features were added. Nowadays you can also write your own destinations in Python or Java.

    Log messages

    If you take a look at your /var/log directory, where log messages are normally stored on a Linux/UNIX system, you will see that most log messages have the following format: date + hostname + text. For example, observe this ssh login message:

    Mar 11 13:37:56 linux-6965 sshd[4547]: Accepted keyboard-interactive/pam for root from port 46048 ssh2

    As you can see, the text part is an almost complete English sentence with some variable parts in it. It is pretty easy to read for a human. However, as each application produces different messages, it is quite difficult to create reports and alerts based on these messages.

    There is a solution for this problem: structured logging. Instead of free-form text messages, in this case events are described using name-value pairs. For example, an ssh login can be described with the following name-value pairs:

    app=sshd user=root source_ip=

    The good news is that syslog-ng was built around name-value pairs right from the beginning, as both advanced filtering and templates required syslog header data to be parsed and available as name-value pairs. Parsers in syslog-ng can turn unstructured, and even some structured data (CSV, JSON, etc.) into name-value pairs as well.


    Configuring syslog-ng

    Configuring syslog-ng is simple and logical, even if it does not look so at first sight. My initial advice: don’t panic! The syslog-ng configuration follows a pipeline model. There are many different building blocks (like sources, destinations, filters and others), and all of these can be connected into pipelines using log statements.

    By default, syslog-ng usually looks for its configuration in /etc/syslog-ng/syslog-ng.conf (configurable at compile time). Here you can find a very simple syslog-ng configuration showing you all the mandatory (and even some optional) building blocks:

    @include "scl.conf"
    # this is a comment :)
    options {flush_lines (0); keep_hostname (yes);};
    source s_sys { system(); internal();};
    destination d_mesg { file("/var/log/messages"); };
    filter f_default { level(info..emerg) and not (facility(mail)); };
    log { source(s_sys); filter(f_default); destination(d_mesg); };

    The configuration always starts with a version number declaration. It helps syslog-ng to figure out what your original intention with the configuration was and also warns you if there was an important change in syslog-ng internals.

    You can include other configuration files from the main syslog-ng configuration. The one included here is an important one: it includes the syslog-ng configuration library. It will be discussed later in depth. For now, it is enough to know that many syslog-ng features are actually defined there, including the Elasticsearch destination.

    You can place comments in your syslog-ng configuration, which helps structure the configuration and remind you about your decisions and workarounds when you need to modify the configuration later.

    The use of global options helps you make your configuration shorter and easier to maintain. Most settings here can be overridden later in the configuration. For example, flush_lines() defines how many messages are sent to a destination at the same time. A larger value adds latency but improves performance and lowers resource usage. Zero is a safe choice for most logs on a low-traffic server, as it writes all logs to disk as soon as they arrive. On the other hand, if you have a busy mail server on that host, you might want to override this value for the mail logs only. Then later, when your server becomes busy, you can easily raise the value for all of your logs.
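As an aside, overriding the global setting for a single destination is a one-liner; the destination name, file name and value below are only for illustration:

```
destination d_mail {
    file("/var/log/maillog" flush-lines(100));
};
```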

    The next three lines are the actual building blocks. Two of these are mandatory: the source and the destination (as you need to collect logs and store them somewhere). The filter is optional but useful and highly recommended.

    • A source is a named collection of source drivers. In this case, its name is s_sys, and it is using the system() and internal() sources. The first one collects from local, platform-specific log sources, while the second one collects messages generated by syslog-ng.

    • A destination is a named collection of destination drivers. In this case, its name is d_mesg, and it stores files into a flat file called /var/log/messages.

    • A filter is a named collection of filter functions. You can have a single filter function or a collection of filter functions connected using Boolean operators. Here we have a function for discarding debug level messages and another one for finding facility mail.

    There are a few more building blocks (parsers, rewrites and others) not shown here. They will be introduced later.

    Finally, there is a log statement connecting all these building blocks. Here you refer to the different building blocks by their names. Naturally, in a real configuration you will have several of these building blocks to refer to, not only one of each. Unless you are machine-generating a complex configuration, you do not have to count the number of items in your configuration carefully.

    SCL: syslog-ng configuration library

    The syslog-ng configuration library (SCL) contains a number of ready-to-use configuration snippets. From the user’s point of view, they are no different from any other syslog-ng drivers. For example, the new elasticsearch-http() destination driver also originates from here.

    Application Adapters are a set of parsers included in SCL that automatically try to parse any log messages arriving through the system() source. These parsers turn incoming log messages into a set of name-value pairs. The names for these name-value pairs, containing extra information, start with a dot to differentiate them from name-value pairs created by the user. For example, names for values parsed from sudo logs start with the .sudo. prefix.

    This also means that unless you really know what you are doing, you should include the syslog-ng configuration library from your syslog-ng.conf. If you do not do that, many of the documented features of syslog-ng will stop working for you.

    As you have already seen in the sample configuration, you can enable SCL with the following line:

    @include "scl.conf"


    Central log collection

    One of the most important features of syslog-ng is central log collection. You can use either the legacy or the new syslog protocol to collect logs centrally over the network. The machines sending the logs are called clients, while those on the receiving end are called servers. There is a lesser known, but at least equally, if not even more, important variant as well: the relay. On larger networks (or even smaller networks with multiple locations) relays are placed between clients and servers. This makes your logging infrastructure hierarchical, with one or more levels of relays.

    Why use relays? There are three major reasons:

    • you can collect UDP logs as close to the source as possible

    • you can distribute processing of log messages

    • you can secure your infrastructure: have a relay for each department or physical location, so logs can be sent from clients in real-time even if the central server is inaccessible
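A minimal relay illustrates the idea; the block names, host name and ports below are only examples. It collects legacy UDP syslog close to the clients and forwards everything to the central server over a reliable TCP connection using the new syslog protocol:

```
source s_net { network(transport("udp") port(514)); };
destination d_central { syslog("logserver.example.com" transport("tcp") port(601)); };
log { source(s_net); destination(d_central); };
```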

    Macros & templates

    As a syslog message arrives, syslog-ng automatically parses it. Most macros or name-value pairs are variables defined by syslog-ng based on the results of parsing. There are some macros that do not come from the parsing directly, for example the date and time a message was received (as opposed to the value stored in the message), or from enrichment, like GeoIP.

    By default, messages are parsed as legacy syslog, but by using flags you can change this to new syslog (flags(syslog-protocol)) or you can even disable parsing completely (flags(no-parse)). In the latter case the whole incoming message is stored into the MESSAGE macro.

    Name-value pairs or macros have many uses. One of these is in templates. By using templates you can change the format in which messages are stored (for example, use ISODATE instead of the traditional date format):

    template t_syslog {
        template("$ISODATE $HOST $MSG\n");
    };
    destination d_syslog {
        file("/var/log/syslog" template(t_syslog));
    };

    Another use is making file names variable. This way you can store logs coming from different hosts in different files, or implement log rotation by storing logs in directories and files based on the current year, month and day. An external script can then delete files that are older than what compliance or other requirements demand.

    destination d_messages {
        file("/var/log/$R_YEAR/$R_MONTH/$HOST_$R_DAY.log" create_dirs(yes));
    };

    Filters & if/else statements

    By using filters you can fine-tune which messages can reach a given destination. You can combine multiple filter functions using Boolean operators in a single filter, and you can use multiple filters in a log path. Filters are declared similarly to any other building blocks: you have to name them and then use one or more filter functions combined with Boolean operators inside the filter. Here is the relevant part of the example configuration from above:

    filter f_default { level(info..emerg) and not (facility(mail)); };

    The level() filter function lets all messages through except those at debug level. The second one selects all messages with facility mail. The two filter functions are connected with and not, so in the end all debug level messages and all facility mail messages are discarded by this filter.

    There are many more filters. The match() filter operates on the message content and there are many more that operate on different values parsed from the message headers. From the security point of view, the inlist() filter might be interesting. This filter can compare a field with a list of values (for example, it can compare IP addresses extracted from firewall logs with a list of malware command & control IP addresses).
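As a sketch, such a filter could look like the following; the list file path and the kv.DST name-value pair are made-up examples (the latter assumes a key=value parser ran earlier), and note that the driver is spelled in-list() in the configuration itself:

```
filter f_cnc {
    in-list("/etc/syslog-ng/conf.d/cnc_ips.list", value("kv.DST"));
};
```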

    Conditional expressions in the log path make using the results of filtering easier. What is now possible with simple if / else statements used to require complex configuration. You can use conditional expressions with similar blocks within the log path:

    if (filter()) { do this }; else { do that };

    It can be used, for example, to apply different parsers to different log messages or to save a subset of log messages to a separate destination.

    Below you can find a simplified example, showing the log statement only:

    log {
        source(s_sys);
        filter(f_sudo);    # a filter that keeps sudo logs only
        if (match("czanik" value(".sudo.SUBJECT"))) {
            destination { file("/var/log/sudo_filtered"); };
        };
        destination { file("/var/log/sudo_all"); };
    };

    The log statement in the example above collects logs from a source called s_sys. The next filter, referred to from the log path, keeps sudo logs only. Recent versions of syslog-ng automatically parse sudo messages. The if statement here uses the results of parsing, and writes any log message where the user name (stored in the .sudo.SUBJECT name-value pair) equals my user name to a separate file. Finally, all sudo logs are stored in a log file.


    Parsers

    Parsers of syslog-ng can structure, classify and normalize log messages. There are multiple advantages to parsing:

    • instead of the whole message, only the relevant parts are stored

    • more precise filtering (alerting)

    • more precise searches in (no)SQL databases

    By default, syslog-ng treats the message part of logs as a string, even if the message part contains structured data. You have to parse the message part in order to turn it into name-value pairs. The advantages listed above are only available once you have done so using the parsers of syslog-ng.

    One of the earliest parsers of syslog-ng is the PatternDB parser. This parser can extract useful information from unstructured log messages into name-value pairs. It can also add status fields based on the message text and classify messages (like LogCheck). The downside of PatternDB is that you need to know your log messages in advance and describe them in an XML database. It takes time and effort, and while some example log messages do exist, for your most important log messages you most likely need to create the XML yourself.

    For example, in case of an ssh login failure the name-value pairs created by PatternDB could be:

    • parsed directly from the message: app=sshd, user=root, source_ip=

    • added, based on the message content: action=login, status=failure

    • classified as “violation” in the end.

    JSON is becoming very popular recently, even for log messages. The JSON parser of syslog-ng can turn JSON logs into name-value pairs.

    The CSV parser can turn any kind of columnar log messages into name-value pairs. A popular example was the Apache web server access log.

    If you are into IT security, you will most likely use the key=value parser a lot, as iptables and most firewalls store their log messages in this format.
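As a sketch (the parser names, prefix and column names here are made up), declaring the key=value and CSV parsers takes a single line each:

```
parser p_fw { kv-parser(prefix("fw.")); };
parser p_access { csv-parser(columns("client_ip", "ident", "auth_user") delimiters(" ")); };
```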

    There are many more lesser known parsers in syslog-ng as well. You can parse XML logs, logs from the Linux Audit subsystem, and even custom date formats, by using templates.

    SCL contains many parsers that combine multiple parsers into a single one to parse more complex log messages. For example, there is a parser for Apache access logs that also parses the date from the logs, and another that can interpret most Cisco logs, which only resemble syslog messages.

    Enriching messages

    You can create additional name-value pairs based on the message content. PatternDB, already discussed among the parsers, can not only parse messages, but can also create name-value pairs based on the message content.

    The GeoIP parser can help you find the geo-location of IP addresses. The new geoip2() parser can show you more than just the country or longitude/latitude information: it can display the continent, the county, and even the city as well, in multiple languages. It can help you spot anomalies or display locations on a map.

    By using add-contextual-data(), you can enrich log messages from a CSV file. You can add, for example, host role or contact person information, based on the host name. This way you have to spend less time on finding extra information, and it can also help you create more accurate dashboards and alerts.

    parser p_kv { kv-parser(prefix("kv.")); };
    parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };
    source s_tcp { tcp(port(514)); };
    destination d_file {
      file("/var/log/fromnet" template("$(format-json --scope rfc5424
      --scope dot-nv-pairs --rekey .* --shift 1 --scope nv-pairs
      --exclude DATE --key ISODATE @timestamp=${ISODATE})\n\n") );
    };
    log { source(s_tcp); parser(p_kv); parser(p_geoip2); destination(d_file); };

    The configuration above collects log messages from a firewall using the legacy syslog protocol on a TCP port. The incoming logs are first parsed with a key=value parser (using a prefix to avoid colliding macro names). The geoip2() parser takes the source IP address as input (stored in kv.SRC) and stores location data under a different prefix. By default, logs written to disk do not include the extracted name-value pairs. This is why logs are written here to a file using the JSON template function, which writes all syslog-related macros and any extracted name-value pairs into the file. Leading dots are removed from the names, and the date is stored as Elasticsearch expects. The only difference is that there are two line feeds at the end, to make the file easier to read.

    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter I am available as @Pczanik.

    libinput's new thumb detection code

    Posted by Peter Hutterer on July 18, 2019 08:40 AM

    The average user has approximately one thumb per hand. That thumb comes in handy for a number of touchpad interactions. For example, moving the cursor with the index finger and clicking a button with the thumb. On so-called Clickpads we don't have separate buttons though. The touchpad itself acts as a button and software decides whether it's a left, right, or middle click by counting fingers and/or finger locations. Hence the need for thumb detection, because you may have two fingers on the touchpad (usually right click) but if those are the index and thumb, then really, it's just a single finger click.

    libinput has had some thumb detection since the early days when we were still hand-carving bits with stone tools. But it was quite simplistic, as the old documentation illustrates: two zones on the touchpad, a touch started in the lower zone was always a thumb. Where a touch started in the upper thumb area, a timeout and movement thresholds would decide whether it was a thumb. Internally, the thumb states were, Schrödinger-esque, "NO", "YES", and "MAYBE". On top of that, we also had speed-based thumb detection - where a finger was moving fast enough, a new touch would always default to being a thumb. On the grounds that you have no business dropping fingers in the middle of a fast interaction. Such a simplistic approach worked well enough for a bunch of use-cases but failed gloriously in other cases.

    Thanks to Matt Mayfield's work, we now have a much more sophisticated thumb detection algorithm. The speed detection is still there, but it better accounts for pinch gestures and two-finger scrolling. The exclusion zones are still there but are less final about the state of the touch: a thumb can escape that "jail" and contribute to pointer motion where necessary. The new documentation has a bit of a general overview. A requirement for well-working thumb detection, however, is that your device has the required (device-specific) thresholds set up. So head over to the debugging thumb thresholds documentation and start figuring out your device's thresholds.

    As usual, if you notice any issues with the new code please let us know, ideally before the 1.14 release.

    PHP version 7.2.21RC1 and 7.3.8RC1

    Posted by Remi Collet on July 18, 2019 08:25 AM

    Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests (x86_64 only), and also as base packages.

    RPM of PHP version 7.3.8RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30 or remi-php73-test repository for Fedora 28-29 and Enterprise Linux.

    RPM of PHP version 7.2.21RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or remi-php72-test repository for Enterprise Linux.


    PHP version 7.1 is now in security mode only, so no more RC will be released.

    Installation: read the Repository configuration and choose your version.

    Parallel installation of version 7.3 as Software Collection:

    yum --enablerepo=remi-test install php73

    Parallel installation of version 7.2 as Software Collection:

    yum --enablerepo=remi-test install php72

    Update of system version 7.3:

    yum --enablerepo=remi-php73,remi-php73-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module enable php:remi-7.3
    dnf --enablerepo=remi-modular-test update php\*

    Update of system version 7.2:

    yum --enablerepo=remi-php72,remi-php72-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module enable php:remi-7.2
    dnf --enablerepo=remi-modular-test update php\*

    Notice: version 7.3.8RC1 in Fedora rawhide for QA.

    EL-7 packages are built using RHEL-7.6.

    Packages of 7.4.0alpha3 are also available as Software Collections.

    The RC version is usually the same as the final version (no changes accepted after the RC, except for security fixes).

    Software Collections (php72, php73)

    Base packages (php)

    QElectroTech version 0.70

    Posted by Remi Collet on July 18, 2019 06:43 AM

    RPM of QElectroTech version 0.70, an application to design electric diagrams, are available in remi for Fedora and Enterprise Linux 7.

    A bit more than a year after the 0.60 release, the project has just released a new major version of their electric diagrams editor.

    Official web site : http://qelectrotech.org/ and version announcement.

    Installation by yum:

    yum --enablerepo=remi install qelectrotech

    RPM (version 0.70-1) are available for Fedora ≥ 28 and Enterprise Linux 7 (RHEL, CentOS, ...)

    Updates are also on the road to the official repositories.

    Notice: a Copr / QElectroTech repository also exists, which provides "development" versions (0.80-DEV for now).

    Compiling ARM stuff without an ARM board / Build PyTorch for the Raspberry Pi

    Posted by nmilosev on July 17, 2019 08:49 PM


    I am in the process of building a self-driving RC car. It’s a fun process full of discovery (I hate it already). Once it is finished I hope to write a longer article here about what I learned so stay tuned!

    While the electronics stuff was difficult for me (fingers still burnt from soldering), I hoped that the computer vision stuff would be easier. Right? Right? Well, no.

    Neural network inference on small devices

    To be clear, I didn't expect to train my CNN on the Raspberry Pi that I have (it's a revision 2, with an added USB WiFi dongle and a USB webcam), but I wanted to do some inference on a model that I can train on my other computers.

    I love using PyTorch and I use it for all my projects/work/research. Simply put, it's fantastic software.

    Problem #1 - PyTorch doesn't have official ARMv7 or ARMv8 builds.

    While you can get PyTorch if you have NVIDIA Jetson hardware, there are no builds for other generic boards. Insert sad emoji.

    Problem #2 - ONNX, no real options

    I had the idea to export my trained model to ONNX (Open Neural Network eXchange format), but then what?

    There are two projects:

    1. Microsoft’s ONNX Runtime - doesn’t support RPi2
    2. Snips Tract - Seems super-cool but Rust underneath (nothing against Rust, just not familiar)

    So the only solution was: Build PyTorch from source.

    "When your build takes two days you have time to think about life" - Anonymous programmer 2019.

    The PyTorch build process is fantastically simple. You get the code and run a single command. It's robust and I have used it many times before. So I jumped right in; it can't take that long, yeah? Nope.

    On my Raspberry Pi 2, with a decent SD card (Kingston UHS1 16GB), the build took a bit over 36 hours. Yes, you read that correctly. Not 3.6 hours. Thirty-six hours. During those 36 hours I had a lot of down time, so I wondered how to do it quicker.

    Option 1 - Cross compilation

    Cross compilation (or witchcraft, in software development circles) is a process where you build software for one architecture on another architecture. Here I wanted to build for ARM on a standard x86_64 machine. From my (albeit small) experience, cross compilation is complicated and difficult. Even though it was my first thought and I wanted to try it out, I then discovered in the PyTorch GitHub issues that it is not supported for the project.

    Option 2 - What about emulation?

    This seems reasonable. You emulate a generic ARM or ARMv8 board and build on it. QEMU/libvirt can emulate ARM just fine, and there are clear instructions on how to achieve it. For example, the Fedora Wiki (I am using Fedora 30 both on the RPi and on my build machine) has a short guide on how to do it. Here is the link.

    I tried this, and to be fair it worked fine. But it was slow. Almost unusably slow.

    Option 3 - Witchcraft, sort of

    Remember cross compilation? I ran into an article which explains this weird setup for building ARM software. It is amazing. Basically, there is a qemu-user package that allows you to chroot into a rootfs of a different architecture with very little performance loss (!!!). Pair this with DNF's ability to make a rootfs of any architecture, and you get something immensely powerful. Not just for building Python packages, but for building anything for ARM or ARMv8 (aarch64, as DNF calls it).

    But then I read the last line. This was just a proposal.

    So I went down the rabbit hole and followed the bug reports. All of them seemed closed. Could this feature work already? The answer was: YES!

    Building PyTorch for the Raspberry Pi boards

    Once I discovered qemu-user chroot thingy, everything clicked.

    So here we go, this is how to do it.

    We need qemu and qemu-user packages. Virt manager is optional but nice to have.

    sudo dnf install qemu-system-arm qemu-user-static virt-manager

    We now need the rootfs, which is a one-liner:

    sudo dnf install --releasever=30 --installroot=/tmp/F30ARM \
        --forcearch=armv7hl --repo=fedora --repo=updates \
        systemd passwd dnf fedora-release vim-minimal \
        openblas-devel blas-devel m4 cmake \
        python3-Cython python3-devel python3-yaml python3-setuptools \
        python3-numpy python3-cffi python3-wheel \
        gcc-c++ tar gcc git make tmux -y

    This will install an ARM rootfs to your /tmp directory along with everything you need to build PyTorch. Yes, it is that easy.

    Let's chroot:

    sudo chroot /tmp/F30ARM

    Welcome to your “ARM board”, verify your kernel arch:

    bash-5.0# uname -a
    Linux toshiba-x70-a 5.1.12-300.fc30.x86_64 #1 SMP Wed Jun 19 15:19:49 UTC 2019 armv7l armv7l armv7l GNU/Linux

    So cool, isn't it? Some things are broken, but they are easy to fix: mainly, the network is down and DNF wrongly detects your arch.

    # Fix for 1691430
    sed -i "s/'armv7hnl', 'armv8hl'/'armv7hnl', 'armv7hcnl', 'armv8hl'/" /usr/lib/python3.7/site-packages/dnf/rpm/__init__.py
    alias dnf='dnf --releasever=30 --forcearch=armv7hl --repo=fedora --repo=updates'
    # Fixes for default python and network
    alias python=python3
    echo 'nameserver 1.1.1.1' > /etc/resolv.conf # any reachable DNS resolver address will do

    Your configuration is now complete and you have a working emulated ARM board.

    Get PyTorch source:

    git clone https://github.com/pytorch/pytorch --recursive
    cd pytorch
    git checkout v1.1.0 # optional, you can build master if you are brave
    git submodule update --init --recursive

    Since we are building for a Raspberry Pi, we want to disable CUDA, MKL, etc.:

    export NO_CUDA=1
    export NO_DISTRIBUTED=1
    export NO_MKLDNN=1 
    export NO_NNPACK=1 # update July 19, this is optional, can build with NNPACK
    export NO_QNNPACK=1 # same as above, can be omitted 

    All ready, build!

    python setup.py bdist_wheel

    Performance

    The RPi2 took 36+ hours. This? Under two. My laptop isn’t that new (i7 4700MQ) and I guess you can do it even faster with a faster CPU.

    Conclusion

    Building for ARM shouldn’t be done on a board. There are probably some exceptions to the rule, but you really should consider the way explained here. It’s faster, reproducible and easy. Fedora works remarkably well for this (as for all other things, hehe) both on the device and on the build system.

    Let me know how it goes for you.

    Oh, and if you just stumbled on this page on Google wanting a wheel/.whl of PyTorch for your RPi2, here you go. To build for RPi3 and ARMv8 just replace every armv7hl in this post with aarch64 and you should be fine. :)

    Image credit: https://xkcd.com/303/

    Move a Linux running process to a screen shell session

    Posted by Hernan Vivani on July 17, 2019 06:39 PM

    Use case:

    • You just started a process (e.g. a compile, a copy, etc.).
    • You noticed it will take much longer than expected to finish.
    • You cannot abort it, nor risk having it aborted when the current shell session ends.
    • It would be ideal to have this process in ‘screen’, running in the background.

    We can move it to a screen session with the following steps:

    1. Suspend the process
      1. press Ctrl+Z
    2. Resume the process in the background
      1. bg
    3. Disown the process
      1. disown %1
    4. Launch a screen session
      1. screen
    5. Find the PID of the process
      1. pgrep myappname
    6. Use reptyr to take over the process
      1. reptyr 1234
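    Put together, the whole sequence looks like this in practice (myappname and the PID 1234 are placeholders for your actual process):

    ```
    # In the original shell, with the process in the foreground:
    #   press Ctrl+Z        (suspends the process)
    bg                      # resume it in the background
    disown %1               # detach it from this shell's job table

    # Then, from inside a new screen session:
    screen
    pgrep myappname         # prints the PID, e.g. 1234
    reptyr 1234             # re-attach the process to the current terminal
    ```

    From here on, the process survives the original shell closing, and you can detach and re-attach the screen session as usual.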


    Note: at the time of writing, reptyr is not available in any Fedora/Red Hat repo. We'll need to compile it:

    $ git clone https://github.com/nelhage/reptyr.git
    $ cd reptyr/
    $ make
    $ sudo make install





    Bond WiFi and Ethernet for easier networking mobility

    Posted by Fedora Magazine on July 17, 2019 08:00 AM

    Sometimes one network interface isn’t enough. Network bonding allows multiple network connections to act together with a single logical interface. You might do this because you want more bandwidth than a single connection can handle. Or maybe you want to switch back and forth between your wired and wireless networks without losing your network connection.

    The latter applies to me. One of the benefits of working from home is that when the weather is nice, it's enjoyable to work from a sunny deck instead of inside. But every time I did that, I lost my network connections. IRC, SSH, VPN — everything goes away, at least for a moment while some clients reconnect. This article describes how I set up network bonding on my Fedora 30 laptop to seamlessly move from the wired connection on my laptop dock to a WiFi connection.

    In Linux, interface bonding is handled by the bonding kernel module. Fedora does not ship with this enabled by default, but it is included in the kernel-core package. This means that enabling interface bonding is only a command away:

    sudo modprobe bonding

    Note that this will only have effect until you reboot. To permanently enable interface bonding, create a file called bonding.conf in the /etc/modules-load.d directory that contains only the word “bonding”.
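    For example, you can create that file with a single command (tee is used so the redirection happens with root privileges):

    ```
    echo bonding | sudo tee /etc/modules-load.d/bonding.conf
    ```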

    Now that you have bonding enabled, it’s time to create the bonded interface. First, you must get the names of the interfaces you want to bond. To list the available interfaces, run:

    sudo nmcli device status

    You will see output that looks like this:

    DEVICE          TYPE      STATE         CONNECTION
    enp12s0u1       ethernet  connected     Wired connection 1
    tun0            tun       connected     tun0
    virbr0          bridge    connected     virbr0
    wlp2s0          wifi      disconnected  --
    p2p-dev-wlp2s0  wifi-p2p  disconnected  --
    enp0s31f6       ethernet  unavailable   --
    lo              loopback  unmanaged     --
    virbr0-nic      tun       unmanaged     --

    In this case, there are two (wired) Ethernet interfaces available. enp12s0u1 is on a laptop docking station, and you can tell that it’s connected from the STATE column. The other, enp0s31f6, is the built-in port in the laptop. There is also a WiFi connection called wlp2s0. enp12s0u1 and wlp2s0 are the two interfaces we’re interested in here. (Note that it’s not necessary for this exercise to understand how network devices are named, but if you’re interested you can see the systemd.net-naming-scheme man page.)

    The first step is to create the bonded interface:

    sudo nmcli connection add type bond ifname bond0 con-name bond0

    In this example, the bonded interface is named bond0. The "con-name bond0" sets the connection name to bond0; leaving this off would result in a connection named bond-bond0. You can also set the connection name to something more human-friendly, like "Docking station bond" or "Ben".

    The next step is to add the interfaces to the bonded interface:

    sudo nmcli connection add type ethernet ifname enp12s0u1 master bond0 con-name bond-ethernet
    sudo nmcli connection add type wifi ifname wlp2s0 master bond0 ssid Cotton con-name bond-wifi

    As above, the connection name is specified to be more descriptive. Be sure to replace enp12s0u1 and wlp2s0 with the appropriate interface names on your system. For the WiFi interface, use your own network name (SSID) where I use "Cotton". If your WiFi connection has a password (and of course it does!), you'll need to add that to the configuration, too. The following assumes you're using WPA2-PSK authentication:

    sudo nmcli connection modify bond-wifi wifi-sec.key-mgmt wpa-psk
    sudo nmcli connection edit bond-wifi

    The second command will bring you into the interactive editor, where you can enter your password without it being logged in your shell history. Enter the following, replacing password with your actual password:

    set wifi-sec.psk password

    Now you're ready to start your bonded interface and the secondary interfaces you created:

    sudo nmcli connection up bond0
    sudo nmcli connection up bond-ethernet
    sudo nmcli connection up bond-wifi

    You should now be able to disconnect your wired or wireless connections without losing your network connections.

    A caveat: using other WiFi networks

    This configuration works well when moving around on the specified WiFi network, but when away from this network, the SSID used in the bond is not available. Theoretically, one could add an interface to the bond for every WiFi connection used, but that doesn’t seem reasonable. Instead, you can disable the bonded interface:

    sudo nmcli connection down bond0

    When back on the defined WiFi network, simply start the bonded interface as above.

    Fine-tuning your bond

    By default, the bonded interface uses the “load balancing (round-robin)” mode. This spreads the load equally across the interfaces. But if you have a wired and a wireless connection, you may want to prefer the wired connection. The “active-backup” mode enables this. You can specify the mode and primary interface when you are creating the interface, or afterward using this command (the bonded interface should be down):

    sudo nmcli connection modify bond0 +bond.options "mode=active-backup,primary=enp12s0u1"
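    Once the bond is up, the kernel exposes its state under /proc, which is a quick way to confirm that the mode took effect and which interface is currently active (bond0 matches the interface created above):

    ```
    cat /proc/net/bonding/bond0
    # look for the "Bonding Mode:" and "Currently Active Slave:" lines
    ```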

    The kernel documentation has much more information about bonding options.

    NEURON in NeuroFedora needs testing

    Posted by The NeuroFedora Blog on July 17, 2019 07:52 AM

    We have been working on including the NEURON simulator in NeuroFedora for a while now. The build process that NEURON uses has certain peculiarities that make it a little harder to build.

    For those who are interested in the technical details: while the main NEURON core is built using the standard ./configure; make; make install process that cleanly differentiates the "build" and "install" phases, the Python bits are built as a "post-install hook". That is to say, they are built after the other bits in the "install" step instead of the "build" step. This implies that the build is not quite straightforward and must be slightly tweaked to ensure that the Fedora packaging guidelines are met.

    After discussing things on this Github issue, the developers, @nrnhines (Michael Hines) and @ramcdougal (Robert A McDougal) helped me understand the complexities of the build process and get it done. They have also mentioned that NEURON is now moving to a CMake based build system and should be simpler to work with in the future. CMake is generally nicer for projects that include different languages and build systems.

    After a few hours of work, NEURON is now ready to use in NeuroFedora. It is built with Python 3, and does not currently provide IV and MPI bits. These will be worked upon later. Since MUSIC is not yet in NeuroFedora, NEURON does not support MUSIC either currently. This is also a work in progress.

    I have tested the NEURON build on my machine with a few example simulations and it works well, but this cannot be considered exhaustive testing of the package. If you have a Fedora system, please test NEURON and let us know if you notice any issues. Here's how.

    Step 1: Set up a Fedora installation

    NeuroFedora is based on Fedora, so the simplest way to use it is to install a Fedora Workstation using the live images available on https://getfedora.org. You can either install it on a system or use a virtual machine if you wish. NeuroFedora includes lots of other software for neuroscience also. You can learn more in our documentation. Fedora, in general, provides lots of other software too. You can search them using the Fedora Packages web application.

    Step 2: Install NEURON

    I would recommend updating your system before proceeding using dnf in a terminal:

    sudo dnf update --refresh

    Then, you can install NEURON. It is currently in the testing repository, so it will need to be enabled for the command:

    sudo dnf --enablerepo=updates-testing install neuron python3-neuron

    Step 3: Test it out

    Test it out with your models. Hopefully, everything will work fine. The NEURON documentation is here for those that would like to tinker with it too: https://www.neuron.yale.edu/neuron/docs

    Step 4: Give feedback


    Bodhi, the Fedora Quality Assurance web application.

    This step is optional, especially if everything works fine. If you experience any issues, please do get in touch with us. You can either contact us directly using one of our communication channels, or you can give karma to the update on Bodhi. The latter is preferred.

    Bodhi is the system Fedora uses for pushing updates to users. In a nutshell:

    • a new version of software is released
    • the Fedora maintainer updates the Fedora package.
    • the maintainer submits the new Fedora package to Bodhi
    • the package remains in the updates-testing repositories while users test it out and provide feedback.
    • if the update receives positive feedback (positive karma), it is automatically pushed to the updates repository for all users to receive the new version.
    • if the update receives negative feedback, the new version is not sent out to users and the maintainer must fix the reported issues and submit a new version of the package for testing again.

    This workflow applies to all Fedora packages, thus ensuring that there's plenty of time for issues to be flagged before the software reaches users. So, if you have a few minutes to spare, please help us by testing these packages out. The updates for Fedora 29 and Fedora 30 are both here: https://bodhi.fedoraproject.org/updates/?packages=neuron

    Please note that this requires a Fedora account, since that's the account system that links all Fedora community infrastructure. This Fedora Magazine post provides an excellent resource on setting up a Fedora account: https://fedoramagazine.org/getting-set-up-with-fedora-project-services/

    Detailed information on testing updates in Fedora can be found here on the Quality Assurance (QA) team's documentation: https://fedoraproject.org/wiki/QA:Updates_Testing

    NeuroFedora is a volunteer-driven initiative, and contributions in any form are always welcome. You can get in touch with us here. We are happy to help you learn the skills needed to contribute to the project. In fact, that is one of the major goals of the initiative: to spread the technical knowledge that is necessary to develop software for neuroscience.

    Google, Money and Censorship in Free Software communities

    Posted by Daniel Pocock on July 16, 2019 10:05 PM

    On 30 June 2019, I sent the email below to the debian-project mailing list.

    It never appeared.

    Alexander Wirt (formorer) has tried to justify censoring the mailing list in various ways. Wirt has multiple roles, as both Debian mailing list admin and also one of Debian's GSoC administrators and mentors. Google money pays for interns to do work for him. It appears he has a massive conflict of interest when using the former role to censor posts about Google, which relates to the latter role and its benefits.

    Wirt has also made public threats to censor other discussions, for example, the DebConf Israel debate. In that case he has wrongly accused people of antisemitism, leaving people afraid to speak up again. The challenges of holding a successful event in that particular region require a far more mature approach, not a monoculture.

    Why are these donations and conflicts of interest hidden from the free software community who rely on, interact with and contribute to Debian in so many ways? Why doesn't Debian provide a level playing field, why does money from Google get this veil of secrecy?

    Is it just coincidence that a number of Google employees who spoke up about harassment are forced to resign and simultaneously, Debian Developers who spoke up about abusive leadership are obstructed from competing in elections? Are these symptoms of corporate influence?

    Is it coincidence that the three free software communities censoring my recent blog about human rights from their Planet sites (FSFE, Debian and Mozilla, evidence of censorship) are also the communities where Google money is a disproportionate part of the budget?

    Could the reason for secrecy about certain types of donation be motivated by the knowledge that unpleasant parts of the donor's culture also come along for the ride?

    The email the cabal didn't want you to see

    Subject: Re: Realizing Good Ideas with Debian Money
    Date: Sun, 30 Jun 2019 23:24:06 +0200
    From: Daniel Pocock <daniel@pocock.pro>
    To: debian-project@lists.debian.org, debian-devel@lists.debian.org
    On 29/05/2019 13:49, Sam Hartman wrote:
    > [moving a discussion from -devel to -project where it belongs]
    >>>>>> "Mo" == Mo Zhou <lumin@debian.org> writes:
    >     Mo> Hi,
    >     Mo> On 2019-05-29 08:38, Raphael Hertzog wrote:
    >     >> Use the $300,000 on our bank accounts?
    > So, there were two $300k donations in the last year.
    > One of these was earmarked for a DSA equipment upgrade.
    When you write that it was earmarked for a DSA equipment upgrade, do you
    mean that was a condition imposed by the donor or it was the intention
    of those on the Debian side of the transaction?  I don't see an issue
    either way but the comment is ambiguous as it stands.
    Debian announced[1] a $300k donation from Handshake foundation.
    I couldn't find any public disclosure about other large donations and
    the source of the other $300k.
    In Bits from the DPL (December 2018), former Debian Project Leader (DPL)
    Chris Lamb opaquely refers[2] to a discussion with Cat Allman about a
    "significant donation".  Although there is a link to Google later in
    Lamb's email, Lamb fails to disclose the following facts:
    - Cat Allman is a Google employee (some people would already know that,
    others wouldn't)
    - the size of the donation
    - any conditions attached to the donation
    - private emails from Chris Lamb indicated he felt some pressure,
    influence or threat from Google shortly before accepting their money
    The Debian Social Contract[3] states that Debian does not hide our
    problems.  Corporate influence is one of the most serious problems most
    people can imagine, why has nothing been disclosed?
    Therefore, please tell us,
    1. who did the other $300k come from?
    2. if it was not Google, then what is the significant donation from Cat
    Allman / Google referred[2] to in Bits from the DPL (December 2018)?
    3. if it was from Google, why was that hidden?
    4. please disclose all conditions, pressure and influence relating to
    any of these donations and any other payments received
    1. https://www.debian.org/News/2019/20190329
    2. https://lists.debian.org/debian-devel-announce/2018/12/msg00006.html
    3. https://www.debian.org/social_contract

    Censorship on the Google Summer of Code Mentor's mailing list

    Google also operates a mailing list for mentors in Google Summer of Code. It looks a lot like any other free software community mailing list except for one thing: censorship.

    Look through the "Received" headers of messages on the mailing list and you can find examples of messages that were delayed for some hours waiting for approval. It is not clear how many messages were silently censored, never appearing at all.

    Recent attempts to discuss the issue on Google's own mailing list produced an unsurprising result: more censorship.

    However, a number of people have since contacted me personally about their negative experiences with Google Summer of Code. I'm publishing below the message that Google didn't want you to see.

    Subject: [GSoC Mentors] discussions about GSoC interns/students medical status
    Date: Sat, 6 Jul 2019 10:56:31 +0200
    From: Daniel Pocock <daniel@pocock.pro>
    To: Google Summer of Code Mentors List <google-summer-of-code-mentors-list@googlegroups.com>
    Hi all,
    Just a few months ago, I wrote a blog lamenting the way some mentors
    have disclosed details of their interns' medical situations on mailing
    lists like this one.  I asked[1] the question: "Regardless of what
    support the student received, would Google allow their own employees'
    medical histories to be debated by 1,000 random strangers like this?"
    Yet it has happened again.  If only my blog hadn't been censored.
    If our interns have trusted us with this sensitive information,
    especially when it concerns something that may lead to discrimination or
    embarrassment, like mental health, then it highlights the enormous trust
    and respect they have for us.
    Many of us are great at what we do as engineers, in many cases we are
    the experts on our subject area in the free software community.  But we
    are not doctors.
    If an intern goes to work at Google's nearby office in Zurich, then they
    are automatically protected by income protection insurance (UVG, KTG and
    BVG, available from all major Swiss insurers).  If the intern sends a
    doctor's note to the line manager, the manager doesn't have to spend one
    second contemplating its legitimacy.  They certainly don't put details
    on a public email list.  They simply forward it to HR and the insurance
    company steps in to cover the intern's salary.
    The cost?  Approximately 1.5% of the payroll.
    Listening to what is said in these discussions, many mentors are
    obviously uncomfortable with the fact that "failing" an intern means
    they will not even be paid for hours worked prior to a genuine accident
    or illness.  For 1.5% of the program budget, why doesn't Google simply
    take that burden off the mentors and give the interns peace of mind?
    On numerous occasions Stephanie Taylor has tried to gloss over this
    injustice with her rhetoric about how we have to punish people to make
    them try harder next year.  Many of our interns are from developing
    countries where they already suffer injustice and discrimination.  You
    would have to be pretty heartless to leave these people without pay.
    Could that be why Googlespeak clings to words like "fail" and "student"
    instead of "not pay" and "employee"?
    Many students from disadvantaged backgrounds, including women, have told
    me they don't apply at all because of the uncertainty about doing work
    that might never be paid.  This is an even bigger tragedy than the time
    mentors lose on these situations.
    Former Debian GSoC administrator

    Fedora job opening: Fedora Community Action and Impact Coordinator (FCAIC)

    Posted by Brian "bex" Exelbierd on July 16, 2019 05:35 PM

    I’ve decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC).  This was not an easy decision to make. I am proud of the work I have done in Fedora over the last three years and I think I have helped the community move past many challenges.  I could NEVER have done all of this without the support and assistance of the community!

    As some of you know, I have been covering some other roles in Red Hat for almost the last year. Some of these tasks have led to opportunities to take my career in a different direction. I am going to remain at Red Hat and on the same team with the same manager, but with a slightly expanded scope of duties. I will no longer be day-to-day on Fedora and will instead be in a consultative role as a Community Architect at Large. This is a fancy way of saying that I will be helping lots of projects with various issues while also working on some specific strategic objectives.

    Read more over at the Fedora Magazine where this was originally posted.

    Application service categories and community handoff

    Posted by Fedora Community Blog on July 16, 2019 06:34 AM

    The Community Platform Engineering (CPE) team recently wrote about our face-to-face meeting where we developed a team mission statement and a framework for making our workload more manageable. Having more focus will allow us to progress higher-priority work for multiple stakeholders and move the needle on more initiatives more efficiently than we are working right now.

    During the F2F we walked through the process of how to gracefully remove ourselves from applications that are not fitting our mission statement. The next couple of months will be a transition phase as we want to ensure continuity and cause minimum disruption to the community. To assist in that strategy, we analysed our applications and came up with four classifications to which they could belong.

    Application service categories

    1. We maintain it, we run it

    This refers to apps that are within our mission and that we need to both actively maintain and host. CPE will be responsible for all development and lifecycle work on those apps, but we do welcome contributors. This is business as usual for CPE and it has a predictable cost associated with it from a planning and maintenance perspective.

    2. We don’t maintain it, we run it

    This represents apps that are in our infrastructure but whose overall maintenance we are not responsible for. We provide power and ping at a server level and will attempt to restart apps that have encountered an issue. We are happy to host them, but their maintenance, which includes development of new features and bug fixes, is no longer in our day-to-day remit. This represents light work for us, as actual application ownership resides outside of CPE, with our responsibility exclusively on lifecycle management of the app.

    3. We don’t maintain it, we don’t run it

    This represents an application that we need to move into a mode whereby somebody other than CPE needs to own it. This represents some work on CPE’s side to ensure continuity of service and to ensure that an owner is found for it. Our Community OpenShift instance will be offered to host services here. Apps that fall into this category have mostly been in maintenance mode on the CPE side, but they keep “head space”. So we want them to live and evolve exclusively outside of CPE on a hosting environment that we can provide as a service. Here, we will provide the means to host an application and will fully support the Community PaaS, but any app maintenance or lifecycle events will be in the hands of the people running the app, not the CPE team.

    These are apps for which we are a main contributor and which drain time and effort. In turn, this causes us difficulty in planning wider initiatives because of the unpredictable nature of the requests. Category 3 apps are where our ongoing work in this area is more historical than strategic.

    Winding down apps

    Category 3 apps ultimately do not fit within CPE’s mission statement and our intention here is to have a maintenance wind-down period. That period will be decided on an app-by-app basis, with a typical wind-down period being on the order of 1-6 months. For exceptional circumstances we may extend this to a maximum of 12 months. That time frame will be decided in consultation with the next maintainer and the community at large to allow for continuity to occur. For apps that find themselves here, we are providing a community service in the form of Community OpenShift (“Communishift”) that could become a home for those apps. However, the CPE team won’t maintain a Service Level Expectation (SLE) for availability or fixes. Our SLE is a best effort to keep services and hardware on the air during working hours while being creative with staff schedules and time zones. We have this documented here for further information. Ideally they would have owners outside the team to whom requests could be referred, but they would not be a CPE responsibility.

    We are working on formalising the process of winding down an application by creating a Standard Operating Procedure (SOP). At a high level, that should include a project plan derived from consultation with the community. That may mean work for the CPE team to get the application to a maintainable state. That work could be on documentation, training, development of critical fixes/features, or help with porting it to another location. Ultimately, the time spent on that kind of work is a fraction of the longer-term maintenance cost. Our intention is to run all of the apps through the Fedora Council first, in case the Council prefers any specific alternatives to a particular service or app.

    4. We turn it off

    This represents applications that are no longer used or have been superseded by another application. This may also represent applications that were not picked up by the other members of the community. Turning off does not equate to a hard removal of the app and if an owner can be found or a case made as to why CPE should own it, we can revisit it.

    Initial app analysis

    To help us identify that path, at our F2F we evaluated a first round of apps.

    Category 1

    For completeness we are highlighting one example of a Category 1 application that we will always aim to maintain and keep on the air. Bodhi is one such example, as it is one of the core services used to build and deliver Fedora and was developed specifically around the needs of the Fedora project. This makes it one of a kind: there is no application out there that could be leveraged to replace it, and any attempt to replace it with something else would have repercussions on the entire build process and likely the entire community.

    Category 2

    Wiki x 2 (This may be a Category 3 after further analysis) — CPE maintains two wiki instances, one for Fedora and one for CentOS. Both of them are used by the communities in ways that make them currently impossible to remove. In Fedora’s case the wiki is also used by QA (Quality Assurance), making it an integral part of the Fedora release process and thus not something that can be handed to the community to maintain.

    Category 3

    Overall, the trend for these tools will be to move them to a steady state of no more fixes/enhancements. The community will be welcome to maintain them, and/or to locate a replacement service that satisfies their requirements. Replacements can be considered by the Council for funding as appropriate.

    Mailman/Hyperkitty/Postorius — Maintaining this stack has cost the equivalent of an entire developer’s time long-term. However, we recognize the imperative that projects have mailing lists for discussion and collaboration. No further features will be added here, and based on community needs an outside mailing list service can be contracted.

    Elections — This application has been in maintenance mode for some time now. We recently invested some time in it to port it to python3, use a newer authentication protocol (OpenID Connect) and move it to OpenShift, while integrating Ben Cotton’s work to add support for badges to elections. We believe Elections is in a technical state that is compatible with a low-maintenance model for a community member who would like to take it over. As a matter of fact, we have already found said community member in the person of Ben Cotton (thank you Ben!).

    Fedocal — This application has been in maintenance mode for some time. It has been ported to python3 (but hasn’t had a release with python3 support yet). There is work in progress to port it to OpenID Connect and have it run in OpenShift. It still needs to be ported to fedora-messaging.

    Nuancier — This application has been in maintenance mode as well. It has been ported to python3 but needs to be ported to OpenID Connect, fedora-messaging and moved to OpenShift.

    Badges — This application has been in maintenance mode for a while now. The work to port it to python3 has been started, but it still needs to be ported to OpenID Connect and fedora-messaging, and moved to OpenShift. We invested some time recently to identify the highest pain points of the application, log them in the issue tracker as user stories and start prioritizing them. We, however, cannot commit to fixing them.

    For Fedocal and Nuancier, we are thinking of holding virtual hackfests on Fridays for as long as there is work to do on them, and advertising this to the community to try to spark interest in these applications, in the hope that we find someone interested enough (and, after these hackfests, knowledgeable enough) to take over their maintenance.

    Category 4

    Pastebin — fpaste.org is a well-known and well-used service in Fedora; however, it has been a pain point for the CPE team for a few years. The pastebin packages that exist are most often unmaintained, and finding and replacing one is full-time work for a few weeks. Finally, this type of service also comes with high legal costs, as we are often asked to remove content from it, despite the limited time this content remains available. CentOS is also running a pastebin service, but it has the same long-term costs, and a similar conversation will need to occur there.

    Apps.fp.o — This is the landing page available at https://apps.fedoraproject.org/. Its content has not been kept up to date and it needs an overall redesign. We may be open to giving it up to a community member, but we do not believe that the gain is worth the time investment in finding that person.

    Ipsilon — Ipsilon is our identity provider. It supports multiple authentication protocols (OpenID 2.0, OpenID Connect, SAML 2.0, …) and multiple backends (FAS, LDAP/FreeIPA, htpasswd, system accounts…). While it was originally shipped as a tech preview in RHEL, it no longer is, and the team working on this application has been refocused on other projects. We would like to move all our applications to use OpenID Connect or SAML 2.0 (instead of OpenID 2.0 with (custom) extensions) and replace FAS with an IPA-based solution, which in turn allows us to replace Ipsilon with a more maintained solution, likely Red Hat Single Sign-On. The dependencies make this a long-term effort. We will need to announce to the community that this means we will shut down the public OpenID 2.0 endpoints, which means that any community services using this protocol need to move to OpenID Connect as well.

    Over the coming weeks we will set up our process to begin the formal wind-down of the items listed above that are in a Category 3 or 4 state, and will share that process and plan with the Fedora Council.

    The post Application service categories and community handoff appeared first on Fedora Community Blog.

    Episode 154 - Chat with the authors of the book "The Fifth Domain"

    Posted by Open Source Security Podcast on July 16, 2019 12:10 AM
    Josh and Kurt talk to the authors of a new book The Fifth Domain. Dick Clarke and Rob Knake join us to discuss the book, cybersecurity, US policy, how we got where we are today and what the future holds for cybersecurity.

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/10497236/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes

      Results of the Fedora elections 06/19

      Posted by Charles-Antoine Couret on July 15, 2019 06:13 PM

      As I reported a short while ago, Fedora held elections to partially renew the membership of its FESCo, Mindshare and Council bodies.

      As always, the ballot is a range vote. Each candidate can be given a number of points, with a maximum equal to the number of candidates and a minimum of 0. This makes it possible to express approval of one candidate and disapproval of another without ambiguity. Nothing prevents giving two candidates the same value.

      The results for the Council (which had a single candidate) are:

        # votes |  name
        176       Till Maas (till)

      For reference, the maximum possible score was 184 * 1 votes, i.e. 184.

      The results for FESCo (only the first four are elected) are:

        # votes |  name
        695       Stephen Gallagher (sgallagh)
        687       Igor Gnatenko (ignatenkobrain)
        615       Aleksandra Fedorova (bookwar)
        569       Petr Šabata (psabata)
        525       Jeremy Cline
        444       Fabio Valentini (decathorpe)

      For reference, the maximum possible score was 205 * 6, i.e. 1230.

      The results for Mindshare (only the first is elected) are:

        # votes |  name
        221       Sumantro Mukherjee (sumantrom)
        172       Luis Bazan (lbazan)

      For reference, the maximum possible score was 178 * 2, i.e. 356.

      We can note that overall the number of voters for each ballot was similar, around 175-200 voters, which is slightly fewer than last time (200-250 on average). The scores are also rather spread out.

      Congratulations to the participants and the elected, and all the best to the Fedora Project.

      Duplicity 0.8.01

      Posted by Gwyn Ciesla on July 15, 2019 03:10 PM

      Duplicity 0.8.01 is now in rawhide. The big change here is that it now uses Python 3. I’ve tested it in my own environment, both on its own and with deja-dup, and both work.

      Please test and file bugs. I expect there will be more, but with Python 2 reaching EOL soon, it’s important to move everything we can to Python 3.



      CPU atomics and orderings explained

      Posted by William Brown on July 15, 2019 02:00 PM

      CPU atomics and orderings explained

      Sometimes the question comes up about how CPU memory orderings work, and what they do. I hope this post explains it in a really accessible way.

      Short Version - I wanna code!

      Summary - The memory model you commonly see is from C++ and it defines:

      • Relaxed
      • Acquire
      • Release
      • Acquire/Release (sometimes AcqRel)
      • SeqCst

      These are memory orderings - every operation is “atomic”, so it will work correctly, but these rules define how the memory and code around the atomic are influenced.

      If in doubt - use SeqCst - it’s the strongest guarantee and prevents all re-ordering of operations and will do the right thing.

      The summary is:

      • Relaxed - no ordering guarantees, just execute the atomic as is.
      • Acquire - all code after this atomic will be executed after the atomic.
      • Release - all code before this atomic will be executed before the atomic.
      • Acquire/Release - both Acquire and Release - ie code stays before and after.
      • SeqCst - a stronger form of Acquire/Release.
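As a minimal, runnable sketch of that summary (assuming Rust's std::sync::atomic API; the helper name parallel_count is mine), here is a shared counter incremented with SeqCst, the "when in doubt" ordering:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// Increment a shared counter from several threads. fetch_add is a single
// atomic read-modify-write; SeqCst prevents any re-ordering around it.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    c.fetch_add(1, Ordering::SeqCst);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::SeqCst)
}

fn main() {
    // No increments are lost: prints 4000.
    println!("{}", parallel_count(4, 1000));
}
```

With a plain non-atomic counter this would race; with the atomic, every increment is observed exactly once regardless of interleaving.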

      Long Version … let’s begin …

      So why do we have memory and operation orderings at all? Let’s look at some code to explain:

      let mut x = 0;
      let mut y = 0;
      x = x + 3;
      y = y + 7;
      x = x + 4;
      x = y + x;

      Really trivial example - now to us as a human, we read this and see a set of operations that are linear by time. That means, they execute from top to bottom, in order.

      However, this is not how computers work. First, compilers will optimise your code, and optimisation means re-ordering of the operations to achieve better results. A compiler may optimise this to:

      let mut x = 0;
      let mut y = 0;
      // Note removal of the x + 3 and x + 4, folded to a single operation.
      x = x + 7;
      y = y + 7;
      x = y + x;

      Now there is a second element. Your CPU presents the illusion of running as a linear system, but it’s actually an asynchronous, out-of-order task execution engine. That means a CPU will reorder your instructions, and may even run them concurrently and asynchronously.

      For example, your CPU will have both x + 7 and y + 7 in the pipeline, even though neither operation has completed - they are effectively running at the “same time” (concurrently).

      When you write a single thread program, you generally won’t notice this behaviour. This is because a lot of smart people write compilers and CPUs to give the illusion of linear ordering, even though both of them are operating very differently.

      Now we want to write a multithreaded application. Suddenly this is the challenge:

      We write a concurrent program, in a linear language, executed on a concurrent asynchronous machine.

      This means there is a challenge in the translation between our mind (thinking about the concurrent problem), the program (which we have to express as a linear set of operations), and our CPU (an async concurrent device) which then runs it.

      Phew. How do computers even work in this scenario?!

      Why are CPU’s async?

      CPUs have to be async to be fast - remember Spectre and Meltdown? These are attacks based on measuring the side effects of a CPU’s asynchronous behaviour. While computers are “fast” these attacks will always be possible, because making a CPU synchronous is slow - and asynchronous behaviour will always have measurable side effects. Every modern CPU’s performance is an illusion of async black magic.

      A large portion of the async behaviour comes from the interaction of the CPU, cache, and memory.

      In order to provide the “illusion” of a coherent synchronous memory interface, there is no separation of your program’s cache and memory. When the CPU wants to access “memory”, the CPU cache is utilised transparently and will handle the request, and only on a cache miss will we retrieve the values from RAM.

      (Aside: in almost all cases more CPU cache, not higher frequency, will make your system perform better, because a cache miss means your task stalls waiting on RAM. Ohh no!)

      CPU -> Cache -> RAM

      When you have multiple CPUs, each CPU has its own L1 cache:

      CPU1 -> L1 Cache -> |              |
      CPU2 -> L1 Cache -> | Shared L2/L3 | -> RAM
      CPU3 -> L1 Cache -> |              |
      CPU4 -> L1 Cache -> |              |

      Ahhh! Suddenly we can see where problems can occur - each CPU has an L1 cache, which is transparent to memory but unique to the CPU. This means that each CPU can make a change to the same piece of memory in their L1 cache without the other CPU knowing. To help explain, let’s show a demo.

      CPU just trash my variables fam

      We’ll assume we now have two threads - my code is in rust again, and there is a good reason for the unsafes - this code really is unsafe!

      // assume global x: usize = 0; y: usize = 0;
      THREAD 1                        THREAD 2
      if unsafe { *x == 1 } {         unsafe {
          unsafe { *y += 1 }              *y = 10;
      }                                   *x = 1;
                                      }

      At the end of execution, what state will X and Y be in? The answer is “it depends”:

      • What order did the threads run?
      • The state of the L1 cache of each CPU
      • The possible interleavings of the operations.
      • Compiler re-ordering

      In the end the result of x will always be 1 - because x is only mutated in one thread, the caches will “eventually” (explained soon) become consistent.

      The real question is y. y could be:

      • 10
      • 11
      • 1

      10 - This can occur because in thread 2, x = 1 is re-ordered above y = 10, causing the thread 1 “y += 1” to execute, followed by thread 2 assign 10 directly to y. It can also occur because the check for x == 1 occurs first, so y += 1 is skipped, then thread 2 is run, causing y = 10. Two ways to achieve the same result!

      11 - This occurs in the “normal” execution path - all things considered it’s a miracle :)

      1 - This is the most complex one - The y = 10 in thread 2 is applied, but the result is never sent to THREAD 1’s cache, so x = 1 occurs and is made available to THREAD 1 (yes, this is possible to have different values made available to each cpu …). Then thread 1 executes y (0) += 1, which is then sent back trampling the value of y = 10 from thread 2.

      If you want to know more about this and many other horrors of CPU execution, Paul McKenney is an expert in this field and has many talks at LCA and elsewhere on the topic. He can be found on Twitter and is super helpful if you have questions.

      So how does a CPU work at all?

      Obviously your system (likely a multicore system) works today - so it must be possible to write correct concurrent software. Caches are kept in sync via a protocol called MESI. This is a state machine describing the states of memory and cache, and how they can be synchronised. The states are:

      • Modified
      • Exclusive
      • Shared
      • Invalid

      What’s interesting about MESI is that each cache line maintains its own state machine of the memory addresses - it’s not a global state machine. To coordinate, CPUs asynchronously message each other.

      A CPU can be messaged via IPC (Inter-Processor-Communication) to say that another CPU wants to “claim” exclusive ownership of a memory address, or to indicate that it has changed the content of a memory address and you should discard your version. It’s important to understand these messages are asynchronous. When a CPU modifies an address it does not immediately send the invalidation message to all other CPUs - and when a CPU receives the invalidation request it does not immediately act upon that message.

      If CPUs did “synchronously” act on all these messages, they would be spending so much time handling IPC traffic, they would never get anything done!
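As a toy model only (real protocols have extra transient states and more message types, and the function names here are mine), the per-cache-line transitions described above can be sketched like this:

```rust
// A simplified sketch of per-cache-line MESI transitions.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Mesi {
    Modified,
    Exclusive,
    Shared,
    Invalid,
}

// This CPU writes the line: it claims exclusive ownership and dirties it.
fn on_local_write(_: Mesi) -> Mesi {
    Mesi::Modified
}

// Another CPU asks to read the line: a dirty or exclusive copy becomes Shared.
fn on_remote_read(state: Mesi) -> Mesi {
    match state {
        Mesi::Modified | Mesi::Exclusive => Mesi::Shared,
        other => other,
    }
}

// Another CPU claims the line for writing: our copy becomes stale.
fn on_remote_write(_: Mesi) -> Mesi {
    Mesi::Invalid
}

fn main() {
    let s = on_local_write(Mesi::Invalid); // we wrote: Modified
    let s = on_remote_read(s);             // someone read: Shared
    let s = on_remote_write(s);            // someone wrote: Invalid
    println!("{:?}", s); // prints Invalid
}
```

The point of the sketch is that every transition is driven by a message about one address, not by a global view of memory.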

      As a result, it must be possible to indicate to a CPU that it’s time to send or acknowledge these invalidations in the cache line. This is where barriers, or the memory orderings come in.

      • Relaxed - no messages are sent or acknowledged.
      • Release - flush all pending invalidations to be sent to other CPUs.
      • Acquire - acknowledge and process all invalidation requests in my queue.
      • Acquire/Release - flush all outgoing invalidations, and process my incoming queue.
      • SeqCst - as AcqRel, but with some other guarantees around ordering that are beyond this discussion.
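This Release/Acquire pairing is what makes the classic message-passing pattern safe. A sketch in Rust (the statics DATA and READY are my own names, not from the post):

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::thread;

static DATA: AtomicUsize = AtomicUsize::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn producer() {
    DATA.store(42, Ordering::Relaxed);
    // Release: our pending invalidations are flushed, so the DATA write
    // above is visible to anyone who Acquires READY and sees it true.
    READY.store(true, Ordering::Release);
}

fn consumer() -> usize {
    // Acquire: process incoming invalidations - once we observe
    // READY == true, we are guaranteed to see DATA == 42, not a stale 0.
    while !READY.load(Ordering::Acquire) {
        std::hint::spin_loop();
    }
    DATA.load(Ordering::Relaxed)
}

fn main() {
    let t = thread::spawn(consumer);
    producer();
    println!("{}", t.join().unwrap()); // prints 42
}
```

With Relaxed on both the READY store and load, the consumer could legally observe READY as true while still seeing the stale DATA value.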

      Understand a Mutex

      With this knowledge in place, we are finally in a position to understand the operations of a Mutex.

      // Assume mutex: Mutex<usize> = Mutex::new(0);
      THREAD 1                            THREAD 2
      {                                   {
          let guard = mutex.lock()            let guard = mutex.lock()
          *guard += 1;                        println!("{}", *guard)
      }                                   }

      We know very clearly that this will print 1 or 0 - it’s safe, no weird behaviours. Let’s explain this case though:

      THREAD 1
          let guard = mutex.lock()
          // Acquire here!
          // All invalidation handled, guard is 0.
          // Compiler is told "all following code must stay after .lock()".
          *guard += 1;
          // content of usize is changed, invalid req is queue
      // Release here!
      // Guard goes out of scope, invalidation reqs sent to all CPU's
      // Compiler told all preceding code must stay above this point.
                  THREAD 2
                      let guard = mutex.lock()
                      // Acquire here!
                      // All invalidations handled - previous cache of usize discarded
                      // and read from THREAD 1 cache into S state.
                      // Compiler is told "all following code must stay after .lock()".
                  // Release here!
                  // Guard goes out of scope, no invalidations sent due to
                  // no modifications.
                  // Compiler told all preceding code must stay above this point.

      And there we have it! How barriers allow us to define an ordering in code and a CPU, to ensure our caches and compiler outputs are correct and consistent.
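To tie the barrier story together, here is a toy spinlock built directly from those Acquire and Release barriers. This is a sketch only (use the standard library Mutex in real code - it also handles poisoning and parking, which this does not):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A toy spinlock: the lock word is a single atomic flag.
struct SpinLock {
    locked: AtomicBool,
}

impl SpinLock {
    const fn new() -> Self {
        SpinLock { locked: AtomicBool::new(false) }
    }

    fn lock(&self) {
        // Acquire on success: code after lock() stays after it, and our
        // cache processes pending invalidations before we proceed.
        while self
            .locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_err()
        {
            std::hint::spin_loop();
        }
    }

    fn unlock(&self) {
        // Release: writes made while holding the lock are flushed before
        // the next lock() can observe locked == false.
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    static LOCK: SpinLock = SpinLock::new();
    LOCK.lock();
    // ... critical section would go here ...
    LOCK.unlock();
    println!("lock acquired and released");
}
```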

      Benefits of Rust

      A nice benefit of Rust, now that we know these MESI states, is that we can see the best way to run a system is to minimise the number of invalidations being sent and acknowledged, as this always causes a delay on CPU time. Rust variables are always mutable or immutable. These map almost directly to the E and S states of MESI. A mutable value is always exclusive to a single cache line, with no contention - and immutable values can be placed into the Shared state, allowing each CPU to maintain a cache copy for higher performance.

      This is one of the reasons for Rust’s amazing concurrency story: the memory in your program maps to cache states very clearly.

      It’s also why it’s unsafe to mutate a pointer between two threads (a global) - because the caches of the two CPUs won’t be coherent; you may not cause a crash, but one thread’s work will absolutely be lost!

      Finally, it’s important to see that this is why using the correct concurrency primitives matter - it can highly influence your cache behaviour in your program and how that affects cache line contention and performance.

      For comments and more, please feel free to email me!

      Shameless Plug

      I’m the author and maintainer of Conc Read - a concurrently readable data structure library for Rust. Check it out on crates.io!

      ASG! 2019 CfP Re-Opened!

      Posted by Lennart Poettering on July 14, 2019 10:00 PM

      <large>The All Systems Go! 2019 Call for Participation Re-Opened for ONE DAY!</large>

      Due to popular request we have re-opened the Call for Participation (CFP) for All Systems Go! 2019 for one day. It will close again TODAY, on 15 July 2019, at midnight Central European Summer Time! If you missed the deadline so far, we’d like to invite you to submit your proposals for consideration to the CFP submission site quickly! (And yes, this is the last extension, there's not going to be any more extensions.)

      ASG image

      All Systems Go! is everybody's favourite low-level userspace Linux conference, taking place in Berlin, Germany on September 20-22, 2019.

      For more information please visit our conference website!

      HP, Linux and ACPI

      Posted by Luya Tshimbalanga on July 14, 2019 05:35 PM
      The majority of HP hardware running Linux, and even Microsoft, has reported an issue related to non-standard-compliant ACPI. The notable message below repeats at least three times during boot:

      [ 4.876549] ACPI BIOS Error (bug): AE_AML_BUFFER_LIMIT, Field [D128] at bit offset/length 128/1024 exceeds size of target Buffer (160 bits) (20190215/dsopcode-198)
      [ 4.876555] ACPI Error: Aborting method \HWMC due to previous error (AE_AML_BUFFER_LIMIT) (20190215/psparse-529) 
      [ 4.876562] ACPI Error: Aborting method \_SB.WMID.WMAA due to previous error (AE_AML_BUFFER_LIMIT) (20190215/psparse-529)

      The bug has been known for years, and the Linux kernel team is unable to fix it without help from the vendor, i.e. HP. Here is a compilation of reports:
       The good news is that some of these errors seem harmless. Unfortunately, such errors expose the quirks approach used by vendors to support the Microsoft Windows system, which is bad practice. It is one case of how such an approach leads to issues on even the officially supported operating system on HP hardware.

      Ideally, HP would provide a BIOS fix for the affected hardware and officially support the Linux ecosystem, much like its printing department does. The Linux Vendor Firmware Service would be a good start, and so far Dell is the leader in that department. American Megatrends Inc., the company developing the BIOS/UEFI for HP, has made the process easier, so it is a matter of fully enabling the support.

      bzip2 1.0.8

      Posted by Mark J. Wielaard on July 13, 2019 07:38 PM

      We are happy to announce the release of bzip2 1.0.8.

      This is a fixup release because the CVE-2019-12900 fix in bzip2 1.0.7 was too strict and might have prevented decompression of some files that earlier bzip2 versions could decompress. And it contains a few more patches from various distros and forks.

      bzip2 1.0.8 contains the following fixes:

      • Accept as many selectors as the file format allows. This relaxes the fix for CVE-2019-12900 from 1.0.7 so that bzip2 allows decompression of bz2 files that use (too) many selectors again.
      • Fix handling of large (> 4GB) files on Windows.
      • Cleanup of bzdiff and bzgrep scripts so they don’t use any bash extensions and handle multiple archives correctly.
      • There is now a bz2-files testsuite at https://sourceware.org/git/bzip2-tests.git

      Patches by Joshua Watt, Mark Wielaard, Phil Ross, Vincent Lefevre, Led and Kristýna Streitová.

      This release also finalizes the move of bzip2 to a community maintained project at https://sourceware.org/bzip2/

      Thanks to Bhargava Shastry, bzip2 is now also part of oss-fuzz to catch fuzzing issues early and (hopefully not) often.

      All systems go

      Posted by Fedora Infrastructure Status on July 12, 2019 10:20 PM
      Service 'Pagure' now has status: good: Everything seems to be working.

      There are scheduled downtimes in progress

      Posted by Fedora Infrastructure Status on July 12, 2019 08:59 PM
      Service 'Pagure' now has status: scheduled: scheduled outage: https://pagure.io/fedora-infrastructure/issue/7980

      FPgM report: 2019-28

      Posted by Fedora Community Blog on July 12, 2019 08:56 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora Program Management this week. I am on PTO the week of 15 July, so there will be no FPgM report or FPgM office hours next week.

      I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


      Upcoming meetings

      Fedora 31


      • 2019-07-23 — Self-contained changes due
      • 2019-07-24–08-13 — Mass rebuild
      • 2019-08-13 — Code complete (testable) deadline
      • 2019-08-13 — Fedora 31 branch point



      Submitted to FESCo

      Approved by FESCo

      The post FPgM report: 2019-28 appeared first on Fedora Community Blog.

      Settings, in a sandbox world

      Posted by Matthias Clasen on July 12, 2019 06:19 PM

      GNOME applications (and others) are commonly using the GSettings API for storing their application settings.

      GSettings has many nice aspects:

      • flexible data types, with GVariant
      • schemas, so others can understand your settings (e.g. dconf-editor)
      • overrides, so distros can tweak defaults they don’t like

      And it has different backends, so it can be adapted to work transparently in many situations. One example for where this comes in handy is when we use a memory backend to avoid persisting any settings while running tests.

      The GSettings backend that is typically used for normal operation is the DConf one.


      DConf features include profiles, a stack of databases, a facility for locking down keys so they are not writable, and a single-writer design with a central service.

      The DConf design is flexible and enterprisey – we have taken advantage of this when we created fleet commander to centrally manage application and desktop settings for large deployments.

      But it is not a great fit for sandboxing, where we want to isolate applications from each other and from the host system.  In DConf, all settings are stored in a single database, and apps are free to read and write any keys, not just their own – plenty of potential for mischief and accidents.

      Most of the apps that are available as flatpaks today are poking a ‘DConf hole’ into their sandbox to allow the GSettings code to keep talking to the dconf daemon on the session bus, and mmap the dconf database.

      Here is how the DConf hole looks in the flatpak metadata file:

      [Session Bus Policy]
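The body of that section is elided above; the full hole, as commonly seen in flatpak manifests (this exact snippet is reconstructed from typical usage, not taken from this post), grants talk access to the dconf service plus filesystem access to its database:

```ini
[Context]
filesystems=xdg-run/dconf;~/.config/dconf:ro;

[Session Bus Policy]
ca.desrt.dconf=talk

[Environment]
DCONF_USER_CONFIG_DIR=.config/dconf
```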


      Ideally, we want sandboxed apps to only have access to their own settings, and maybe readonly access to a limited set of shared settings (for things like the current font, or accessibility settings). It would also be nice if uninstalling a sandboxed app did not leave traces behind, like leftover settings in some central database.

      It might be possible to retrofit some of this into DConf. But when we looked, it did not seem easy, and would require reconsidering some of the central aspects of the DConf design. Instead of going down that road, we decided to take advantage of another GSettings backend that already exists, and stores settings in a keyfile.

      Unsurprisingly, it is called the keyfile backend.


      The keyfile backend was originally created to facilitate the migration from GConf to GSettings, and has been a bit neglected, but we’ve given it some love and attention, and it can now function as the default GSettings backend inside sandboxes.

It provides many of the isolation aspects we want: apps can only read and write their own settings, and the settings are in a single file, in the same place as all the application data:

~/.var/app/$APP_ID/config/glib-2.0/settings/keyfile
      One of the things we added to the keyfile backend is support for locks and overrides, so that fleet commander can keep working for apps that are in flatpaks.

      For shared desktop-wide settings, there is a companion Settings portal, which provides readonly access to some global settings. It is used transparently by GTK and Qt for toolkit-level settings.

      What does all this mean for flatpak apps?

      If your application is not yet available as a flatpak, and you want to provide one, you don’t have to do anything in particular. Things will just work. Don’t poke a hole in your sandbox for DConf, and GSettings will use the keyfile backend without any extra work on your part.

      If your flatpak is currently shipping with a DConf hole, you can keep doing that for now. When you are ready for it, you should

      • Remove the DConf hole from your flatpak metadata
• Instruct flatpak to migrate existing DConf settings, by adding a migrate-path setting to the X-DConf section in your flatpak metadata. The value of the migrate-path key is the DConf path prefix where your application’s settings are stored.

Note that this is a one-time migration; it will only happen if the keyfile does not exist. The existing settings will be left in the DConf database, so if you need to do the migration again for whatever reason, you can simply remove the keyfile.

This is how the migrate-path key looks in the metadata file (using GNOME Builder’s settings path as an example):

[X-DConf]
migrate-path=/org/gnome/builder/
      Closing the DConf hole is what makes GSettings use the keyfile backend, and the migrate-path key tells flatpak to migrate settings from DConf – you need both parts for a seamless transition.

      There were some recent fixes to the keyfile backend code, so you want to make sure that the runtime has GLib 2.60.6, for best results.

      Happy flatpaking!

      Update: One of the most recent fixes in the keyfile backend was to correct under what circumstances GSettings will choose it as the default backend. If you have problems where the wrong backend is chosen, as a short-term workaround, you can override the choice with the GSETTINGS_BACKEND environment variable.

Update 2: To add the migrate-path setting with flatpak-builder, set it via the finish-args in your manifest (again using GNOME Builder’s path as an example):

"finish-args" : [
    "--metadata=X-DConf=migrate-path=/org/gnome/builder/"
]

      GNOME Software in Fedora will no longer support snapd

      Posted by Richard Hughes on July 12, 2019 12:51 PM

      In my slightly infamous email to fedora-devel I stated that I would turn off the snapd support in the gnome-software package for Fedora 31. A lot of people agreed with the technical reasons, but failed to understand the bigger picture and asked me to explain myself.

      I wanted to tell a little, fictional, story:

      In 2012 the ISO institute started working on a cross-vendor petrol reference vehicle to reduce the amount of R&D different companies had to do to build and sell a modern, and safe, saloon car.

      Almost immediately, Mercedes joins ISO, and starts selling the ISO car. Fiat joins in 2013, Peugeot in 2014 and General Motors finally joins in 2015 and adds support for Diesel engines. BMW, who had been trying to maintain the previous chassis they designed on their own (sold as “BMW Kar Koncept”), finally adopts the ISO car also in 2015. BMW versions of the ISO car use BMW-specific transmission oil as it doesn’t trust oil from the ISO consortium.

      Mercedes looks to the future, and adds high-voltage battery support to the ISO reference car also in 2015, adding the required additional wiring and regenerative braking support. All the other members of the consortium can use their own high voltage batteries, or use the reference battery. The battery can be charged with electricity from any provider.

In 2016 BMW stops marketing the “ISO Car” like all the other vendors, and starts calling it “BMW Car” instead. At about the same time BMW adds support for hydrogen engines to the reference vehicle. All the other vendors can ship the ISO car with a Hydrogen engine, but all the hydrogen must be purchased from a BMW-certified dealer. If any vendor other than BMW uses the hydrogen engines, they can’t use the BMW-specific heat shield which protects the fuel tank from exploding in the event of a collision.

      In 2017 Mercedes adds traction control and power steering to the ISO reference car. It is enabled almost immediately and used by nearly all the vendors with no royalties and many customer lives are saved.

In 2018 BMW decides that actually producing vendor-specific oil for its cars is quite a lot of extra work, and tells all customers existing transmission oil has to be thrown away, but now all customers can get free oil from the ISO consortium. The ISO consortium distributes a lot more oil, but also has to deal with a lot more customer queries about transmission failures.

      In 2019 BMW builds a special cut-down ISO car, but physically removes all the petrol and electric functionality from the frame. It is rebranded as “Kar by BMW”. It then sends a private note to the chair of the ISO consortium that it’s not going to be using ISO car in 2020, and that it’s designing a completely new “Kar” that only supports hydrogen engines and does not have traction control or seatbelts. The explanation given was that BMW wanted a vehicle that was tailored specifically for hydrogen engines. Any BMW customers using petrol or electricity in their car must switch to hydrogen by 2020.

      The BMW engineers that used to work on ISO Car have been shifted to work on Kar, although have committed to also work on Car if it’s not too much extra work. BMW still want to be officially part of the consortium and to be able to sell the ISO Car as an extra vehicle to the customer that provides all the engine types (as some customers don’t like hydrogen engines), but doesn’t want to be seen to support anything other than a hydrogen-based future. It’s also unclear whether the extra vehicle sold to customers would be the “ISO Car” or the “BMW Car”.

One ISO consortium member asks whether they should remove hydrogen engine support from the ISO car as they feel BMW is not playing fair. Another consortium member thinks that the extra functionality could just be disabled by default and any unused functionality should certainly be removed. All members of the consortium feel like BMW has pushed them too far. Mercedes stops selling the hydrogen ISO Car model, stating it’s not safe without the heat shield, and because BMW isn’t going to be supporting the ISO Car in 2020.

      What is Silverblue?

      Posted by Fedora Magazine on July 12, 2019 08:00 AM

Fedora Silverblue is becoming more and more popular inside and outside the Fedora world. So based on feedback from the community, here are answers to some interesting questions about the project. If you have any other Silverblue related questions, please leave them in the comments section and we will try to answer them in a future article.

      What is Silverblue?

      Silverblue is a codename for the new generation of the desktop operating system, previously known as Atomic Workstation. The operating system is delivered in images that are created by utilizing the rpm-ostree project. The main benefits of the system are speed, security, atomic updates and immutability.

      What does “Silverblue” actually mean?

“Team Silverblue” or “Silverblue” in short doesn’t have any hidden meaning. It was chosen after roughly two months when the project, previously known as Atomic Workstation, was rebranded. There were over 150 words or word combinations reviewed in the process. In the end Silverblue was chosen because it had an available domain as well as the social network accounts. One could think of it as a new take on Fedora’s blue branding, and it could be used in phrases like “Go, Team Silverblue!” or “Want to join the team and improve Silverblue?”.

      What is ostree?

OSTree or libostree is a project that combines a “git-like” model for committing and downloading bootable filesystem trees, together with a layer to deploy them and manage the bootloader configuration. OSTree is used by rpm-ostree, a hybrid package/image based system that Silverblue uses. It atomically replicates a base OS and allows the user to “layer” traditional RPM packages on top of the base OS if needed.

      Why use Silverblue?

      Because it allows you to concentrate on your work and not on the operating system you’re running. It’s more robust as the updates of the system are atomic. The only thing you need to do is to restart into the new image. Also, if there’s anything wrong with the currently booted image, you can easily reboot/rollback to the previous working one, if available. If it isn’t, you can download and boot any other image that was generated in the past, using the ostree command.

Another advantage is the possibility of an easy switch between branches (or, in an old context, Fedora releases). You can easily try the Rawhide or updates-testing branch and then return to the one that contains the current stable release. Also, you should consider Silverblue if you want to try something new and unusual.
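The rollback mentioned above is only a couple of commands. A minimal sketch, to be run on a Silverblue system:

```shell
# Show the available deployments (the booted one and the previous one):
rpm-ostree status

# Point the bootloader back at the previous deployment, then reboot into it:
rpm-ostree rollback
systemctl reboot
```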

      What are the benefits of an immutable OS?

      Having the root filesystem mounted read-only by default increases resilience against accidental damage as well as some types of malicious attack. The primary tool to upgrade or change the root filesystem is rpm-ostree.

Another benefit is robustness. It’s nearly impossible for a regular user to get the OS into a state where it doesn’t boot or doesn’t work properly after accidentally or unintentionally removing some system library. Try to think about these kinds of experiences from your past, and imagine how Silverblue could have helped you there.

      How does one manage applications and packages in Silverblue?

For graphical user interface applications, Flatpak is recommended, if the application is available as a flatpak. Users can choose between Flatpaks from Fedora, built from Fedora packages in Fedora-owned infrastructure, or from Flathub, which currently has a wider offering. Users can install them easily through GNOME Software, which already supports Fedora Silverblue.

One of the first things users find out is that there is no dnf preinstalled in the OS. The main reason is that it wouldn’t work on Silverblue — part of its functionality was replaced by the rpm-ostree command. Users can overlay traditional packages by using rpm-ostree install PACKAGE, but this should only be done when there is no other way. The reason is that whenever new system images are pulled from the repository, the system image must be rebuilt to accommodate the layered packages, or packages that were removed from the base OS or replaced with a different version.

      Fedora Silverblue comes with the default set of GUI applications that are part of the base OS. The team is working on porting them to Flatpaks so they can be distributed that way. As a benefit, the base OS will become smaller and easier to maintain and test, and users can modify their default installation more easily. If you want to look at how it’s done or help, take a look at the official documentation.

      What is Toolbox?

Toolbox is a project that makes containers easily consumable for regular users. It does that by using podman’s rootless containers. Toolbox lets you easily and quickly create a container with a regular Fedora installation that you can play with or develop in, separated from your OS.
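A minimal sketch of that workflow, assuming the toolbox package is installed (the container defaults to the host’s Fedora release):

```shell
# Create a toolbox container based on the host's Fedora release:
toolbox create

# Enter the container; inside it, dnf and other tools work as on a
# regular Fedora installation, isolated from the host OS:
toolbox enter
```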

      Is there any Silverblue roadmap?

      Formally there isn’t any, as we’re focusing on problems we discover during our testing and from community feedback. We’re currently using Fedora’s Taiga to do our planning.

What’s the release life cycle of Silverblue?

      It’s the same as regular Fedora Workstation. A new release comes every 6 months and is supported for 13 months. The team plans to release updates for the OS bi-weekly (or longer) instead of daily as they currently do. That way the updates can be more thoroughly tested by QA and community volunteers before they are sent to the rest of the users.

      What is the future of the immutable OS?

From our point of view the future of the desktop involves the immutable OS. It’s safest for the user, and Android, ChromeOS, and the latest macOS, Catalina, all use this method under the hood. For the Linux desktop there are still problems with some third party software that expects to write to the OS. HP printer drivers are a good example.

      Another issue is how parts of the system are distributed and installed. Fonts are a good example. Currently in Fedora they’re distributed in RPM packages. If you want to use them, you have to overlay them and then restart to the newly created image that contains them.

      What is the future of standard Workstation?

There is a possibility that Silverblue will replace the regular Workstation. But there’s still a long way to go for Silverblue to provide the same functionality and user experience as the Workstation. In the meantime both desktop offerings will be delivered at the same time.

      How does Atomic Workstation or Fedora CoreOS relate to any of this?

      Atomic Workstation was the name of the project before it was renamed to Fedora Silverblue.

      Fedora CoreOS is a different, but similar project. It shares some fundamental technologies with Silverblue, such as rpm-ostree, toolbox and others. Nevertheless, CoreOS is a more minimal, container-focused and automatically updating OS.


      Posted by Porfirio A. Páiz - porfiriopaiz on July 12, 2019 05:32 AM

      Software Repositories

Once the problems of getting connected to the Internet and launching a terminal are solved, you might want to install all the software you use.

That software has to come from somewhere; on Fedora, those sources are called software repositories. Below I detail the ones I enable on all my Fedora installs, in addition to the official repositories that come preinstalled and enabled by default.

      Open a terminal and enable some of these.


      RPM Fusion is a repository of add-on packages for Fedora and EL+EPEL maintained by a group of volunteers. RPM Fusion is not a standalone repository, but an extension of Fedora. RPM Fusion distributes packages that have been deemed unacceptable to Fedora.

      More about RPMFusion on its official website: https://rpmfusion.org/FAQ

      su -c 'dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'

      Fedora Workstation Repositories

      From the Fedora wiki page corresponding to Fedora Workstation Repositories:

      The Fedora community strongly promotes free and open source resources. The Fedora Workstation, in its out of the box configuration, therefore, only includes free and open source software. To make the Fedora Workstation more usable, we've made it possible to easily install a curated set of third party (external) sources that supply software not included in Fedora via an additional package.

      Read more at: https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories

      Please note that this will only install the *.repo files, it will not enable the provided repos:

      su -c 'dnf install fedora-workstation-repositories'

      Fedora Rawhide's Repositories

      Rawhide is the name given to the current development version of Fedora. It consists of a package repository called "rawhide" and contains the latest build of all Fedora packages updated on a daily basis. Each day, an attempt is made to create a full set of 'deliverables' (installation images and so on), and all that compose successfully are included in the Rawhide tree for that day.

It is possible to install its repository files and temporarily enable them for just a single transaction, say, to install or upgrade a single package and its dependencies, perhaps to try a new version that is not yet available in any of the stable, maintained versions of Fedora.

This is useful when a bug has been fixed in Rawhide but the fix has not yet landed in the stable branch of Fedora, and you cannot wait for it.

Again, this will just install the *.repo file under /etc/yum.repos.d/; it will not enable the repository. Later we will see how to handle, disable and enable these repositories for just one transaction.

      More on Rawhide on its wiki page: https://fedoraproject.org/wiki/Releases/Rawhide

      su -c 'dnf install fedora-repos-rawhide'
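As a sketch of that single-transaction workflow (PACKAGE is a placeholder for the package you want), the repo can be enabled for one dnf run without enabling it permanently:

```shell
# Upgrade one package and its dependencies from Rawhide,
# enabling the rawhide repo for this transaction only:
su -c 'dnf upgrade --enablerepo=rawhide PACKAGE'
```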


      Copr is an easy-to-use automatic build system providing a package repository as its output.

      Here are some of the repos I rely on for some packages:


      Remarkable is a free fully featured markdown editor.

      su -c 'dnf -y copr enable neteler/remarkable'


      Gajim is a Jabber client written in PyGTK, currently it provides support for the OMEMO encryption method which I use. This repo provides tools and dependencies not available in the official Fedora repo.

      su -c 'dnf -y copr enable philfry/gajim'


      QGIS is a user friendly Open Source Geographic Information System.

      su -c 'dnf -y copr enable dani/qgis'


      This provides the .NET CLI tools and runtime for Fedora.

      su -c 'dnf copr enable dotnet-sig/dotnet'


A few weeks ago I decided to give a try to VSCodium, a fork of VSCode. Here is how to enable its repo for Fedora.

First import its GPG key, so that you can verify the packages retrieved from the repo:

      su -c 'rpm --import https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg'

Now create the vscodium.repo file. The repository definition below follows the upstream VSCodium instructions (check there for the current baseurl):

su -c "tee -a /etc/yum.repos.d/vscodium.repo << 'EOF'
[vscodium]
name=VSCodium
baseurl=https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/repos/rpms/
enabled=1
gpgcheck=1
gpgkey=https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg
EOF"

Now check that all the repos have been successfully installed, and that some of them are enabled, by refreshing the dnf metadata.

      su -c 'dnf check-update'

That’s all. In the next post we will see how to enable some of these repos, how to temporarily disable or enable others for a single transaction, how to install or upgrade certain packages from a specific repo, and many other repo administration tasks.

      Firefox 68 available now in Fedora

      Posted by Fedora Magazine on July 12, 2019 01:33 AM

      Earlier this week, Mozilla released version 68 of the Firefox web browser. Firefox is the default web browser in Fedora, and this update is now available in the official Fedora repositories.

      This Firefox release provides a range of bug fixes and enhancements, including:

• Better handling when using dark GTK themes (like Adwaita Dark). Previously, running a dark theme could cause issues where user interface elements on a rendered webpage (like forms) were rendered in the dark theme on a white background. Firefox 68 resolves these issues. Refer to these two Mozilla bugzilla tickets for more information.
      • The about:addons special page has two new features to keep you safer when installing extensions and themes in Firefox. First is the ability to report security and stability issues with addons directly in the about:addons page. Additionally, about:addons now has a list of secure and stable extensions and themes that have been vetted by the Recommended Extensions program.

      Updating Firefox in Fedora

      Firefox 68 has already been pushed to the stable Fedora repositories. The security fix will be applied to your system with your next update. You can also update the firefox package only by running the following command:

      $ sudo dnf update --refresh firefox

This command requires you to have sudo set up on your system. Additionally, note that not every Fedora mirror syncs at the same rate. Community sites graciously donate space and bandwidth to these mirrors to carry Fedora content. You may need to try again later if your selected mirror is still awaiting the latest update.

      GCI 2018 mentor’s summit @ Google headquarters

      Posted by Fedora Community Blog on July 11, 2019 12:54 PM


Google Code-in is a contest to introduce students (ages 13-17) to open source software development. Since 2010, 8,108 students from 107 countries have completed over 40,100 open source tasks. Because Google Code-in is often the first experience many students have with open source, the contest is designed to make it easy for students to jump right in. I was one of the mentors in this program, in which Fedora participated for the first time. We had 125 students participating in Fedora, and the top 3 students completed 26, 25 and 22 tasks each.

Every year Google invites the grand prize winners, their parents, and a mentor to its headquarters in San Francisco, California for a 4-day trip. I was offered the opportunity to go and represent Fedora at the summit and meet these 2 brilliant folks in person. This report covers the activities and other things that happened there.


From coming up with a variety of tasks at different levels to verifying tasks on time, it was quite an experience for all of us. An active and helpful group of mentors helped the students whenever required. When an assigned mentor became busy or unavailable, others were ready to step in and take care of the task. Thanks to proper communication, we were always able to review tasks on time.

We were 8 mentors, handling different kinds of tasks.


      Margii and EchoDuck

      Margii (left) – EchoDuck (Right) in Google Android park

Both our winners were amazing and did their tasks with great quality. I am still surprised to see such work at their young age.
Margii wants to contribute to the Fedora website and other Infra projects, and he is great with Python/Flask. EchoDuck is also interested in contributing to Fedora Infra, and he is looking to get his hands dirty with Rust (coding wherever required, and packaging). It’s an action item on me to connect them with the right people so that they can start their contribution journey. I am also hoping to see them as GCI mentors or GSoC students in the future.

      Day 1 (Reception)

We met the other mentors, students and parents in the hotel lobby at 5 pm and then left for the Google San Francisco office. We were welcomed with a lot of snacks and swag.
The best part was all the students receiving a Pixel 3 XL. The event was followed by a dinner, and we were dropped off back at the hotel.

      Day 2 (Full Day at Google headquarters in Mountain View.)

This day was special since we were going to explore the Googleplex and talk to Google engineers at the Google Cloud office. Mentors and students were also offered $75 and $150 respectively to buy Google merchandise of our choice. Students met with Google employees from their home countries and went off to have lunch with them. We had a line-up of talks by great folks from Google.

      • Recruiting – Lauren Alpizar
      • Android OS: Ally Sillins
      • Cloud: Ryan Matsumoto
      • Chrome OS – Grant Grundler
      • Google Assistant – Daniel Myers
      • Google TensorFlow – Paige Bailey

We had dinner at the same office and then headed back to the hotel.

      Day 3 (Fun Day in San Francisco)

Probably the day most of us will remember. We had the option of either a Segway tour or a cable car tour. I selected the cable car and toured the whole city (Twin Peaks, Lands End, downtown San Francisco, Golden Gate Park, and the Golden Gate Bridge, which I also walked across).

There was a yacht waiting for us at one of the piers. We embarked and cruised around Alcatraz Island and the Golden Gate Bridge. We had dinner on the yacht, and after such a fun day, obviously all of us were tired.

      Day 4 (Closing reception in Google SF office)

On the last day, we had to go to the office a bit early. We had breakfast in the office itself, followed by the award ceremony for the grand prize winners. Each org was given 4 minutes to share something if they wished, and a lot of students shared their experience with GCI. We had lunch after that, and meanwhile a video crew was interviewing the people who had signed up for it. We left the office by 3 pm after taking a lot of pictures in front of the San Francisco-Oakland Bay Bridge.

      People who made this possible

Thanks to every student who participated. Every one of you was amazing, and I hope to see you all again.
Thanks to Justin for being there when needed, facilitating the Telegram group and IRC bridge, and keeping the conversation alive. A very special thanks to Bex for being the backbone of this and all the other summer coding programs. And of course, to all the mentors: thank you for giving your time, and I hope to be a part of this along with all of you in the coming years.

      The post GCI 2018 mentor’s summit @ Google headquarters appeared first on Fedora Community Blog.

      NeuroFedora poster at CNS*2019

      Posted by The NeuroFedora Blog on July 11, 2019 12:25 PM

      With CNS*2019 around the corner, we worked on getting the NeuroFedora poster ready for the poster presentation session. Our poster is P96, on the first poster session on the 14th of July. The poster is also shown below:

      <object data="https://neurofedora.github.io/extra/2019-CNS-NeuroFedora.pdf" height="800" type="application/pdf" width="80%"> Your browser does not support previewing PDF files. Please download the file to view it. </object>

      The poster is made available under a CC-By license. Please feel free to share it around.

      The current team already consists of more people than the authors listed on the poster. The authors here were only the first set, and as the team grows, so will our author list for future publications. In general, we follow the standard rule: if one has contributed to the project since the previous publication, they get their name on the poster.

      Unfortunately, this time, no one from the team is able to attend the conference, but if you are there and want to learn more about NeuroFedora, please get in touch with us using any of our communication channels.

      To everyone that will be in Barcelona for the conference, we hope you have a fruitful one, and of course, we hope you are able to make some time to rest at the beach too.

NeuroFedora is a volunteer-driven initiative and contributions in any form are always welcome. You can get in touch with us here. We are happy to help you learn the skills needed to contribute to the project. In fact, that is one of the major goals of the initiative: to spread the technical knowledge that is necessary to develop software for neuroscience.

      Fedora job opening: Fedora Community Action and Impact Coordinator (FCAIC)

      Posted by Fedora Magazine on July 10, 2019 02:50 PM

      I’ve decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC).  This was not an easy decision to make. I am proud of the work I have done in Fedora over the last three years and I think I have helped the community move past many challenges.  I could NEVER have done all of this without the support and assistance of the community!

      As some of you know, I have been covering for some other roles in Red Hat for almost the last year.  Some of these tasks have led to some opportunities to take my career in a different direction. I am going to remain at Red Hat and on the same team with the same manager, but with a slightly expanded scope of duties.  I will no longer be day-to-day on Fedora and will instead be in a consultative role as a Community Architect at Large. This is a fancy way of saying that I will be tackling helping lots of projects with various issues while also working on some specific strategic objectives.

      I think this is a great opportunity for the Fedora community.  The Fedora I became FCAIC in three years ago is a very different place from the Fedora of today.  While I could easily continue to help shape and grow this community, I think that I can do more by letting some new ideas come in.  The new person will hopefully be able to approach challenges differently. I’ll also be here to offer my advice and feedback as others who have moved on in the past have done.  Additionally, I will work with Matthew Miller and Red Hat to help hire and onboard the new Fedora Community and Impact Coordinator. During this time I will continue as FCAIC.

This means that we are looking for a new FCAIC. Love Fedora? Want to work with Fedora full-time to help support and grow the Fedora community? This is the core of what the FCAIC does. The job description (also below) has a list of some of the primary job responsibilities and required skills, but that’s just a sample of the duties required, and of the day-to-day life of working full-time with the Fedora community.

      Day to day work includes working with Mindshare, managing the Fedora Budget, and being part of many other teams, including the Fedora Council.  You should be ready to write frequently about Fedora’s achievements, policies and decisions, and to draft and generate ideas and strategies. And, of course, planning Flock and Fedora’s presence at other events. It’s hard work, but also a great deal of fun.

      Are you good at setting long-term priorities and hacking away at problems with the big picture in mind? Do you enjoy working with people all around the world, with a variety of skills and interests, to build not just a successful Linux distribution, but a healthy project? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Is Fedora’s mission deeply important to you?

      If you said “yes” to those questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit apply online, or contact Matthew Miller, Brian Exelbierd, or Stormy Peters.

      Fedora Community Action and Impact Coordinator

      Location: CZ-Remote – prefer Europe but can be North America

      Company Description

      At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions. We offer cloud, Linux, middleware, storage, and virtualization technologies, together with award-winning global customer support, consulting, and implementation services. Red Hat is a rapidly growing company supporting more than 90% of Fortune 500 companies.

      Job summary

      Red Hat’s Open Source Programs Office (OSPO) team is looking for the next Fedora Community Action and Impact Lead. In this role, you will join the Fedora Council and guide initiatives to grow the Fedora user and developer communities, as well as make Red Hat and Fedora interactions even more transparent and positive. The Council is responsible for stewardship of the Fedora Project as a whole, and supports the health and growth of the Fedora community.

As the Fedora Community Action and Impact Lead, you’ll facilitate decision making on how to best focus the Fedora community budget to meet our collective objectives, work with other council members to identify the short-, medium-, and long-term goals of the Fedora community, and organize and enable the project.

      You will also help make decisions about trademark use, project structure, community disputes or complaints, and other issues. You’ll hold a full council membership, not an auxiliary or advisory role.

      Primary job responsibilities

      • Identify opportunities to engage new contributors and community members; align project around supporting those opportunities.
      • Improve on-boarding materials and processes for new contributors.
      • Participate in user and developer discussions and identify barriers to success for contributors and users.
      • Use metrics to evaluate the success of open source initiatives.
      • Regularly report on community metrics and developments, both internally and externally.  
      • Represent Red Hat’s stake in the Fedora community’s success.
      • Work with internal stakeholders to understand their goals and develop strategies for working effectively with the community.
      • Improve onboarding materials and presentation of Fedora to new hires; develop standardized materials on Fedora that can be used globally at Red Hat.
      • Work with the Fedora Council to determine the annual Fedora budget.
      • Assist in planning and organizing Fedora’s flagship events each year.
      • Create and carry out community promotion strategies; create media content like blog posts, podcasts, and videos, and facilitate the creation of media by other members of the community.

      Required skills

      • Extensive experience with the Fedora Project or a comparable open source community.
      • Exceptional writing and speaking skills.
      • Experience with software development and open source developer communities; understanding of development processes.
      • Outstanding organizational skills; ability to prioritize tasks against short- and long-term goals and focus on the highest-priority work.
      • Ability to manage a project budget.
      • Ability to lead teams and participate in multiple cross-organizational teams that span the globe.
      • Experience motivating volunteers and staff across departments and companies.

      Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

      Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

      Photo by Deva Williamson on Unsplash.

      Cockpit 198

      Posted by Cockpit Project on July 10, 2019 12:00 AM

      Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 198.

      PatternFly4 user interface design

      Cockpit has been restyled to match the PatternFly 4 User Interface design, including the Red Hat Text and Display fonts.

      This style refresh aligns Cockpit with other web user interfaces that use PatternFly, such as OpenShift 4.

      Over time, Cockpit will be ported to actually use PatternFly 4 widgets, but this restyle allows us to change Cockpit gradually.

      login page

      system page

      SELinux: Show changes

      The SELinux page now has a new section “System Modifications” which shows all policy settings that were made to the system (with Cockpit or otherwise):

      SELinux modifications

      The “View Automation Script” link will show a dialog with a shell script that can be used to apply the same changes to other machines:

      SELinux automation script
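For a sense of what such a script contains: the modifications boil down to a series of semanage calls. A hypothetical example follows; the boolean and file context here are invented for illustration, and the exact format Cockpit emits may differ:

```shell
#!/bin/sh
# Re-apply the same SELinux policy customizations on another machine
# (example settings only, not real Cockpit output).
semanage boolean -m --on httpd_can_network_connect
semanage fcontext -a -t httpd_sys_rw_content_t "/srv/www(/.*)?"
restorecon -R /srv/www
```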

      Machines: Deletion of Virtual Networks

      The Virtual Networks section of the Machines page now supports deleting networks.

      delete virtual network

      Machines: Support more disk types

      When creating a new VM, the disk can now be on a storage pool type other than a plain file. The newly supported types are iSCSI, LVM, and physical volumes.

      VM creation with iSCSI pool

      Docker: Change menu label

      The menu label changed from “Containers” to “Docker Containers”. This avoids confusion with the “Podman Containers” page from cockpit-podman, and points out that this page really is about Docker only, not any other container technology.

      Web server: More flexible https redirection for proxies

      cockpit-ws now supports redirecting unencrypted http to https (TLS) even when running in --no-tls mode. Use this when running cockpit-ws behind a reverse http proxy that also supports https, but does not handle the redirection from http to https for itself. This is enabled with the new --proxy-tls-redirect option.
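As an illustrative sketch (the binary path is the usual Fedora location, and the surrounding reverse-proxy setup is assumed, not shown), the two options combine like this:

```shell
# Serve plain http to the local reverse proxy, but redirect any client
# that arrives over unencrypted http to the proxy's https endpoint.
/usr/libexec/cockpit-ws --no-tls --proxy-tls-redirect
```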

      Try it out

      Cockpit 198 is available now:

      Bug bounties and NDAs are an option, not the standard

      Posted by Matthew Garrett on July 09, 2019 09:15 PM
      Zoom had a vulnerability that allowed users on macOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

      a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
      b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
      c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

      (a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

      The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

      Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

      If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

      tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.

      comment count unavailable comments

      EPEL-8 Production Layout

      Posted by Stephen Smoogen on July 09, 2019 05:24 PM

      EPEL-8 Production Layout

      TL; DR:

      1. EPEL-8 will have a multi-phase roll-out into production.
      2. EPEL-8.0 will build using existing grobisplitter in order to use a ‘flattened’ build system without modules.
      3. EPEL-8.1 will start in staging without grobisplitter and using default modules via mock.
      4. The staging work will allow for continual development changes in koji, ‘ursa-prime’, and MBS functionality to work without breaking Fedora 31 or initial EPEL-8.0 builds.
      5. EPEL-8.1 looks to be ready by November 2019, after Fedora 31, around the time that RHEL-8.1 may release (if it uses a 6-month cadence).

      Multi-phase roll-out

      As documented elsewhere, EPEL-8 has been rolling out slowly due to the many changes in RHEL and in the Fedora build system since EPEL-7 was initiated in 2014. Trying to roll out an EPEL-8 that was ‘final’ from the start proved too prone to failure, as we keep having to change plans to match reality.

      We will instead roll out EPEL-8 in a multi-phase release cycle. Each phase should bring greater functionality for developers and consumers. On the flip side, we may have to change expectations over that time about what can and cannot be delivered inside of EPEL.

      1. 8.0 will be ‘minimal viability’. Due to un-shipped development libraries and the lack of rebuilt modules, not all packages will be able to build. Only non-modular RPMs that rely solely on ‘default’ modules will work. Packages must also rely only on what is shipped in the RHEL-8.0 BaseOS/AppStream/CodeReadyBuilder channels, not on any ‘unshipped -devel’ packages.
      2. 8.1 will add ‘minimal modularity’. Instead of using a flattened build system, we will look at updating koji to have basic knowledge of modularity, use a tool to tag in packages from modules as needed, and possibly add the Module Build System (MBS) in order to ship modules.
      3. 8.2 will finish adding the Module Build System and will enable gating and CI in the workflow so that packages can be tested faster.

      Because each phase changes how EPEL is produced, there may need to be mass rebuilds between phases. There will also be changes in policies about which packages are allowed in EPEL and how they are allowed in.

      Problems with koji, modules and mock

      If you want to build packages in mock, you can set many controls in /etc/mock/foo.cfg to turn modules on and off as needed, enabling, say, the javapackages-tools or virt-devel module so that packages like libssh2-devel or javapackages-local are available. However, koji does not allow this control per channel, because it is meant to completely control what goes into a buildroot. Every build records which packages were used to build an artifact, and koji creates a special mock config file to pull in exactly those items. This allows a high level of auditability: you can confirm that the package stored is the package built, and what was used to build it.
      For building an operating system like Fedora or Red Hat Enterprise Linux (RHEL), this works great, because you can show how things were done when debugging something 2-3 years later. However, when koji does not ‘own’ the life-cycle of packages, this becomes problematic. In building EPEL, the RHEL packages are given to the buildroot via external repositories, so koji does not fully know the life-cycle of the packages it pulls into the buildroot. In its basic mode it chooses packages it has built or knows about first, then packages from the buildroot, and if external packages conflict it tries to choose the one with the highest epoch-version-release-timestamp so that only the newest version is used. (If the timestamps are the same, it tends to refuse both packages.)
      An improvement to this was adding code to mergerepo that allows dnf to choose which packages to use between repositories. This allows mock’s dnf to pull in modules without the repositories having been mangled or ‘flattened’ as with grobisplitter. However, it is not a complete story. For DNF to know which modules to pull in, it needs a platform variable set (for Fedora releases something like f30, and for RHEL el8). Koji doesn’t know how to do this, so the workaround would be to set it in the build system’s /etc/mock/site-defaults.cfg, but that would affect all builds and would cause problems for building Fedora on the same build system.
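For example, the per-chroot module controls mentioned above might look like this in a mock config; this is a sketch assuming mock's module_enable option, with module stream names omitted:

```python
# Fragment of a hypothetical /etc/mock/epel-8-x86_64.cfg (mock configs
# are Python). Enables the modules whose packages EPEL builds need.
config_opts['module_enable'] = ['javapackages-tools', 'virt-devel']
```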


      A second initiative to deal with building with modules was to try to take modules out of the equation completely. Since a module is a virtual repository embedded in a real one, you should be able to pull them apart and make new ones. Grobisplitter was designed to do this, to help get CentOS-8 ready and also to allow EPEL to bootstrap using a minimal buildset. While working on this, we found that we also needed parts of the ‘--bare’ koji work, because certain python packages have the same src.rpm name-version but different releases, which koji would otherwise kick out.
      Currently grobisplitter does not put any information into the modules it ‘spits’ out about which module they came from. This will affect building when dnf starts seeing metadata in individual rpms saying ‘this is part of a module and needs to be installed as such’.

      Production plans

      We are trying to determine which tool will work better long term in order to make EPEL-8.0 and EPEL-8.1 work.


      Start Date   End Date     Work Planned            Party Involved
      2019-07-01   2019-07-05   Lessons Learned         Smoogen, Mohan
      2019-07-08   2019-07-12   Release Build work      Mohan, Fenzi
      2019-07-08   2019-07-12   Call for packages       Smoogen
      2019-07-15   2019-07-19   Initial branching       Mohan, Dawson
      2019-07-22   2019-07-31   First branch/test       Dawson, et al
      2019-08-01   2019-08-01   EPEL-8.0 GA             EPEL Steering Committee
      2019-08-01   2019-08-08   Lessons Learned         Smoogen, et al
      2019-08-01   2019-08-08   Revise documentation    Smoogen, et al
      2019-09-01   2019-09-01   Bodhi gating turned on  Mohan

      EPEL-8.0 Production Breakout

      1. Lessons Learned. Document the steps and lessons learned from the previous time frame. Because previous EPEL spin-ups happened multiple years apart, what was known is forgotten and has to be relearned. By capturing it, we hope that EPEL-9 does not take as long.
      2. Documentation. Write documents on what was done to set up the environment and what is expected in the next phase (how to branch to EPEL-8, how to build with EPEL-8, dealing with unshipped packages, an updated FAQ).
      3. Call for Packages. Go over the steps that packagers need to follow to get packages branched to EPEL-8.
      4. Release Build Work. Set up the builders and environment in production. Most of the steps repeat what was done in staging, with additional work in bodhi to make signing and composes work.
      5. Initial Branching. Branch and build the first set of packages needed for EPEL-8: epel-release, epel-rpm-macros, fedpkg-minimal, fedpkg (and all the things needed for it).
      6. First Branch. Going over the various tickets for EPEL-8 packages, a reasonable sample will be branched. Work will be done with the packagers on problems they find. This will continue as needed.
      7. EPEL-8.0 GA. Branching can follow normal processes.
      8. Lessons Learned. Go over problems and feed them into other groups’ backlogs.
      9. Documentation. Update previous documents and add any that were found to be needed.


      Start Date   End Date   Work Planned                         Party Involved
      2019-07-01   2019-07-05 Lessons Learned                      Fenzi, Contyk, et al
      2019-07      ???        Groom Koji changes needed            ???
      2019-07      ???        Write/Test Koji changes needed       ???
      2019-07      ???        Non-modular RPM in staging           ???
      2019-07      ???        MBS in staging                       ???
      2019-08      ???        Implement Koji changes               ???
      2019-08      ???        Implement bodhi compose in staging   ???
      2019-09      ???        Close off 8.1 beta                   ???
      2019-09      ???        Lessons learned                      ???
      2019-09      ???        Begin changes in prod                ???
      2019-10      ???        Open module builds in EPEL           ???
      2019-11      ???        EPEL-8.1 GA                          EPEL Steering Committee
      2019-11      ???        Lessons Learned                      ???
      2019-11      ???        Revise documentation                 ???

      EPEL-8.1 Production Breakout

      This follows the staging and production of 8.0, with additional work to make modules usable in builds. Most of these dates and layers need to be filled out in future meetings. The main work will be adding a program code-named ‘Ursa-Prime’ to help build non-modular rpms using modules as dependencies. This will allow grobisplitter to be replaced with a program that has long-term maintenance.

      I no longer recommend FreeIPA

      Posted by William Brown on July 09, 2019 02:00 PM

      I no longer recommend FreeIPA

      It’s probably taken me a few years to write this, but I can no longer recommend FreeIPA for IDM installations.

      Why not?

      The FreeIPA project focused on Kerberos and SSSD, with enough other parts glued on to look like a complete IDM project. That's fine, but it means that concerns in other parts of the project are largely ignored, leading to design decisions that are not scalable or robust.

      Due to these decisions IPA has stability issues and scaling issues that other products do not.

      To be clear: security systems like IDM or LDAP can never go down. That’s not acceptable.

      What do you recommend instead?

      • Samba with AD
      • AzureAD
      • 389 Directory Server

      All of these projects are reliable, secure, and scalable. We have done a lot of work on 389 to improve its out-of-box IDM capabilities, though there is more to be done. The Samba AD team have done great things as well, and deserve a lot of respect for it.

      Is there more detail than this?

      Yes - buy me a drink and I’ll talk :)

      Didn’t you help?

      I tried and it was not taken on board.

      So what now?

      Hopefully in the next year we’ll see new open source IDM projects released that take totally different approaches to the legacy we currently ride upon.

      Red Hat, IBM, and Fedora

      Posted by Fedora Magazine on July 09, 2019 12:51 PM

      Today marks a new day in the 26-year history of Red Hat. IBM has finalized its acquisition of Red Hat, which will operate as a distinct unit within IBM.

      What does this mean for Red Hat’s participation in the Fedora Project?

      In short, nothing.

      Red Hat will continue to be a champion for open source, just as it always has, and valued projects like Fedora will continue to play a role in driving innovation in open source technology. IBM is committed to Red Hat’s independence and role in open source software communities. We will continue this work and, as always, we will continue to help upstream projects be successful and contribute to welcoming new members and maintaining the project.

      In Fedora, our mission, governance, and objectives remain the same. Red Hat associates will continue to contribute to the upstream in the same ways they have been.

      We will do this together, with the community, as we always have.

      If you have questions or would like to learn more about today’s news, I encourage you to review the materials below. For any questions not answered here, please feel free to contact us. Red Hat CTO Chris Wright will host an online Q&A session in the coming days where you can ask questions you may have about what the acquisition means for Red Hat and our involvement in open source communities. Details will be announced on the Red Hat blog.


      Matthew Miller, Fedora Project Leader
      Brian Exelbierd, Fedora Community Action and Impact Coordinator

      Call for Fedora Women’s Day 2019 proposals

      Posted by Fedora Community Blog on July 09, 2019 08:30 AM

      Fedora Women’s Day (FWD) is a day to celebrate and bring visibility to female contributors in open source projects, including Fedora. This event is headed by Fedora’s Diversity and Inclusion Team.

      During the month of September, in collaboration with other open source communities, women in tech groups and hacker spaces, we plan to organize community meetups and events around the world to highlight and celebrate the women in open source communities like Fedora and their invaluable contributions to their projects and community.

      These events also provide a good opportunity for women worldwide to learn about free and open source software and jump start their journey as a FOSS user and/or a contributor.  They also provide a platform for women to connect, learn and be inspired by other women in open source communities and beyond.

      We look forward to applications for organizing FWD 2019. Go ahead and submit a proposal, and help us organize this event in locations around the world.

      Important dates:

      Deadline for submission – Friday, 23 August, 2019

      Acceptance deadline – Friday, 6 September, 2019

      Suggested dates for FWD: September-October 2019

      Note: The Diversity and Inclusion team is flexible on dates for organizing FWD; the event can be held on any date throughout September and October. Proposals will be reviewed on a rolling basis, so don’t wait until the deadline to submit a spectacular proposal.

      Who can be a part of FWD:

      • We welcome all organizers and attendees whose values align with our mission and goals for the event, irrespective of their genders and backgrounds.
      • The Diversity and Inclusion team is especially eager to have more organizers and participants from under-represented groups and areas.

      Why should you organize a FWD in your local community

      • It lets you share your knowledge, as FOSS still remains an underutilized area.
      • You can spread the awesomeness that FOSS has provided you.
      • You might find a fellow contributor to work with or engage locally.
      • You will win lots of goodies and love from the Fedora community.
      • You get a lot of freedom in arranging your very own event, and can cultivate your leadership and creativity skills as you plan and organize a FWD in your local community.
      • You can empower women in your local community through open source tools and build their skills to contribute to a global project.

      Why should you attend a FWD in your local community

      • To learn about open source and build skills by working on related projects.
      • To connect with and gain inspiration from talented women.
      • To get started with open source contributions.
      • To get some surprise goodies.

      Steps to organize a FWD event:

      Cannot find a FWD in your region? Organize one! It’s simple with the steps below.

      Identify your goals:

      • Find out the interests of your local community and plan interactive sessions; these can be workshops or hackathons.
      • Do they know what open source is? If not, you can use your event to create awareness of open source software; more direction toward Fedora-related topics would be nice.
      • Are they interested in contributing to open source? Make your content more contribution-specific, and follow up with participants after the event to help them get started or make progress on what they have begun. You can also organize follow-up sessions if required.
      • Are they interested in networking? We can help you identify local open source contributors from your region.
      • Set some measurable goals for the event, so you can gauge its success.
      • Make sure your goals align with our motivations for FWD. Brainstorm and share ideas with us that you feel can make a real difference to the audience and help them learn to contribute to FOSS or Fedora.

      Tell us about it:

      • Please let us know on the diversity mailing list, before the deadline, if you are interested. We would be glad to support you through the whole process.
      • Fedora Women’s Day (FWD) event proposals must be submitted to the fedora-womens-day repository by Friday, 23 August, 2019.
      • You can request a budget for your event, to be reimbursed after you write an event report.

      Spread the word:

      Start early! Spread the word before and after the event. Publicize your event both locally and globally to gather as many participants as you can and maximize the impact of your efforts. You can invite fellow Fedora contributors based in your area to collaborate with you. It is important to estimate your audience well in advance, so that you can plan and request a suitable budget. After the event, give others an idea of the fun you had by taking some interesting group pictures and writing an elaborate event report. If you know someone, or a tech group, who might be interested in organizing a Fedora Women’s Day event, feel free to involve them.

      Increase your chances:

      Finally, to increase your chances of acceptance, go through our internal goals very closely. Start planning early to give yourself enough time to prepare, connect with other hackerspaces and tech communities, and enhance your proposal by involving a bigger group. Understand the needs of your audience and make a personalized proposal that fits them best. Identify the resources you might require to conduct a Fedora Women’s Day event, and let us know if you need any help.

      The post Call for Fedora Women’s Day 2019 proposals appeared first on Fedora Community Blog.

      Highest used Python code in the Pentesting/Security world

      Posted by Kushal Das on July 09, 2019 05:34 AM
      python -c 'import pty;pty.spawn("/bin/bash")'

      I think this is the most-used Python program in the land of pentesting/security. Almost every blog post or tutorial I read mentions the above line as the way to get a proper terminal after gaining access to a minimal shell on a remote Linux server.

      What does this code do?

      We are calling the Python executable with -c and the Python statements inside quotes. -c executes the statements, and since we are running in non-interactive mode, Python parses the entire input before executing it.
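As a quick illustration of -c (the statement here is an arbitrary stand-in, not the pty one-liner), everything after the flag is handed to Python as one program:

```shell
# -c receives the whole program as a single argument; Python parses all
# of it before running anything, so a syntax error anywhere aborts
# immediately.
python3 -c 'import sys; print("major:", sys.version_info[0])'
```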

      The code we pass as the argument of the -c has two statements.

      import pty

      pty is a Python module that implements pseudo-terminal handling: it can spawn another process and, from the controlling terminal, read from and write to the new process.

      The pty.spawn function spawns a new process (/bin/bash in this case) and then connects IO of the new process to the parent/controlling process.
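To see the master/slave pty pair that pty.spawn() wires up, here is a small self-contained sketch; it runs a trivial command (echo, standing in for /bin/bash) attached to a pseudo-terminal and reads what it wrote from the master side:

```python
import os
import pty
import subprocess

# Create a pseudo-terminal pair: the child talks to the slave end as if
# it were a real terminal; we observe it from the master end.
master, slave = pty.openpty()
subprocess.run(["echo", "hello from a pty"],
               stdin=slave, stdout=slave, stderr=slave, close_fds=True)
os.close(slave)  # close our copy so reads on the master can finish

output = os.read(master, 1024).decode()
os.close(master)
# The terminal line discipline rewrites "\n" as "\r\n" on output.
print(output.strip())
```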

      demo of getting bash

      In most cases, even though you get access to bash using the way mentioned above, TAB completion still does not work. To enable it, press Ctrl+Z to suspend the process, and then use the following command in your local terminal.

      stty raw -echo

      stty changes terminal line settings and is part of the GNU coreutils package. To read about the options we set with raw -echo, see the man page of stty.
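Put together, the usual upgrade sequence looks like this (a sketch of an interactive session, not a script to paste blindly; the TERM value is one common choice):

```shell
# 1. On the remote minimal shell, upgrade to a pty:
#      python3 -c 'import pty;pty.spawn("/bin/bash")'
# 2. Press Ctrl+Z to suspend the session locally, then:
stty raw -echo   # stop your local terminal from echoing/mangling keys
fg               # resume the remote shell
# 3. Optionally, on the remote side:
#      export TERM=xterm
```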

      Many years ago, I watched a documentary about security firms showcasing offensive attacks; that was the first time I saw them using Python scripts to send in the payload and exploit remote systems. Now I am using similar scripts in the lab to learn and have fun with Python. It is a new world for me, but it also shows the diverse world we serve via Python.

      Fedora 30 : Using the python-wikitcms.

      Posted by mythcat on July 08, 2019 04:35 PM
      This Python module, named python-wikitcms, can be used for interacting with the Fedora wiki, which serves as Fedora's Wikitcms (wiki-based test case management system).
      Today I tested it, and it works great with Fedora 30.
      First, install the Fedora package with the DNF tool:
      [root@desk mythcat]# dnf install python3-wikitcms.noarch
      Downloading Packages:
      (1/8): python3-mwclient-0.9.3-3.fc30.noarch.rpm 186 kB/s | 61 kB 00:00
      (2/8): python3-fedfind-4.2.5-1.fc30.noarch.rpm 314 kB/s | 105 kB 00:00
      (3/8): python3-cached_property-1.5.1-3.fc30.noa 41 kB/s | 20 kB 00:00
      (4/8): python3-requests-oauthlib-1.0.0-1.fc29.n 313 kB/s | 40 kB 00:00
      (5/8): python3-jwt-1.7.1-2.fc30.noarch.rpm 112 kB/s | 42 kB 00:00
      (6/8): python3-oauthlib-2.1.0-1.fc29.noarch.rpm 293 kB/s | 153 kB 00:00
      (7/8): python3-simplejson-3.16.0-2.fc30.x86_64. 641 kB/s | 278 kB 00:00
      (8/8): python3-wikitcms-2.4.2-2.fc30.noarch.rpm 264 kB/s | 84 kB 00:00
      I used this simple example to get information about the Fedora wiki:
      [mythcat@desk ~]$ python3
      Python 3.7.3 (default, May 11 2019, 00:38:04)
      [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)] on linux
      Type "help", "copyright", "credits" or "license" for more information.
      >>> from wikitcms.wiki import Wiki
      >>> my_site = Wiki()
      >>> event = my_site.current_event
      >>> print(event.version)
      31 Rawhide 20190704.n.1
      >>> page = my_site.get_validation_page('Installation','23','Final','RC10')
      >>> for row in page.get_resultrows():
      ...     print(row.testcase)
      >>> dir(my_site)
      I used this source code to login with my account.
      >>> my_site.login()
      A web page opens to grant access to the account and shows this info:
      The OpenID Connect client Wiki Test Control Management System is asking to authorize access for mythcat. this allow you to access it 
      After I agree with this the page tells me to close it:
      You can close this window and return to the CLI
      The next examples show how to query and display information from the wiki:
      >>> print(my_site.username)
      >>> result = my_site.api('query', titles='Mythcat')
      >>> for page in result['query']['pages'].values():
      ...     print(page['title'])
      >>> for my_contributions in my_site.usercontributions('Mythcat'):
      ...     print(my_contributions)
      This Python module comes with sparse documentation.
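      The my_site.api('query', titles=...) call above returns the standard MediaWiki API response shape that python-wikitcms passes through. As a minimal sketch, here is how that nested structure can be walked offline, with a hard-coded sample response standing in for the live wiki (the page id and title here are made up for illustration):

      ```python
      # Sample response in the shape returned by my_site.api('query', titles=...).
      # Hard-coded so the traversal can be shown without a network call or login.
      sample_result = {
          "query": {
              "pages": {
                  "12345": {"pageid": 12345, "title": "Mythcat"},
              }
          }
      }

      def page_titles(result):
          """Collect the page titles from a MediaWiki 'query' API response."""
          return [page["title"] for page in result["query"]["pages"].values()]

      print(page_titles(sample_result))  # ['Mythcat']
      ```

      The same loop works unchanged on a real response from my_site.api('query', titles='Mythcat'), since pages are keyed by page id and each value carries a title field.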

      The state of open source GPU drivers on Arm in 2019

      Posted by Peter Robinson on July 08, 2019 03:27 PM

      I first blogged about the state of open source drivers for Arm GPUs 7 years ago, in January 2012, and then again in September 2017. I’ve had a few requests since then to provide an update, but I held off because there had been no real change in the last few years, that is, until now!

      The good news

      So the big positive change is that there are two new open drivers on the scene: Panfrost and Lima. Panfrost is a reverse-engineered driver for the newer Midgard and Bifrost series of Mali GPUs designed/licensed by Arm, whereas Lima is aimed at the older Utgard series Mali 4xx devices. Panfrost, started by Alyssa Rosenzweig and now backed by quite a large contributor base, has been coming along in leaps and bounds over the last few months, and by the time Mesa 19.2 is out I suspect it should be able to run gnome-shell on an initial set of devices. I’m less certain of the state of Lima. Both drivers landed in the kernel in the 5.2 development cycle, which Linus just released. On the userspace side they landed in the Mesa 19.1 development cycle, and they’ve been improving greatly in the 19.2 cycle. Of course they’re all enabled in Fedora Rawhide, although I don’t expect them to be really testable until later in the 19.2 cycle; still, that makes it easy for early adopters who know what they’re doing to start to play.

      A decent open source driver for Arm’s Mali GPUs had been the biggest remaining holdout in the Arm ecosystem, and it covers a lot of the cheaper end of the SBC market: many Allwinner and some Rockchip SoCs ship the Mali 4xx series of hardware, which will use the Lima driver, while other low- to mid-range hardware ships with the newer Midgard Mali GPUs, as in the Rockchip RK3399 SoC.

      Other general updates

      Since I last wrote, the freedreno (Qualcomm Adreno) and etnaviv (Vivante GCxxx series) drivers have continued to improve and add support for newer hardware. The vc4 open driver for the Raspberry Pi 0-3 generations has seen gradual improvement over time, and there’s a new open v3d driver for the Raspberry Pi 4, which uses it from the outset.

      The last driver is one that seems to have transitioned into limbo: the driver for the Nvidia Tegra Arm platform. There’s an open driver for the display controller, and the GPU mostly works with the nouveau driver, at least on the 32-bit Tegra K1 (the upstream state of the Tegra X series is definitely for another post). But Nvidia appears to have yet another driver, not their closed x86 driver but a separate one (not the latest rev, which is 4.9 based, but the only linkable version I could find), which is needed to do anything fun from a CUDA/AI/ML point of view. I wonder how that will fit with their commitment to support Arm64 for their HPC stack, or whether Arm64 will only be interesting to them for PCIe/fabric-attached discrete cards in HPC supercomputer deals.

      That brings me to OpenCL and Vulkan for all the drivers above. For the vast majority of the open drivers, support for either is basically non-existent or in the very early stages of development, so for the time being I’m going to leave that for another follow-up in this long-winded series, probably when there’s something of note to report. The other thing that is looking quite good, but is one for another post, is video acceleration offload; there’s been quite a bit of recent movement there too.