Fedora People

Fedora rawhide – fixed bugs 2019/07

Posted by Filipe Rosset on September 21, 2019 10:33 PM

Bug 1720880 – irsim-9.7.104 is available

Updated highlight to new 3.52 upstream version

Bug 1726367 – dgit-9.0 is available

Updated gbrainy to new 2.4.1 upstream release

Bug 1668945 – bonnie++-1.98 is available

Bug 1724390 – plexus-interpolation-1.26 is available

Bug 1220151 – springlobby-0.264 is available

Bug 1524335 – CVE-2017-17459 fossil: Command injection via malicious ssh URLs [fedora-all]

Bug 1581180 – Update fossil version to 2.6 (currently is 2.2)

Bug 1603993 – fossil: FTBFS in Fedora rawhide

Bug 1674893 – fossil: FTBFS in Fedora rawhide/f30

Bug 1603998 – free42: FTBFS in Fedora rawhide

Bug 1674894 – free42: FTBFS in Fedora rawhide/f30

Bug 1606760 – xss-lock: FTBFS in Fedora rawhide

Bug 1676252 – xss-lock: FTBFS in Fedora rawhide/f30

Bug 1606745 – xmltool: FTBFS in Fedora rawhide

Bug 1676234 – xmltool: FTBFS in Fedora rawhide/f30

Bug 1423279 – blobwars: FTBFS in rawhide

Bug 1603496 – blobwars: FTBFS in Fedora rawhide

Bug 1674703 – blobwars: FTBFS in Fedora rawhide/f30

Bug 1284212 – blobwars-2.00 is available

Bug 1674622 – TeXamator: FTBFS in Fedora rawhide/f30

Bug 1509743 – primer3-2.4.0 is available

Bug 1675685 – primer3: FTBFS in Fedora rawhide/f30

Bug 1606905 – primer3: FTBFS in Fedora rawhide

Bug 1727619 – dgit-9.2 is available

Bug 1674833 – dvb-apps: FTBFS in Fedora rawhide/f30

Bug 1674753 – comedilib: FTBFS in Fedora rawhide/f30

Bug 1554434 – comedilib-0.8.1-23.fc29: FTBFS – libcomedi.so: undefined reference to `minor'

Bug 1603247 – 0xFFFF: FTBFS in Fedora rawhide

Bug 1674565 – 0xFFFF: FTBFS in Fedora rawhide/f30

Bug 1603359 – ailurus: FTBFS in Fedora rawhide

Bug 1674638 – ailurus: FTBFS in Fedora rawhide/f30

Bug 1603437 – audio-convert-mod: FTBFS in Fedora rawhide

Bug 1674671 – audio-convert-mod: FTBFS in Fedora rawhide/f30

Bug 1604101 – giada: FTBFS in Fedora rawhide

Bug 1674963 – giada: FTBFS in Fedora rawhide/f30

Bug 1703719 – giada: update to 0.15.4 (rawhide)

Bug 1237195 – release field does not include dist tag

Bug 1579359 – xml2dict has no dist tag

Bug 1676233 – xml2dict: FTBFS in Fedora rawhide/f30

Bug 1676232 – xhotkeys: FTBFS in Fedora rawhide/f30

Bug 1726401 – apache-commons-daemon-1.2.0 is available

Bug 1606791 – zhu3d: FTBFS in Fedora rawhide

Bug 1606454 – surl: FTBFS in Fedora rawhide

Bug 1676094 – surl: FTBFS in Fedora rawhide/f30

Bug 1674889 – fluxbox: FTBFS in Fedora rawhide/f30

Bug 1603948 – fbdesk: FTBFS in Fedora rawhide

Bug 1674873 – fbdesk: FTBFS in Fedora rawhide/f30

Bug 1604324 – hfsutils: FTBFS in Fedora rawhide

Bug 1665687 – /usr/bin/hfs: line 3: exec: hfssh: not found

Bug 1675101 – hfsutils: FTBFS in Fedora rawhide/f30

Bug 1606734 – xfce-theme-manager: FTBFS in Fedora rawhide

Bug 1676230 – xfce-theme-manager: FTBFS in Fedora rawhide/f30

Bug 1606635 – varconf: FTBFS in Fedora rawhide

Bug 1676181 – varconf: FTBFS in Fedora rawhide/f30

Bug 1473919 – 2ping-4.2 is available

Bug 1604816 – mercator: FTBFS in Fedora rawhide

Bug 1675364 – mercator: FTBFS in Fedora rawhide/f30

Bug 1615067 – vttest-20180811 is available

Bug 1676196 – vttest: FTBFS in Fedora rawhide/f30

Bug 1606658 – vttest: FTBFS in Fedora rawhide

Bug 1606590 – tunneler: FTBFS in Fedora rawhide

Bug 1676166 – tunneler: FTBFS in Fedora rawhide/f30

Bug 1417333 – zsh-lovers-0.9.1 is available

Bug 1676201 – warmux: FTBFS in Fedora rawhide/f30

Update homebank to 5.2.7

umit – Fix FTBFS, drop sphinx doc support, spec cleanup and modernization

packETH – Update to 2.0 + spec cleanup and modernization

Closed bugs:
Bug 1307937 – python-pylons: FTBFS in rawhide

Bug 1245623 – python-pylons-1.0.3 is available

Bug 1449083 – CVE-2017-8825 libetpan: NULL pointer dereference in the MIME handling component [fedora-all]

Bug 1606913 – libetpan: FTBFS in Fedora rawhide

Bug 1668487 – libetpan-1.9.3 is available

Bug 1603316 – Ray: FTBFS in Fedora rawhide

Bug 1674619 – Ray: FTBFS in Fedora rawhide/f30

Bug 1606908 – tinyxpath: FTBFS in Fedora rawhide

FPgM report: 2019-38

Posted by Fedora Community Blog on September 20, 2019 08:58 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week.

I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


  • Orphaned packages seeking maintainers
  • The Red Hat Summit CfP is open through 22 October.
  • dist-git changes are coming that will enable self-service orphaned package adoption and simplify Anitya configuration changes.
  • Yahoo is rejecting @fedoraproject.org email (see http://smoogespace.blogspot.com/2019/09/attention-fedora-yahoo-email-users.html).

Help wanted

Upcoming meetings & test days

Fedora 31


  • 8 October — Final freeze begins
  • 22 October — Final release preferred target


Blocker bugs

Bug ID  | Blocker status   | Component            | Bug status
1749433 | Accepted (final) | mutter               | NEW
1751673 | Accepted (final) | gdm                  | NEW
1750394 | Accepted (final) | gnome-control-center | NEW
1750805 | Accepted (final) | gnome-control-center | NEW
1751438 | Accepted (final) | LiveCD               | NEW
1728240 | Accepted (final) | sddm                 | NEW
1750036 | Accepted (final) | selinux-policy       | POST
1752249 | Accepted (final) | dnf                  | ASSIGNED
1753985 | Proposed (final) | gnome-boxes          | NEW
1753558 | Proposed (final) | gnome-calendar       | NEW
1753191 | Proposed (final) | gnome-shell          | NEW
1753337 | Proposed (final) | gnome-shell          | NEW
1749868 | Proposed (final) | gnome-software       | NEW

Fedora 32



Submitted to FESCo

The post FPgM report: 2019-38 appeared first on Fedora Community Blog.

Searching simple or complex strings in text files using grep with regular expression

Posted by Fedora Community Blog on September 20, 2019 03:00 PM

As a Linux programmer or system admin, it is common to search through text for a given sequence of characters (such as a word or phrase), called a string, or even for a pattern describing a set of such strings; this article contains a few hands-on examples of these types of tasks. We first review how grep works on Linux while walking through a few basic string searches, then dive into more complex searches using grep with regular expressions.

Searching for a Word or Phrase with Grep Command

The primary command used for searching through text is a tool called grep. It outputs lines of its input that contain a given string or pattern.

To search for a word, give that word as the first argument. By default, grep searches standard input; give the name of a file to search as the second argument.

To output lines in the file ‘catalog’ containing the word ‘boy’, type:

$ grep boy catalog

To search for a phrase, specify it in quotes.

To output lines in the file ‘book’ containing the phrase ‘Java Coding’, type:

$ grep 'Java Coding' book

The preceding example outputs all lines in the file ‘book’ that contain the exact string ‘Java Coding’; it will not match, however, lines containing ‘java coding’ or any other variation on the case of letters in the search pattern. Use the ‘-i’ option to specify that matches are to be made regardless of case.

 To output lines in the file ‘book’ containing the string ‘java coding’ regardless of the case of its letters, type:

$ grep -i 'java coding' book

This command outputs lines in the file ‘book’ containing any variation of the pattern ‘java coding’, including ‘java coding’, ‘JAVA CODING’, and ‘jaVA coDIng’.

One thing to remember is that grep only matches patterns that appear on a single line, so in the preceding example, if one line in ‘book’ ends with the word ‘java’ and the next begins with ‘coding’, grep will not match either line.
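
A quick way to convince yourself of this line-at-a-time behaviour is a throwaway demo (the file name and contents below are made up for illustration):

```shell
# Create a two-line sample file whose phrase spans a line break
printf 'enjoy java\ncoding daily\n' > /tmp/book_demo

# grep only matches within a single line, so the phrase is not found
grep -i 'java coding' /tmp/book_demo || echo 'no match'   # prints: no match

# Each word still matches on its own line; -c counts matching lines
grep -ic 'java' /tmp/book_demo                            # prints: 1
```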

You can specify more than one file to search. When you specify multiple files, each match that grep outputs is preceded by the name of the file it is in (you can suppress this with the ‘-h’ option). A good knowledge of the Linux filesystem is helpful for navigating to the right files and directories.

To output lines in all of the files in the current directory containing the word ‘JAVA’, type:

$ grep JAVA *

To output lines in all of the ‘.txt’ files in the ‘~/doc’ directory containing the word ‘Java’, suppressing the listing of file names in the output, type:

$ grep -h Java ~/doc/*.txt
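
The file-name prefix, and the effect of ‘-h’, can be seen in a self-contained sketch (the /tmp paths and file contents are hypothetical):

```shell
mkdir -p /tmp/grepdocs
printf 'Java basics\n' > /tmp/grepdocs/a.txt
printf 'Advanced Java\n' > /tmp/grepdocs/b.txt

# With multiple files, each match is prefixed with its file name
grep Java /tmp/grepdocs/*.txt
# /tmp/grepdocs/a.txt:Java basics
# /tmp/grepdocs/b.txt:Advanced Java

# -h suppresses the file-name prefix
grep -h Java /tmp/grepdocs/*.txt
# Java basics
# Advanced Java
```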

Use the ‘-r’ option to search a given directory recursively, searching all subdirectories it contains.

To output lines containing the word ‘Java’ in all of the ‘.txt’ files in the ‘~/doc’ directory and in all of its subdirectories, type:

$ grep -r --include='*.txt' Java ~/doc

Grep Command with Regular Expressions

In addition to word and phrase searches, you can use grep to search for complex text patterns called regular expressions. A regular expression—or “regexp”—is a text string of special characters that specifies a set of patterns to match.

Technically speaking, the word or phrase patterns described in the previous section are regular expressions—just very simple ones. In a regular expression, most characters—including letters and numbers—represent themselves. For example, the regexp pattern 1 matches the string ‘1’, and the pattern boy matches the string ‘boy’.

There are a number of reserved characters called metacharacters that do not represent themselves in a regular expression; instead, they have special meanings used to build complex patterns. These metacharacters are: ., *, [, ], ^, $, and \. Note that these metacharacters behave the same across essentially all Linux distributions. Here is a good article that covers the special meanings of the metacharacters and gives examples of their usage.
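
As a small illustration of the anchors ‘^’ and ‘$’ and the ‘.’ wildcard, assuming a made-up sample file:

```shell
printf 'boy\nbuy\nboyhood\ncowboy\n' > /tmp/catalog_demo

grep -c '^boy' /tmp/catalog_demo   # lines starting with 'boy' -> 2 (boy, boyhood)
grep -c 'boy$' /tmp/catalog_demo   # lines ending with 'boy'   -> 2 (boy, cowboy)
grep -c 'b.y'  /tmp/catalog_demo   # '.' matches any one char  -> 4 (all lines)
```

Counting with ‘-c’ keeps the demo compact; drop it to print the matching lines instead.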

 To specify one of these literal characters in a regular expression, precede the character with a ‘\’.

 To output lines in the file ‘book’ that contain a ‘$’ character, type:

$ grep '\$' book

To output lines in the file ‘book’ that contain the string ‘$14.99’, type:

$ grep '\$14\.99' book

 To output lines in the file ‘book’ that contain a ‘\’ character, type:

$ grep '\\' book


In this article, we reviewed how to search for strings in text files on Linux using the grep command. We also discussed how to combine the power of regular expressions with grep to run complex string searches.

The post Searching simple or complex strings in text files using grep with regular expression appeared first on Fedora Community Blog.

Managing network interfaces and FirewallD in Cockpit

Posted by Fedora Magazine on September 20, 2019 09:30 AM

In the last article, we saw how Cockpit can manage storage devices. This article will focus on the networking functionalities within the UI. We’ll see how to manage the interfaces attached to the system in Cockpit. We’ll also look at the firewall and demonstrate how to assign a zone to an interface, and allow/deny services and ports.

To access these controls, verify the cockpit-networkmanager and cockpit-firewalld packages are installed.

To start, log into the Cockpit UI and select the Networking menu option. As is consistent with the UI design, we see performance graphs at the top and a summary of the logs at the bottom of the page. Between them are the sections to manage the firewall and interface(s).

<figure class="aligncenter"></figure>


Cockpit’s firewall configuration page works with FirewallD and allows admins to quickly configure these settings. The page has options for assigning zones to specific interfaces, as well as a list of services configured to those zones.

Adding a zone

Let’s start by assigning a zone to an available interface. First, click the Add Zone button. From here you can select a pre-configured or custom zone. Selecting one of the zones will display a brief description of that zone, as well as the services, or ports, allowed, or opened, in that zone. Select the interface you want to assign the zone to. Also, there’s the option to apply the rules to the Entire Subnet, or you can specify a Range of IP addresses. In the example below, we add the Internal zone to an available network card. The IP range can also be configured so the rule is only applied to the specified addresses.

<figure class="aligncenter"></figure>

Adding and removing services/ports

To allow network access to services, or open ports, click the Add Services button. From here you can search (or filter) for a service, or manually enter the port(s) you would like to open. Selecting the Custom Ports option provides options to enter the port number or alias into the TCP and/or UDP fields. You can also provide an optional name to label the rule. In the example below, the Cockpit service/socket is added to the Internal zone. Once completed, click the Add Services, or Add Ports, button. Likewise, to remove the service click the red trashcan to the right, select the zone(s), and click Remove service.

For more information about using Cockpit to configure your system’s firewall, visit the Cockpit project’s Github page.

<figure class="aligncenter"></figure>


The interfaces section displays both physical and virtual/logical NICs assigned to the system. From the main screen we see the name of the interface, the IP address, and activity stats of the NIC. Selecting an interface will display IP-related information and options to configure it manually. You can also choose whether the network card comes back up after a reboot by toggling the Connect automatically option. To enable, or disable, the network interface, click the toggle switch in the top right corner of the section.

<figure class="aligncenter"></figure>


Bonding network interfaces can help increase bandwidth availability. It can also serve as a redundancy plan in the event one of the NICs fails.

To start, click the Add Bond button located in the header of the Interfaces section. In the Bond Settings overlay, enter a name and select the interfaces you wish to bond in the list below. Next, select the MAC Address you would like to assign to the bond. Now select the Mode, or purpose, of the bond: Round Robin, Active Backup, Broadcast, etc. (the demo below shows the complete list of modes).

Continue the configuration by selecting the Primary NIC, and a Link Monitoring option. You can also tweak the Monitoring Interval, and Link Up Delay and Link Down Delay options. To finish the configuration, click the Apply button. We’re taken back to the main screen, and the new bonded interface we just created is added to the list of interfaces.

From here we can configure the bond like any other interface. We can even delve deeper into the interface’s settings for the bond. As seen in the example below, selecting one of the interfaces in the bond’s settings page provides details pertaining to the interface link. There’s also an added option for changing the bond settings. To delete the bond, click the Delete button.

<figure class="aligncenter"></figure>


Teaming, like bonding, is another method used for link aggregation. For a comparison between bonding and teaming, refer to this chart. You can also find more information about teaming on the Red Hat documentation site.

As with creating a bond, click the Add Team button. The settings are similar in the sense you can give it a name, select the interfaces, link delay, and the mode or Runner as it’s referred to here. The options are similar to the ones available for bonding. By default the Link Watch option is set to Ethtool, but also has options for ARP Ping, and NSNA Ping.

Click the Apply button to complete the setup. It will also return you to the main networking screen. For further configuration, such as IP assignment and changing the runner, click the newly made team interface. As with bonding, you can click one of the interfaces in the link aggregation. Depending on the runner, you may have additional options for the Team Port. Click the Delete button from the screen to remove the team.

<figure class="aligncenter"></figure>


From the article, Build a network bridge with Fedora:

“A bridge is a network connection that combines multiple network adapters.”

One excellent example of a bridge is combining a physical NIC with a virtual interface, like the one created and used for KVM virtualization. Leif Madsen’s blog has an excellent article on how to achieve this in the CLI. This can also be accomplished in Cockpit with just a few clicks. The example below will accomplish the first part of Leif’s blog using the web UI. We’ll bridge the enp9s0 interface with the virbr0 virtual interface.

Click the Add Bridge button to launch the settings box. Provide a name and select the interfaces you would like to bridge. To enable Spanning Tree Protocol (STP), click the box to the right of the label. Click the Apply button to finalize the configuration.

As is consistent with teaming and bonding, selecting the bridge from the main screen will display the details of the interface. As seen in the example below, the physical device takes control and the virtual interface will adopt that device’s IP address.

Select the individual interface in the bridge’s detail screen for more options. And once again, click the Delete button to remove the bridge.

<figure class="aligncenter"></figure>

Adding VLANs

Cockpit allows admins to create VLANs, or virtual networks, using any of the interfaces on the system. Click the Add VLAN button and select an interface in the Parent drop-down list. Assign the VLAN ID and, if you like, give it a new name. By default the name will be the same as the parent followed by a dot and the ID; for example, interface enp11s0 with VLAN ID 9 will result in enp11s0.9. Click Apply to save the settings and to return to the networking main screen. Click the VLAN interface for further configuration. As always, click the Delete button to remove the VLAN.

<figure class="aligncenter"></figure>

As we can see, Cockpit can help admins with common network configurations when managing the system’s connectivity. In the next article, we’ll explore how Cockpit handles user management and peek into the add-on 389 Directory Servers.

Running a non-root container on Fedora with podman and systemd (Home Assistant example)

Posted by Christopher Smart on September 20, 2019 01:34 AM

Similar to my post about running Home Assistant on Fedora in Docker, this is about using podman instead and integrating the container as a service with systemd. One of the major advantages to me is the removal of the Docker daemon and integration with the rest of the system, including management of dependencies like regular services.

This assumes you’ve just installed Fedora server and have a local user with sudo privileges. Let’s also install some SELinux tools.

sudo dnf install -y /usr/sbin/semanage

Create non-root user

Let’s create a specific user to run the Home Assistant service.

We could create a regular user (and remove password expiry settings), but as this is a service let’s create a system account even though it’s a bit more tricky.

sudo useradd -r -m -d /var/lib/hass hass

As this is a system account, we’ll need to manually specify sub user and group ids that the account is allowed to use inside the container. We work out what range is available by looking at /etc/subuid and /etc/subgid files on the host, ideally UID and GID should be the same.

NEW_SUBUID=$(($(tail -1 /etc/subuid |awk -F ":" '{print $2}')+65536))
NEW_SUBGID=$(($(tail -1 /etc/subgid |awk -F ":" '{print $2}')+65536))
sudo usermod \
--add-subuids ${NEW_SUBUID}-$((${NEW_SUBUID}+65535)) \
--add-subgids ${NEW_SUBGID}-$((${NEW_SUBGID}+65535)) \
hass
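
The arithmetic above just takes the start of the last allocated range and claims the next 65536-ID block. Here is a standalone sketch of the same computation against a hypothetical copy of /etc/subuid:

```shell
# Hypothetical subuid file with one existing allocation
printf 'user1:100000:65536\n' > /tmp/subuid_demo

# Next free start = last range start + 65536 (the default range length)
NEW_SUBUID=$(($(tail -1 /tmp/subuid_demo | awk -F ':' '{print $2}') + 65536))

# The range handed to usermod covers 65536 IDs
echo "${NEW_SUBUID}-$((NEW_SUBUID + 65535))"   # prints: 165536-231071
```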

Inside the hass user’s home directory, create a config directory to store configuration files and ssl to store SSL certificates. These will be mapped into the container as /config and /ssl respectively. We will also set the appropriate SELinux context so that the directories can be accessed in the container.

sudo -H -u hass bash -c "mkdir ~/{config,ssl}"
sudo semanage fcontext -a -t user_home_dir_t "/var/lib/hass(/.+)?"
sudo semanage fcontext -a -t svirt_sandbox_file_t "/var/lib/hass/((config)|(ssl))(/.*)?"
sudo restorecon -Frv /var/lib/hass

Pull the container image

Now that we have the basic home directory in place, we can switch to the hass user with sudo.

sudo su - hass

As the hass user, let’s use podman to download and run the official Home Assistant image in a container.

First, pull the container which is stored under the non-root user’s ~/.local/share/containers/ directory. Note the latest tag on the end of the image name specifies the version to run. While this is not necessary if you’re getting the latest (as it’s the default), if you want a specific release simply replace latest with the version you want (see their Docker hub page for available releases). Specifying latest means we’ll get the latest release of the container at the time.

podman pull docker.io/homeassistant/home-assistant:latest

Manually start the container

Now we can spin up a container using that image. Note that we’re passing in the config and ssl (as read only) directories we created earlier and using host networking to open the required ports on the host.

podman run -dt \
--name=hass \
-v /var/lib/hass/config:/config \
-v /var/lib/hass/ssl:/ssl:ro \
-v /etc/localtime:/etc/localtime:ro \
--net=host \
docker.io/homeassistant/home-assistant:latest

Similar to Docker, you can look at the status of the container and manage it with podman, including getting the logs if you require them.

podman ps -a
podman logs hass
podman restart hass

To get a temporary shell on the container, execute bash.

podman exec -it hass /bin/bash

Inside the container, take a look at the passed-in /config directory (do anything else you want) and then exit when you’re done.

ls -l /config
echo "I am root in the container"
exit

Once the container is up and running, the Home Assistant port should be listening on the host.

ss -ltn |grep 8123

Manually destroy the container

Next we’ll create a service to manage this, so for now you can stop and delete this container (this does not delete the image we downloaded). Do this as the hass user still, then exit to return to your regular user.

podman stop hass
podman rm hass
podman ps -a

Configuring the firewall

Home Assistant runs on port 8123, so we will need to open this port on the firewall (back as your regular user).

sudo firewall-cmd --get-active-zones
sudo firewall-cmd --zone=FedoraServer --add-port=8123/tcp

You can test this by using your web browser to connect to the IP address of your machine on port 8123 from another machine on your network.

If that works, make the firewall change permanent.

sudo firewall-cmd --runtime-to-permanent

Create service for the container

Now that we have the container that works, let’s create a systemd service to manage it. This will auto start the container on boot and allow us to manage it as a regular service, including any dependencies. This service stops, removes and starts a new container every time.

Note the Exec lines which will delete and restart the container from the image. As per the manual command above, to run a specific version replace latest with an available tagged release.

cat << EOF | sudo tee /etc/systemd/system/hass.service
[Unit]
Description=Home Assistant in Container
After=network-online.target

[Service]
User=hass
ExecStartPre=-/usr/bin/podman rm -f "hass"
ExecStart=/usr/bin/podman run --name=hass -v /var/lib/hass/ssl:/ssl:ro -v /var/lib/hass/config:/config -v /etc/localtime:/etc/localtime:ro --net=host docker.io/homeassistant/home-assistant:latest
ExecReload=-/usr/bin/podman stop "hass"
ExecReload=-/usr/bin/podman rm "hass"
ExecStop=-/usr/bin/podman stop "hass"

[Install]
WantedBy=multi-user.target
EOF


Reloading the systemd daemon is required to pick up the new file.

sudo systemctl daemon-reload

Manage the container with systemd

Let’s see if we can restart the container and check its status. Because it is now managed by systemd, we can check the log with journalctl.

sudo systemctl restart hass
sudo systemctl status hass
sudo journalctl -u hass

Once you’re happy, we can enable the service.

sudo systemctl enable hass

Now is probably a good time to reboot your machine and make sure that the service comes up fine on boot.

Configuring Home Assistant

After rebooting, you should be able to browse to the Home Assistant port on your machine.

Now that you have Home Assistant running, modify the configuration as you please by editing the configuration file under the hass user home directory.

If you make a change, you can simply restart the service.

Updating the container

To update the container, switch to the hass user again and pull a newer version of the container. We can see the newer version of the image with podman and if you want to you can inspect the image for more details.

podman pull docker.io/homeassistant/home-assistant:latest
podman images -a
podman inspect docker.io/homeassistant/home-assistant:latest

Now you can restart the container as your regular user.

sudo systemctl restart hass
sudo journalctl -f -u hass.service


Anyway, that’s an example of how you could do it with something like Home Assistant. It can be modified accordingly for any other container you might want to run.

rpminspect-0.6 released with new inspections and bug fixes

Posted by David Cantrell on September 19, 2019 06:50 PM
There are three new inspections implemented in rpminspect-0.6:
  • The upstream inspection compares SRPMs between the before and after builds to determine whether Source archives changed, were removed, or new ones were added.  Anything listed as a Source file in the spec file is examined, not just tarballs.  Source file changes when the package Epoch and Version do not change are considered suspect and need review.
  • The shellsyntax inspection looks at shell scripts in source and binary packages and runs them through the syntax validator for the indicated shell (the -n option on the shell command).  The shells that rpminspect cares about are in the shells list in the rpminspect.conf file.  This inspection reports scripts that fail the syntax validator, or scripts that were good but are now bad.  If you had a bad one and it's now good, you are simply notified.
  • The ownership inspection enforces file owner and group policies across builds.  The rpminspect.conf settings bin_owner, bin_group, forbidden_owners, and forbidden_groups are all used by this inspection.  A typical use of this inspection is to ensure executables are correctly owned and that nothing is owned by mockbuild.
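
The shellsyntax inspection leans on the shell's own no-execute parse mode (the -n option mentioned above). A quick demonstration of that underlying check, using a deliberately broken hypothetical script:

```shell
# A broken script: the 'if' is never closed with 'fi'
printf '#!/bin/bash\nif [ -f /etc/os-release ]; then\n  echo ok\n' > /tmp/broken.sh

# 'bash -n' parses without executing; a syntax error yields a non-zero exit
bash -n /tmp/broken.sh 2>/dev/null || echo 'syntax check failed'   # prints: syntax check failed
```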
This release also includes a lot of bug fixes.  I really appreciate all of the feedback users have been providing.  It is really helping round out the different inspections and ensure rpminspect works across all types of builds.

For details on what is new in rpminspect-0.6, see the release page.

There is also a new release of rpminspect-data-fedora which includes changes necessary for the new inspections.  See its release page for more information.

Both packages are available in my Copr repo.  I am doing Fedora builds now, which includes Fedora 31.  If you want another release of Fedora to have builds, let me know.

Fedora 31 Upgrade Test Day 2019-09-23

Posted by Fedora Community Blog on September 19, 2019 03:31 PM
F31 Upgrade test day

Monday, 2019-09-23, is the Fedora 31 Upgrade Test Day! As part of preparing for the final release of Fedora 31, we need your help to test if everything runs smoothly!

Why Upgrade Test Day?

As we approach the final release date for Fedora 31, most users will be upgrading. This test day will help us understand if everything is working perfectly. It will cover both a GNOME graphical upgrade and an upgrade done using DNF.

We need your help!

All of the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 31 Upgrade Test Day 2019-09-23 appeared first on Fedora Community Blog.

Attention: Fedora Yahoo Email Users

Posted by Stephen Smoogen on September 19, 2019 01:41 PM
In a blast from the past, we are currently going through one of those periods where Yahoo is not accepting much of the email coming from fedoraproject.org addresses or from our mail routers.  It would seem that the way to get Yahoo to blacklist a domain is for accounts to get subscribed to mailing lists and then report the lists as SPAM. With enough accounts (or maybe one person doing it enough times), Yahoo will helpfully blacklist the domain completely. [It is then usually a multi-month process of people explaining that no, Fedora is not a spam site and hasn't been taken over by one, or a bunch of other things which do happen, so any mail admin is going to be wary.]

The funny thing is that their blockage doesn't work 100%, so some people still seem to get email delivered even though most of our logs show Yahoo answering our servers with various SMTP errors that amount to GO AWAY.

At this point, if you are a packager with a yahoo.com email address, you probably have not received email from any of the lists, or possibly from Bugzilla, for a while. Emailing you directly from our site to tell you this isn't going to work, so we are going back to blogs in the hope that someone still reads them.

Renewing the Modularity objective

Posted by Fedora Community Blog on September 18, 2019 07:18 PM

Now that Modularity is available for all Fedora variants, it’s time to address issues discovered and improve the experience for packagers and users. The Modularity team identified a number of projects that will improve the usefulness of Modularity and the experience of creating modules for packagers. We are proposing a renewed objective to the Fedora Council.

You can read the updated objective in pull request #61. Please provide feedback there or on the devel mailing list. The Council will vote on this in two weeks.

The post Renewing the Modularity objective appeared first on Fedora Community Blog.

Epiphany Technology Preview Users: Action Required

Posted by Michael Catanzaro on September 18, 2019 02:19 PM

Epiphany Technology Preview has moved from https://sdk.gnome.org to https://nightly.gnome.org. The old Epiphany Technology Preview is now end-of-life. Action is required to update. If you installed Epiphany Technology Preview prior to a couple of minutes ago, uninstall it using GNOME Software and then reinstall using this new flatpakref.

Apologies for this disruption.

The main benefit to end users is that you’ll no longer need separate remotes for nightly runtimes and nightly applications, because everything is now hosted in one repo. See Abderrahim’s announcement for full details on why this transition is occurring.

[F31] Take part in the test day dedicated to GNOME 3.34

Posted by Charles-Antoine Couret on September 18, 2019 06:00 AM

Aujourd'hui, ce mercredi 18 septembre, est une journée dédiée à un test précis : sur l'environnement de bureau GNOME. En effet, durant le cycle de développement, l'équipe d'assurance qualité dédie quelques journées autours de certains composants ou nouveautés afin de remonter un maximum de problèmes sur le sujet.

Elle fournit en plus une liste de tests précis à effectuer. Il vous suffit de les suivre, comparer votre résultat au résultat attendu et le notifier.

En quoi consiste ce test ?

Nous juste après la diffusion de la Fedora 31 beta. L'environnement de bureau GNOME est celui par défaut depuis les débuts de Fedora.

L'objectif est de s'assurer que l'ensemble de l'environnement et que ses applications sont fonctionnels.

Les tests du jour couvrent :

  • La détection de la mise à niveau de Fedora par GNOME Logiciels ;
  • Le bon fonctionnement du navigateur Web ;
  • La connexion / déconnexion et changement d'utilisateurs ;
  • Le fonctionnement du son, notamment détection de la connexion ou déconnexion d'écouteurs ou casques audios ;
  • Possibilité de lancer les applications graphiques depuis le menu.
  • Et tant d'autres.

Comme vous pouvez le constater, ces tests sont assez simples et peuvent même se dérouler sans se forcer en utilisant simplement GNOME comme d'habitude. Donc n'hésitez pas de prendre quelques minutes pour vérifier les comportements et rapporter ce qui fonctionne ou non comme attendu.

How can I take part?

You can go to the test day page to see the available tests and report your results. The wiki page sums up how the day is organised.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you hit a bug, it needs to be reported on Bugzilla. If you don't know how, have a look at the corresponding documentation.

And although a specific day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Cockpit 203

Posted by Cockpit Project on September 18, 2019 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 203.

Machines: Network Interfaces

Virtual machines now support creation of virtual network interfaces.

VM Network Interface creation

Try it out

Cockpit 203 is available now:

Announcing Kanidm - A new IDM project

Posted by William Brown on September 17, 2019 10:00 PM

Announcing Kanidm - A new IDM project

Today I’m starting to talk about my new project - Kanidm. Kanidm is an IDM project designed to be correct, simple and scalable. As an IDM project we should be able to store the identities and groups of people, authenticate them securely to various other infrastructure components and services, and much more.

You can find the source for Kanidm on GitHub.

For more details about what the project is planning to achieve, and what we have already implemented, please see the GitHub repository.

What about 389 Directory Server

I’m still part of the project, and working hard on making it the best LDAP server possible. Kanidm and 389-ds have different goals. 389 Directory Server is a globally scalable, distributed database that can store huge amounts of data and process thousands of operations per second. 389-ds lets you build a system on top, in any way you want. If you want an authentication system today, use 389-ds. We are even working on a self-service web portal soon too (one of our most requested features!). Besides myself, no one on the (amazing) 389 DS team has any association with Kanidm (yet?).

Kanidm is an opinionated IDM system, and has strong ideas about how authentication and users should be processed. We aim to be scalable, but that’s a long road ahead. We also want to have more web integrations, client tools and more. We’ll eventually write a kanidm to 389-ds sync tool.

Why not integrate something with 389? Why something new?

There are a lot of limitations in LDAP when it comes to modern web-focused auth processes such as WebAuthn. Because of this, I wanted to make something that didn’t have the same limitations, and that had different ideas about data storage and APIs. That’s why I wanted to make something new in parallel. It was a really hard decision to build something outside of 389 Directory Server (because I really do love the project, and take great pride in the team), but I felt it would be more productive to build in parallel than on top.

When will it be ready?

I think that a single-server deployment will be usable for small installations early 2020, and a fully fledged system with replication would be late 2020. It depends on how much time I have and what parts I implement in what order. Current rough work order (late 2019) is indexing, RADIUS integration, claims, and then self-service/web ui.

Towards a UX Strategy for GNOME (Part 3)

Posted by Allan Day on September 17, 2019 02:41 PM

This post is part of a series on UX strategy. In my previous two posts, I described what I hope are the beginnings of a UX strategy for GNOME. In the first post, I described some background research and analysis. In the second post, I introduced what I think ought to be the high-level goals and principles for the UX strategy.

Now it’s time for the fun bit! For this instalment, I’m going to go over recent work that the GNOME design team has been doing. I’m doing this for two reasons. First: I want to show off some of the great work that the design team has been doing! Second: I want to show how this design work fits into the strategic approach that I’ve previously described. A key element of that plan was to prioritise the areas which will have the biggest impact, and I’m going to be using the prioritisation word a lot in what follows.

This post is intended as an overview and, as such, I’m not going to go into the designs in great detail. However, there are detailed designs behind everything I’m presenting, and there’s a list of links at the end of the post, for those who want to learn more.

Core System

In my previous post, I argued that prioritisation ought to be a key part of our UX strategy. This is intended to help us drive up quality, as well as deliver impactful improvements. One way that we can prioritise is by focusing on those parts of GNOME that people use all the time.

The obvious place to start with this is the core elements of the GNOME system: those parts of the software that make up the most basic and common interactions, like login, app launching, window switching, notifications, and so on. I believe that improving the level of polish of these basic features would go a long way to elevating the standing of the entire platform.

Unlock and Login

Login screen mockup

The design team has longstanding ambitions to update GNOME’s unlock and login experience. The designs have continued to evolve since I last blogged about them, and we continue to return to and improve them.

System unlock is a classic example of a touchstone experience. People unlock their computers all the time and, as the entry point to the system, it comes to define the overall experience. It’s the face of the system. It is therefore critical that unlock and login leave a good impression.

The new designs are intended to reduce the amount of friction that people experience when they unlock. They require users to take fewer steps and involve going through fewer transitions, so that people can get to their session faster and more seamlessly. This will in turn make the experience feel more comfortable.

The designs are also intended to be beautiful! As an emblematic part of the GNOME UX, we want unlock and login to look and feel fantastic.


Notifications popover mockup

The design team has been systematically reviewing almost all parts of the core GNOME system, with a view to polishing and refining them. Some of this work has already landed in GNOME 3.34, where you will see a collection of visual style improvements.

One area where we want to develop this work is the calendar and notifications list. The improvements here are mostly visual – nicer layout, better typography, and so on – but there are functional improvements too. Our latest designs include a switch for do not disturb mode, for example.

There are other functional improvements that we’d like to see in subsequent iterations to the notification list, such as grouping notifications by application, and allowing notification actions to be accessed from the list.

Notifications are another great example where we can deliver clear value for our users: they’re something that users encounter all the time, and which are almost always relevant, irrespective of the apps that someone uses.

System Polish

System menu and dialog mockup

Our core system polish and refinement drive knows no bounds! We have designs for an updated system menu, which are primarily intended to resolve some long-standing discoverability issues. We’ve also systematically gone through all of the system dialogs, in order to ensure that each one is consistent and beautiful (something that is sadly lacking at the moment).

These aren’t the only parts of the core system that the design team is interested in improving. One key area that we are planning on working on in the near future is application launching. We’ve already done some experimental work in this area, and are planning on building on the drag and drop work that Georges Stavracas landed for GNOME 3.34.


The principle of prioritisation can also be applied to GNOME’s applications. The design team already spends a lot of time on the most essential applications, like Settings, Software and Files. Following the principle of prioritisation, we’ve also been taking a fresh look at some of the really basic apps that people use every day.

Two key examples of this are the document and image viewers. These are essential utilities that everyone uses. Such basic features ought to look and feel great and be firmly part of the GNOME experience. If we can’t get them right, then people won’t get a great impression.

Today our document and image viewers do their jobs reasonably well, but they lack refinement in some areas and they don’t always feel like they belong to the rest of the system. They also lack a few critical features.

<figure aria-describedby="caption-attachment-7193" class="wp-caption alignnone" id="attachment_7193" style="width: 1280px">Document viewer mockup<figcaption class="wp-caption-text" id="caption-attachment-7193">Document viewer mockup</figcaption></figure> <figure aria-describedby="caption-attachment-7190" class="wp-caption alignnone" id="attachment_7190" style="width: 1280px">Image viewer mockup<figcaption class="wp-caption-text" id="caption-attachment-7190">Image viewer mockup</figcaption></figure>

This is why the design team has created updated designs for both the document and image viewers. These use the same design patterns, so they will feel like they belong together (as well as to the rest of the system). They also include some additional important features, like basic image editing (from talking to GNOME users, we know that this is a sorely missed feature).

It would be great to extend this work to look at some of the other basic, frequently-used apps, like the Text Editor and Videos.

There’s a lot of other great application design work that I could share here, but am not going to, because I do think that focusing on these core apps first makes the most strategic sense.

Development Platform

Another way that we can prioritise is by working on the app development platform. Improvements in this area make it easier for developers to create apps. They also have the potential to make every GNOME app look and behave better, and can therefore be an extremely effective way to improve the GNOME UX.

Again, this is an area where the design team has already been doing a lot of work, particularly around our icon system. This is part of the application platform, and a lot of work has recently gone into making it easier than ever to create new icons as well as consume the ones that GNOME provides out of the box. If you’re interested in this topic, I’d recommend Jakub’s GUADEC talk on the subject.

<figure aria-describedby="caption-attachment-7187" class="wp-caption alignnone" id="attachment_7187" style="width: 1280px">GTK widget mockup<figcaption class="wp-caption-text" id="caption-attachment-7187">Mockups for menus, dropdown lists, reorderable lists, and in-app notifications</figcaption></figure>

Aside from the icon system, we have also been working to ensure that all the key design patterns are fully supported by the application development platform. The story here is patchy: not all of the design patterns have corresponding widgets in GTK and, in some cases it can be a lot of work to implement standard GNOME application designs. The result can also lack the quality that we’d like to see.

This is why the design team has been reviewing each of our design patterns, with a view to ensuring that each one is both great quality, and is fully supported. We want each pattern to look great, function really well, and be easy for application developers to use. So far, we have new designs for menus, dropdown lists, listboxes and in-app notifications, and there’s more to come. This initiative is ongoing, and we need help from platform and toolkit developers to drive it to completion.

What Next?

UX is more than UI: it is everything that makes up the user’s experience. As such, what I’ve presented here only represents a fraction of what would need to be included in a comprehensive UX strategy. That said, I do think that the work I’ve described above is of critical importance. It represents a programme to drive up the quality of the experience we provide, in a way that I believe would really resonate with users, because it focuses on features that people use every day, and aims to deliver tangible improvements.

As an open, upstream project, GNOME doesn’t have direct control over who works on what. However, it is able to informally influence where resources go, whether it’s by advertising priorities, encouraging contributions in particular areas, or tracking progress towards goals. If we are serious about wanting to compete in the marketplace, then doing this for the kind of UX programme that I’ve described seems like it could be an important step forward.

If there’s one thing I’d like to see come out of this series, it would be a serious conversation about how GNOME can be more strategic in its outlook and organisation.

This post marks the end of the “what” part of the series. In the next and final part, I’ll be moving onto the “how”: rather than talking about what we should be working on and what our priorities should be, I’ll set out how we ought to be working. This “how” part of the equation is critical: you can have the best strategy in the world, but still fail due to poor processes. So, in the final instalment, we’ll be discussing development process and methodology!

Further Reading

More information about the designs mentioned in this post:

Fedora 31 Beta is out

Posted by Charles-Antoine Couret on September 17, 2019 02:22 PM

This Tuesday, 17 September, users of the Fedora Project will be pleased to learn that Fedora 31 Beta is available.

Despite the stability risks of a Beta release, it is important to test it! By reporting bugs now, you get to discover the new features before everyone else, while improving the quality of Fedora 31 and reducing the risk of delays at the same time. Development releases lack the testers and feedback needed to reach their goals.

The final release is currently scheduled for 22 or 29 October. Here are the new features announced for this version:

User experience

  • Move to GNOME 3.34.
  • The wheel keeps turning for Xfce, now at version 4.14.
  • Update to DeepinDE 15.11.
  • Firefox uses Wayland natively by default, provided of course that the desktop session supports it.
  • Similarly, Qt applications will use Wayland in a GNOME session running on Wayland.
  • RPM packages use the zstd compression format instead of xz. Decompression is much faster — by a factor of three or four for the Firefox package, for example — but building a package takes slightly longer.

Hardware support

  • The i686 Linux kernel is no longer built, and the associated repositories have been removed. As a result there will be no more Fedora images for this architecture, and no upgrade path from Fedora 30 for those users. i686 packages may remain in the repositories, but only for x86_64 users.
  • The Fedora Xfce spin gets an image for the AArch64 architecture.
  • On machines with the UEFI Secure Boot feature enabled, GRUB can now use its security modules natively.


  • The langpacks packages are split, with a langpacks-core part that provides only the default font and the corresponding locale. Users therefore get more flexibility here.
  • Update to IBus 1.5.21.
  • The Google Noto Variable fonts now take priority over the non-variable fonts from the same vendor.

System administration

  • The /usr/bin/python binary now refers to Python 3 rather than Python 2. Indeed, Python 2 will no longer be supported upstream as of January 2020, so the Fedora Project follows PEP 394 to begin the transition. If this causes problems, you can create the symbolic link ~/.local/bin/python for a single user, or /usr/local/bin/python system-wide, to restore the old behaviour.
  • Consequently, there is a massive removal of Python 2 packages, essentially keeping only the last projects not yet ported to Python 3.
  • The security policies feature, introduced gradually in Fedora over the last few years, now lets administrators customise the rules, such as which security protocols may or may not be used on the system.
  • The kernel now offers cgroups v2 instead of the v1 used until now.
  • OpenSSH refuses password authentication for the root account by default.
  • All user groups can natively ping over the network without a setuid binary. This is mostly aimed at container environments and Fedora Silverblue.
  • RPM reaches version 4.15.
  • DNF will now raise an error by default when a repository is unreachable, instead of only emitting a warning. This mainly concerns third-party repositories that did not necessarily enable this option in their configuration.
  • YUM 3 takes its leave; only a symbolic link to DNF remains, and its API is no longer available.
  • The packages related to 389-console have been removed in favour of a new web interface.
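The /usr/bin/python workaround mentioned above can be sketched in a couple of shell commands (a minimal sketch; it assumes the python2 package is still installed and that ~/.local/bin precedes /usr/bin in your PATH, as it does by default on Fedora):

```shell
# Shadow /usr/bin/python for the current user only.
mkdir -p "$HOME/.local/bin"
ln -sf /usr/bin/python2 "$HOME/.local/bin/python"
# Check where the link points:
readlink "$HOME/.local/bin/python"
```

For a system-wide change, the same link can be created at /usr/local/bin/python instead.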


  • Update of the C library glibc to version 2.30.
  • Gawk moves to the 5.0 branch.
  • Node.js reaches version 12.
  • The Sphinx documentation generator moves to version 2 and drops support for Python 2.
  • The Python tests move from the python3-libs package to the python3-test package.
  • The Go language races to version 1.13.
  • The Perl language shines at version 5.30.
  • Update of the Erlang language and OTP to version 22.
  • The Haskell compiler GHC and Stackage LTS move to versions 8.6 and 13 respectively.
  • The free .NET stack Mono gets version 5.20.
  • The MinGW environment and toolchain move to version 6.
  • The Fedora Project provides an alternative linker configuration, to switch easily between the GNU LD linker and LLVM's LLD and back without changing the development environment.
  • The GOLD linker from binutils, developed by Google but now maintained by GNU, gets its own binutils-gold package so that it can easily be dropped if maintenance stops, as the project is no longer actively developed.

Fedora Project

  • The Fedora Cloud edition will get a new image every month.
  • Continuing the effort to make Rawhide more stable and to improve quality assurance, Bodhi is now enabled for Rawhide. This means that a package must go through the same update process on Rawhide as on a stable release.
  • Source RPMs can have dynamically generated build dependencies. More and more languages, such as Rust and Go, manage dependencies themselves when building a project; for such projects the packager no longer has to copy dependencies the project already declares itself.
  • New packaging guidelines have been issued for projects using Go.
  • Fedora's build environment, the buildroot, uses a minimal gdb to gain efficiency. It no longer includes XML or Python support.
  • Dependencies around the R language can now be resolved automatically.
  • The i686 glibc package needed for the Fedora buildroot gets an improved build process, to be more maintainable and to guarantee LGPL compliance.


While a new Fedora release is being developed, such as this Beta, the project holds test days almost every week. The goal is to spend a day testing one specific feature, such as the kernel, Fedora Silverblue, upgrades, GNOME, internationalisation, and so on. The QA team designs and proposes a series of tests that are generally simple to run. Just follow them and report whether the result is the expected one. If not, a bug should be filed so that a fix can be worked on.

It is very easy to follow and usually takes little time (15 minutes to one hour at most) if you have a usable Beta at hand.

The tests to run and the reports are handled via the following page. I regularly announce here when a test day is scheduled.

If you are up for the adventure, images are available via Torrent or from the official website.

If you already have Fedora 30 or 29 on your machine, you can upgrade to the Beta. It amounts to one big update; your applications and data are preserved.

In both cases, we recommend backing up your data first.

If you hit a bug, don't forget to read the documentation on reporting issues on Bugzilla, or consider contributing to the translation on Zanata.

Happy testing, everyone!

Announcing the release of Fedora 31 Beta

Posted by Fedora Magazine on September 17, 2019 01:47 PM

The Fedora Project is pleased to announce the immediate availability of Fedora 31 Beta, the next step towards our planned Fedora 31 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

GNOME 3.34 (almost)

The newest release of the GNOME desktop environment is full of performance enhancements and improvements. The beta ships with a prerelease, and the full 3.34 release will be available as an update. For a full list of GNOME 3.34 highlights, see the release notes.

Fedora IoT Edition

Fedora Editions address specific use-cases the Fedora Council has identified as significant in growing our userbase and community. We have Workstation, Server, and CoreOS — and now we’re adding Fedora IoT. This will be available from the main “Get Fedora” site when the final release of F31 is ready, but for now, get it from iot.fedoraproject.org.

Read more about Fedora IoT in our Getting Started docs.

Fedora CoreOS

Fedora CoreOS remains in a preview state, with a generally-available release planned for early next year. CoreOS is a rolling release which rebases periodically to a new underlying Fedora OS version. Right now, that version is Fedora 30, but soon there will be a “next” stream which will track Fedora 31 until that’s ready to become the “stable” stream.

Other updates

Fedora 31 Beta includes updated versions of many popular packages like Node.js, the Go language, Python, and Perl. We also have the customary updates to underlying infrastructure software, like the GNU C Library and the RPM package manager. For a full list, see the Change set on the Fedora Wiki.

Farewell to bootable i686

We’re no longer producing full media or repositories for 32-bit Intel-architecture systems. We recognize that this means newer Fedora releases will no longer work on some older hardware, but the fact is there just hasn’t been enough contributor interest in maintaining i686, and we can provide greater benefit for the majority of our users by focusing on modern architectures. (The majority of Fedora systems have been 64-bit x86_64 since 2013, and at this point that’s the vast majority.)

Please note that we’re still making userspace packages for compatibility when running 32-bit software on 64-bit systems — we don’t see the need for that going away anytime soon.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the Common F31 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.

More information

For more detailed information about what’s new on Fedora 31 Beta release, you can consult the Fedora 31 Change set. It contains more technical information about the new packages and improvements shipped with this release.

Changing code to use fedora-messaging instead of fedmsg

Posted by Karsten Hopp on September 17, 2019 01:40 PM

A good way to find out about the code changes required to get rid of fedmsg in favour of fedora-messaging is to look at the changes already made in other components, for example waiverdb.

Apart from some changes to the tests, the required code change boils down to creating a message in the new format and then pushing it to the message bus with 'publish' from fedora_messaging.api instead of 'fedmsg.publish':

-                 fedmsg.publish(topic='waiver.new', msg=marshal(row, waiver_fields))
+                 msg = Message(
+                     topic='waiverdb.waiver.new',
+                     body=marshal(row, waiver_fields)
+                 )
+                 publish(msg)

Of course, the exception handling needs to be modified slightly too, but altogether this seems pretty straightforward.

-             except Exception:
-                 _log.exception('Couldn\'t publish message via fedmsg')
+             except PublishReturned as e:
+                 _log.exception('Fedora Messaging broker rejected message %s: %s', msg.id, e)
+                 monitor.messaging_tx_failed_counter.inc()
+             except ConnectionException as e:
+                 _log.exception('Error sending message %s: %s', msg.id, e)

Fedocal and Nuancier are looking for new maintainers

Posted by Fedora Community Blog on September 17, 2019 06:53 AM

Recently the Community Platform Engineering (CPE) team announced that we need to focus on key areas and thus let some of our applications go. So we started Friday with Infra to find maintainers for some of those applications. Unfortunately, the first few sessions did not raise as much interest as we had hoped. As a result, we are still looking for new maintainers for Fedocal and Nuancier.

What will be the responsibilities of the new maintainer?

The new maintainer will be completely responsible for the code base and the Communishift instance:

  • Managing application life cycle
  • Fixing bugs
  • Implementing new features
  • Managing OpenShift playbooks
  • Maintaining running pods in OpenShift
  • Deployment of new versions in OpenShift

In other words the application will belong completely to you.

What will you get as a reward?

Taking over maintainership of an application is not without its rewards. If you choose to take one over, you will gain:

  • Learning useful and marketable programming skills (Python, PostgreSQL, Ansible)
  • Learning how to write, deploy, and manage applications in OpenShift!
  • Making significant contributions to the Fedora Community (and often others)
  • Good feeling for helping the Fedora Community and the open source world
  • Experience with managing open source applications
  • Large user base (Fedocal is used by almost every team in Fedora, Nuancier is used by plenty of Fedora users and contributors to vote for new wallpapers)
  • A warm glow of accomplishment 🙂

What role does CPE play in this?

This can look like plenty of work at first glance, but we are here to help you get started. The CPE team will provide you with guidance and help, and as part of Friday with Infra we will help you get everything set up and fix the most urgent issues. More information can be found on the Friday with Infra wiki.

Sounds interesting, where can I sign up?

If you think you are the right person for this work, send an email to Fedora Infrastructure mailing list or ask in #fedora-apps channel on Freenode. See you soon!

The post Fedocal and Nuancier are looking for new maintainers appeared first on Fedora Community Blog.

Permanent Record: the life of Edward Snowden

Posted by Kushal Das on September 17, 2019 04:44 AM

book cover

The personal life and thinking of the ordinary person who did an extraordinary thing.

A fantastic personal narrative of his life and thinking process. The book does not go into technical details, but it makes sure that people can relate to the different events it describes. It tells the story of a person who was born into the system, grew up to become part of the system, and then learned to question that same system.

I bought the book at midnight on Kindle (I also ordered physical copies), slept for 3 hours in between, and finished it in the morning. Anyone born in the 80s will find many similarities to their own childhood as an 80s kid — be it the Commodore 64 as the first computer we saw, or BASIC as the first programming language we ever tried. The lucky ones also got Internet access and learned to roam around on their own, building their adventures over busy telephone lines (which often made family members unhappy).

If you are someone from the technology community, I think you will find that Ed's life was not much different from yours. The scenario and the key players are different, but you will be able to map his progress onto that of many tech workers like ourselves.

Maybe you are reading the book just to learn what happened, or maybe you want to know why. Either way, I hope this book will make you think about the decisions you make in your life and how they affect the rest of the world — be it a group picture posted on Facebook, or the next new tool written for the intelligence community.

Go ahead and read the book, and when you are finished, pass it on to a friend, or buy them a new copy. If you have some free time, you may consider running a Tor relay or a bridge; a simple step that will help many people around the world.

On a side note, the book mentions the SecureDrop project at the very end, and today also marks the release of SecureDrop 1.0.0 (the same day as the book release).

Fedora 30 : Interactive learning and reinventing the wheel in programming.

Posted by mythcat on September 16, 2019 07:49 PM
Today I returned from an activity that prompted me to find a solution for displaying logos.
I found this GitHub repo, which I read and then turned into a single script.
It took about an hour.

EPEL Bug: Bash errors on recent EL-8 systems.

Posted by Stephen Smoogen on September 16, 2019 05:47 PM
Last week, I was asked about a problem with using EPEL-8 on Oracle Enterprise Linux 8, where trying to install packages failed due to a bad GPG key file. I reproduced the problem on RHEL-8; it had not happened before some recent updates.

[smooge@localhost ~]$ repoquery
bash: repoquery: command not found...
Failed to search for file: Failed to download gpg key for repo 'epel': Curl error (37): Couldn't read a file:// file for file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8.0 [Couldn't open file /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8.0]
The problem seems to be that the EPEL release package uses the string $releasever in various short-cut strings. Take for example:

name=Extra Packages for Enterprise Linux $releasever - Playground - $basearch

The problem is that when I wrote new versions of the EPEL-8 repo file, I replaced the old key phrase gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 with gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever. On recent EL-8 systems $releasever expands to 8.0 rather than 8, so the key path points at a file that does not exist. When I tested things with the dnf command it worked fine, but I didn't check how things like bash completion would behave.

Moving back to the format that EPEL-6 and EPEL-7 used fixes the problem, so I will be pushing an updated release file out this week.  My apologies for people seeing the errors.
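For the curious, the corrected stanza looks roughly like this (a sketch from memory, so treat the metalink URL as illustrative); the point is that the gpgkey path is spelled out instead of using $releasever:

```ini
[epel]
name=Extra Packages for Enterprise Linux 8 - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-8
```

Using $releasever elsewhere (e.g. in the repo name or metalink) is harmless, since those strings don't have to resolve to a file on disk.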

Onboarding Fedora Infrastructure

Posted by Karsten Hopp on September 16, 2019 02:02 PM

I've been using and working on Fedora since FC-1 and just recently joined the Infrastructure team.

My first task was getting up to speed with Ansible, as all of the team's repeating tasks are automated. Fortunately there are lots of tutorials and documents available on the web; as usual, some of them are outdated or incorrect, but where's the fun in having everything work out of the box? A good starting point is Ansible's own documentation, i.e. https://docs.ansible.com/ansible/latest/user_guide/intro_getting_started.html , but I've also read some German docs like https://jaxenter.de/multi-tier-deployment-mit-ansible-76731 or the very basic intro https://www.biteno.com/tutorial/ansible-tutorial-einfuehrung/

There are a couple of hundred playbooks in the infrastructure ansible git repository (https://infrastructure.fedoraproject.org/cgit/ansible.git/) with an inventory of almost 900 hosts. I ran into problems when I tried to fix https://pagure.io/fedora-infrastructure/issue/8156 , which is about cleaning up ansible_distribution conditionals. Many playbooks already check for ansible_distribution == "RedHat" or ansible_distribution == "Fedora", but as a newbie in Fedora-Infrastructure I have no idea which distribution a certain host is currently running, or whether a given playbook will ever be run for that host. Does adding checks for the distribution even make sense when the conditions may never become true? What seems to be missing (at least I haven't found it) is a map of all our machines together with a description of the OS they are running and which playbooks apply to them. There is an ancient (3 years old) issue open (https://pagure.io/fedora-infrastructure/issue/5290) that already requests and implements part of this, but unfortunately there has been no progress for some time now.
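For readers unfamiliar with the variable in question: ansible_distribution is a fact that Ansible gathers from the target host, essentially derived from its os-release data. A rough local approximation (a sketch, assuming a standard /etc/os-release) of what such a conditional would see:

```shell
# Approximate what the ansible_distribution fact reports for this host.
# (Ansible's fact gathering is more elaborate; this only reads os-release.)
. /etc/os-release
echo "This host looks like: ${NAME}"
```

Running `ansible <host> -m setup -a 'filter=ansible_distribution*'` against a real inventory host shows the actual gathered facts.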


Looking at the nagios map of all our hosts, there already seem to be host groups 'RedHat', 'CentOS' and 'Fedora', so at least part of the required information is already available.

Copying large files with Rsync, and some misconceptions

Posted by Fedora Magazine on September 16, 2019 08:00 AM

There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.

Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A but became 100GB on site B.

The friend believed that rsync is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what rsync really is, how it is used, and, most important in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.

About rsync

rsync is a tool that was created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:

Imagine you have two files, file_A and file_B. You wish to update file_B to be the same as file_A. The obvious method is to copy file_A onto file_B.

Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If file_A is large, copying it onto file_B will be slow, and sometimes not even possible. To make it more efficient, you could compress file_A before sending it, but that would usually only gain a factor of 2 to 4.

Now assume that file_A and file_B are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between file_A and file_B down the link and then use that list of differences to reconstruct the file on the remote end.

The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you had copied the file over, you don’t need the differences). This is the problem that rsync addresses.

The rsync algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.

The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.

Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.

The rsync algorithm addresses this problem in a lovely way as we all might know.

After this introduction to rsync, back to the story!

Problem 1: Thin provisioning

There were two things that would help the friend understand what was going on.

The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system, a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storage (NAS).

The source file took up only 10GB because TP was enabled, but when it was transferred over using rsync without any additional configuration, the target destination received the full 100GB. rsync could not do the magic automatically; it had to be configured.

The flag that does this work is -S or --sparse, and it tells rsync to handle sparse files efficiently. And it will do what it says! It will only send the actual data, recreating the holes on the destination, so source and destination will both end up with a file using only 10GB of space.
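To see what “sparse” means in practice, here is a small demonstration (assuming GNU coreutils; the file name is made up):

```shell
# Claim 100MB of apparent size without allocating any data blocks.
truncate -s 100M disk.img

# The apparent size is 104857600 bytes...
ls -l disk.img

# ...but the actual disk usage is close to zero.
du -k disk.img

rm disk.img
```

Copying such a file without rsync's --sparse flag writes all the zero-filled holes out as real data on the destination, which is exactly how 10GB of used space can balloon to 100GB.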

Problem 2: Updating files

The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when just a single configuration file had changed on that virtual disk. In other words, only a small portion of the file changed, but the whole thing was re-sent.

The command used for this transfer was:

rsync -avS vmdk_file syncuser@host1:/destination

Again, understanding how rsync works would help with this problem as well.

The above is the biggest misconception about rsync. Many of us think rsync will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of rsync.

As the man page says, the default behaviour of rsync is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.

To change this default behaviour of rsync, you have to set the following flags and then rsync will send only the deltas:

--inplace               update destination files in-place
--partial               keep partially transferred files
--append                append data onto shorter files
--progress              show progress during transfer

So the full command that would do exactly what the friend wanted is:

rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination

Note that the sparse flag -S had to be removed, for two reasons. The first is that you can not use --sparse and --inplace together when sending a file over the wire. Second, once you have sent a file over with --sparse, you can't update it with --inplace anymore. Note that versions of rsync older than 3.1.3 will reject the combination of --sparse and --inplace.

So even though the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates only copied the difference, making the copy extremely efficient.

Fedora 31 Gnome Test Day 2019-09-18

Posted by Fedora Community Blog on September 16, 2019 07:26 AM
F31 Upgrade test day

Wednesday, 2019-09-18 is the Fedora 31 Gnome Test Day! As part of the Gnome 3.34 change in Fedora 31, we need your help to test if everything runs smoothly!

Why Gnome Test Day?

We try to make sure that all the Gnome features are performing as they should. This test day is a chance to see whether everything is working well enough and to catch any remaining issues. It's also pretty easy to join in: all you'll need is Fedora 31 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 31 Gnome Test Day 2019-09-18 appeared first on Fedora Community Blog.

Wrestling with the register allocator: LuaJIT edition

Posted by Siddhesh Poyarekar on September 16, 2019 12:20 AM

For some time now, I kept running into one specific piece of code in luajit repeatedly for various reasons, and last month I came across a fascinating register allocator pitfall that I had never encountered before. But then, as is the norm after fixing such bugs, I concluded that it's too trivial to write about and I was just stupid not to have found it sooner; all bugs are trivial once they're fixed.


After getting over my imposter syndrome (yeah I know, took a while) I finally have the courage to write about it, so like a famous cook+musician says, enough jibberjabber…

Looping through a hash table

One of the key data structures that luajit uses to implement metatables is a hash table. Due to its extensive use, the lookup loop into such hash tables is optimised in the JIT using an architecture-specific asm_href function. Here is what a snippet of the arm64 version of the function looks like. A key thing to note is that the assembly code is generated backwards, i.e. the last instruction is emitted first.

  /* Key not found in chain: jump to exit (if merged) or load niltv. */
  l_end = emit_label(as);
  as->invmcp = NULL;
  if (merge == IR_NE)
    asm_guardcc(as, CC_AL);
  else if (destused)
    emit_loada(as, dest, niltvg(J2G(as->J)));

  /* Follow hash chain until the end. */
  l_loop = --as->mcp;
  emit_n(as, A64I_CMPx^A64I_K12^0, dest);
  emit_lso(as, A64I_LDRx, dest, dest, offsetof(Node, next));
  l_next = emit_label(as);

  /* Type and value comparison. */
  if (merge == IR_EQ)
    asm_guardcc(as, CC_EQ);
  else
    emit_cond_branch(as, CC_EQ, l_end);

  if (irt_isnum(kt)) {
    if (isk) {
      /* Assumes -0.0 is already canonicalized to +0.0. */
      if (k)
        emit_n(as, A64I_CMPx^k, tmp);
      else
        emit_nm(as, A64I_CMPx, key, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.u64));
    } else {
      Reg ftmp = ra_scratch(as, rset_exclude(RSET_FPR, key));
      emit_nm(as, A64I_FCMPd, key, ftmp);
      emit_dn(as, A64I_FMOV_D_R, (ftmp & 31), (tmp & 31));
      emit_cond_branch(as, CC_LO, l_next);
      emit_nm(as, A64I_CMPx | A64F_SH(A64SH_LSR, 32), tisnum, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.n));
    }
  } else if (irt_isaddr(kt)) {
    Reg scr;
    if (isk) {
      int64_t kk = ((int64_t)irt_toitype(irkey->t) << 47) | irkey[1].tv.u64;
      scr = ra_allock(as, kk, allow);
      emit_nm(as, A64I_CMPx, scr, tmp);
      emit_lso(as, A64I_LDRx, tmp, dest, offsetof(Node, key.u64));
    } else {
      scr = ra_scratch(as, allow);
      emit_nm(as, A64I_CMPx, tmp, scr);
      emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key.u64));
    }
    rset_clear(allow, scr);
  } else {
    Reg type, scr;
    lua_assert(irt_ispri(kt) && !irt_isnil(kt));
    type = ra_allock(as, ~((int64_t)~irt_toitype(ir->t) << 47), allow);
    scr = ra_scratch(as, rset_clear(allow, type));
    rset_clear(allow, scr);
    emit_nm(as, A64I_CMPw, scr, type);
    emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key));
  }

  *l_loop = A64I_BCC | A64F_S19(as->mcp - l_loop) | CC_NE;

Here, the emit_* functions emit assembly instructions and the ra_* functions allocate registers. In the normal case everything is fine and the table lookup code is concise and effective. When there is register pressure however, things get interesting.

As an example, here is what a typical type lookup would look like:

0x100	ldr x1, [x16, #52]
0x104	cmp x1, x2
0x108	beq -> exit
0x10c	ldr x16, [x16, #16]
0x110	cmp x16, #0
0x114	bne 0x100

Here, x16 is the table that the loop traverses. x1 is a key, which if it matches, results in an exit to the interpreter. Otherwise the loop moves ahead until the end of the table. The comparison is done with a constant stored in x2.

The value of x2 is loaded later (i.e. earlier in the code; we are emitting code backwards, remember?) whenever that register is needed for reuse, through a process called restoration or spilling. In the restore case, it is loaded into the register as a constant or derived from another constant (look up constant rematerialisation), and in the case of a spill, the register is restored from a slot in the stack. If there is no register pressure, all of this restoration happens at the head of the trace, which is why, if you study a typical trace, you will notice a lot of constant loads at the top of the trace.

Like the Spill that ruined your keyboard the other day…

Things get interesting when the allocation of x2 in the loop above results in a restore. Looking at the code a bit closer:

    type = ra_allock(as, ~((int64_t)~irt_toitype(ir->t) << 47), allow);
    scr = ra_scratch(as, rset_clear(allow, type));
    rset_clear(allow, scr);
    emit_nm(as, A64I_CMPw, scr, type);
    emit_lso(as, A64I_LDRx, scr, dest, offsetof(Node, key));

The x2 here is type, which is a constant. If a register is not available, we have to make one available by either rematerializing or by restoring the register, which would result in something like this:

0x100   ldr x1, [x16, #52]
0x104   cmp x1, x2
0x108	mov x2, #42
0x10c   beq -> exit
0x110   ldr x16, [x16, #16]
0x114   cmp x16, #0
0x118   bne 0x100

This ends up breaking the loop because the allocator restore/spill logic assumes that the code is linear and the restore will affect only code that follows it, i.e. code that got generated earlier. To fix this, all of the register allocations should be done before the loop code is generated.

Making things right

The result of this analysis was this fix in my LuaJIT fork that allocates registers for operands that will be used in the loop before generating the body of the loop. That is, if the registers have to spill, they will do so after the loop (we are generating code in reverse order) and leave the loop compact. The fix is also in the luajit2 repository in the OpenResty project. The work was sponsored by OpenResty as this wonderfully vague bug could only be produced by some very complex scripts that are part of the OpenResty product.

Episode 161 - Human nature and ad powered open source

Posted by Open Source Security Podcast on September 16, 2019 12:00 AM
Josh and Kurt start out discussing human nature and how it affects how we view security. A lot of things that look easy are actually really hard. We also talk about the npm library Standard showing command line ads. Are ads part of the future of open source?


Show Notes

    It's time to talk about post-RMS Free Software

    Posted by Matthew Garrett on September 14, 2019 11:57 AM
    Richard Stallman has once again managed to demonstrate incredible insensitivity[1]. There's an argument that in a pure technical universe this is irrelevant and we should instead only consider what he does in free software[2], but free software isn't a purely technical topic - the GNU Manifesto is nakedly political, and while free software may result in better technical outcomes it is fundamentally focused on individual freedom and will compromise on technical excellence if otherwise the result would be any compromise on those freedoms. And in a political movement, there is no way that we can ignore the behaviour and beliefs of that movement's leader. Stallman is driving away our natural allies. It's inappropriate for him to continue as the figurehead for free software.

    But I'm not calling for Stallman to be replaced. If the history of social movements has taught us anything, it's that tying a movement to a single individual is a recipe for disaster. The FSF needs a president, but there's no need for that person to be a leader - instead, we need to foster an environment where any member of the community can feel empowered to speak up about the importance of free software. A decentralised movement about returning freedoms to individuals can't also be about elevating a single individual to near-magical status. Heroes will always end up letting us down. We fix that by removing the need for heroes in the first place, not attempting to find increasingly perfect heroes.

    Stallman was never going to save us. We need to take responsibility for saving ourselves. Let's talk about how we do that.

    [1] There will doubtless be people who will leap to his defense with the assertion that he's neurodivergent and all of these cases are consequences of that.

    (A) I am unaware of a formal diagnosis of that, and I am unqualified to make one myself. I suspect that basically everyone making that argument is similarly unqualified.
    (B) I've spent a lot of time working with him to help him understand why various positions he holds are harmful. I've reached the conclusion that it's not that he's unable to understand, he's just unwilling to change his mind.

    [2] This argument is, obviously, bullshit


    Cascade – a turn-based text arcade game

    Posted by Richard W.M. Jones on September 14, 2019 09:00 AM


    I wrote this game about 20 years ago. Glad to see it still compiled out of the box on the latest Linux distro! Download it from here. If anyone can remember the name or any details of the original 1980s MS-DOS game that I copied the idea from, please let me know in the comments.

    PulseCaster 0.9 released!

    Posted by Paul W. Frields on September 14, 2019 02:57 AM

    The post PulseCaster 0.9 released! appeared first on The Grand Fallacy.

    It says… It says, uh… “Virgil Brigman back on the air”.

    The Abyss, 1989 (J. Cameron)

    OK, I feel slightly guilty using a cheesy quote from James Cameron for this post. But not as guilty as I feel for leaving development of this tool to hang out so long.

    That’s right, there’s a brand new release of PulseCaster available out there — 0.9 to be exact. There are multiple fixes and enhancements in this version.

    (By the way… I don’t have experience packaging for Debian or Ubuntu. If you’re maintaining PulseCaster there and have questions, don’t hesitate to get in touch. And thank you for helping make PulseCaster available for users!)

    For starters, PulseCaster is now ported to Python 3. I used Python 3.6 and Python 3.7 to do the porting. Nothing in the code should be particular to either version, though. But you’ll need to have Python 3 installed to use it, as most Linux distros do these days.

    Another enhancement is that PulseCaster now relies on the excellent pulsectl library for Python, by George Filipkin and Mike Kazantsev. Hats off to them for doing a great job, which allowed me to remove many, many lines of code from this release.

    Also, due to the use of PyGObject3 in this release, there are numerous improvements that make it easier for me to hack on. Silly issues with the GLib mainloop and other entrance/exit stupidity are hopefully a bit better now.

    Also, the code for dealing with temporary files is now a bit less ugly. I still want to do more work on the overall design and interface, and have ideas. I’ve gotten way better at time management since the last series of releases and hope to do some of this over the USA holiday season this late fall and winter (but no promises).

    A new release should be available in Fedora’s Rawhide release by the time you read this, and within a few days in Fedora 31. Sorry, although I could bring back a Fedora 30 package, I’m hoping this will entice one or two folks to get on Fedora 31 sooner. So grab that when the Beta comes out and I’ll see you there!

    If you run into problems with the release, please file an issue in GitHub. I have fixed my mail filters so that I’ll be more responsive to them in the future.

    Photo by neil godding on Unsplash.

    FPgM report: 2019-37

    Posted by Fedora Community Blog on September 13, 2019 08:41 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. Fedora 31 Beta is go!

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


    Help wanted

    Upcoming meetings & test days

    Fedora 31


    • 17 September — Beta release
    • 8 October — Final freeze begins
    • 22 October — Final release preferred target


    Blocker bugs

    Bug ID  | Blocker status   | Component            | Bug status
    1749433 | Accepted (final) | gnome-control-center | NEW
    1750394 | Accepted (final) | gnome-control-center | NEW
    1749086 | Accepted (final) | kde                  | MODIFIED
    1751438 | Accepted (final) | LiveCD               | NEW
    1728240 | Accepted (final) | sddm                 | NEW
    1750805 | Proposed (final) | gnome-control-center | NEW
    1751673 | Proposed (final) | gnome-session        | NEW
    1749868 | Proposed (final) | gnome-software       | NEW
    1747408 | Proposed (final) | libgit2              | NEW
    1750036 | Proposed (final) | selinux-policy       | ASSIGNED
    1750345 | Proposed (final) | webkit2gtk3          | NEW
    1751852 | Proposed (final) | xfdesktop            | ASSIGNED

    Fedora 32



    Submitted to FESCo


    The post FPgM report: 2019-37 appeared first on Fedora Community Blog.

    GNOME Firmware 3.34.0 Release

    Posted by Richard Hughes on September 13, 2019 01:12 PM

    This morning I tagged the newest fwupd release, 1.3.1. There are a lot of new things in this release and a whole lot of polishing, so I encourage you to read the release notes if this kind of thing interests you.

    Anyway, to the point of this post. With the new fwupd 1.3.1 you can now build just the libfwupd library, which makes it easy to build GNOME Firmware (old name: gnome-firmware-updater) in Flathub. I tagged the first official release 3.34.0 to celebrate the recent GNOME release, and to indicate that it’s ready for use by end users. I guess it’s important to note this is just a random app hacked together by 3 engineers and not something lovingly designed by the official design team. All UX mistakes are my own :)

    GNOME Firmware is designed to be a not-installed-by-default power-user tool to investigate, upgrade, downgrade and re-install firmware.
    GNOME Software will continue to be used for updates as before. Vendor helpdesks can ask users to install GNOME Firmware rather than getting them to look at command line output.

    We need to polish up GNOME Firmware going forward, and add the last few features we need. If this interests you, please send email and I’ll explain what needs doing. We also need translations, although that can perhaps wait until GNOME Firmware moves to GNOME proper, rather than just being a repo in my personal GitLab. If anyone does want to translate it before then, please open merge requests, and be sure to file issues if any of the strings are difficult to translate or ambiguous. Please also file issues (or even better merge requests!) if it doesn’t build or work for you.

    If you just want to try out a new application, it takes 10 seconds to install it from Flathub.

    PHP version 7.2.23RC1 and 7.3.10RC1

    Posted by Remi Collet on September 13, 2019 08:10 AM

    Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.

    RPM of PHP version 7.3.10RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30-31 or remi-php73-test repository for Fedora 29 and Enterprise Linux.

    RPM of PHP version 7.2.23RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 29 or remi-php72-test repository for Enterprise Linux.


    PHP version 7.1 is now in security mode only, so no more RC will be released.

    Installation : read the Repository configuration and choose your version.

    Parallel installation of version 7.3 as Software Collection:

    yum --enablerepo=remi-test install php73

    Parallel installation of version 7.2 as Software Collection:

    yum --enablerepo=remi-test install php72

    Update of system version 7.3:

    yum --enablerepo=remi-php73,remi-php73-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module enable php:remi-7.3
    dnf --enablerepo=remi-modular-test update php\*

    Update of system version 7.2:

    yum --enablerepo=remi-php72,remi-php72-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module enable php:remi-7.2
    dnf --enablerepo=remi-modular-test update php\*

    Notice: version 7.3.10RC1 in Fedora rawhide for QA.

    EL-7 packages are built using RHEL-7.6.

    Packages of 7.4.0RC1 are also available.

    RC version is usually the same as the final version (no change accepted after RC, except for security fixes).

    Software Collections (php72, php73)

    Base packages (php)

    nbdkit supports exportnames

    Posted by Richard W.M. Jones on September 13, 2019 08:02 AM

    (You’ll need the very latest version of libnbd and nbdkit from git for this to work.)

    The NBD protocol lets the client send an export name string to the server. The idea is a single server can serve different content to clients based on a requested export. nbdkit has largely ignored export names, but we recently added basic support upstream.

    One consequence of this is you can now write a shell plugin which reflects the export name back to the client:

    $ cat export.sh
    #!/bin/bash -
    case "$1" in
        open) echo "$3" ;;
        get_size) LC_ALL=C echo ${#2} ;;
        pread) echo "$2" | dd skip=$4 count=$3 iflag=skip_bytes,count_bytes ;;
        *) exit 2 ;;
    esac
    $ chmod +x export.sh
    $ nbdkit -f sh export.sh

    The size of the disk is the same as the export name:

    $ nbdsh -u 'nbd://localhost/fooooo' -c 'print(h.get_size())'
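    Since the get_size callback returns ${#2}, the length of the handle string (which open set to the export name), the reported disk size is just the length of the name. For the export used here:

```shell
# "fooooo" is six bytes long, so the NBD disk is six bytes.
name='fooooo'
echo "${#name}"   # prints 6
```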

    The content of the disk is the export name itself:

    │ f o o o o o │

    Not very interesting in itself. But we can now pass the content of small disks entirely in the export name. Using a slightly more advanced plugin which supports base64-encoded export names (so we can pass in NUL bytes):

    $ cat export-b64.sh
    #!/bin/bash -
    case "$1" in
        open) echo "$3" ;;
        get_size) echo "$2" | base64 -d | wc -c ;;
        pread) echo "$2" | base64 -d |
               dd skip=$4 count=$3 iflag=skip_bytes,count_bytes ;;
        can_write) exit 0 ;;
        pwrite) exit 1 ;;
        *) exit 2 ;;
    esac
    $ chmod +x export-b64.sh
    $ nbdkit -f sh export-b64.sh

    We can pass in an entire program to qemu:

    qemu-system-x86_64 -fda 'nbd:localhost:10809:exportname=uBMAzRD8uACgjtiOwLQEo5D8McC5SH4x//OriwVAQKuIxJK4AByruJjmq7goFLsQJbELq4PAFpOr/sST4vUFjhWA/1x167+g1LEFuAQL6IUBg8c84vW+lvyAfAIgci3+xYD9N3SsrZetPCh0CzwgdQTGRP4o6F4Bgf5y/XXbiPAsAnLSNAGIwojG68qAdAIIRYPlB1JWVXUOtADNGjsWjPx09okWjPy+gPy5BACtPUABl3JD6BcBge9CAYoFLCByKVZXtAT25AHGrZfGBCC4CA7oAgFfXusfrQnAdC09APCXcxTo6ACBxz4BuAwMiXz+gL1AAQt1BTHAiUT+gD0cdQbHBpL8OAroxgDizL6S/K0IwHQMBAh1CLQc/g6R/HhKiUT+izzorgB1LrQCzRaoBHQCT0+oCHQCR0eoA3QNgz6A/AB1Bo1FCKOA/Jc9/uV0Bz0y53QCiQRdXlqLBID6AXYKBYACPYDUchvNIEhIcgODwARQ0eixoPbx/syA/JRYcgOAzhaJBAUGD5O5AwDkQDz8cg2/gvyDPQB0A6/i+Ikd6cL+GBg8JDx+/yQAgEIYEEiCAQC9234kPGbDADxa/6U8ZmYAAAAAAAAAAHICMcCJhUABq8NRV5xQu6R9LteTuQoA+Ij4iPzo4f/Q4+L1gcdsAlhAqAd14J1fWcNPVao='


    GNOME 3.34 released — coming soon in Fedora 31

    Posted by Fedora Magazine on September 13, 2019 08:00 AM

    Today the GNOME project announced the release of GNOME 3.34. This latest release of GNOME will be the default desktop environment in Fedora 31 Workstation. The Beta release of Fedora 31 is currently expected in the next week or two, with the Final release scheduled for late October.

    GNOME 3.34 includes a number of new features and improvements. Congratulations and thank you to the whole GNOME community for the work that went into this release! Read on for more details.

    <figure class="wp-block-image"><figcaption> GNOME 3.34 desktop environment at work</figcaption></figure>

    Notable features

    The desktop itself has been refreshed with a pleasing new background. You can also compare your background images to see what they’ll look like on the desktop.

    There’s a new custom application folder feature in the GNOME Shell Overview. It lets you combine applications in a group to make it easier to find the apps you use.

    You already know that Boxes lets you easily download an OS and create virtual machines for testing, development, or even daily use. Now you can find sources for your virtual machines more easily, and boot from CD or DVD (ISO) images more easily as well. There is also an Express Install feature that now supports Windows versions.

    Now that you can save states when using GNOME Games, gaming is more fun. You can snapshot your progress without getting in the way of the fun. You can even move snapshots to other devices running GNOME.

    More details

    These are not the only features of the new and improved GNOME 3.34. For an overview, visit the official release announcement. For even more details, check out the GNOME 3.34 release notes.

    The Fedora 31 Workstation Beta release is right around the corner. Fedora 31 will feature GNOME 3.34 and you’ll be able to experience it in the Beta release.

    Update on Easy PXE boot testing post: minus PXELINUX

    Posted by Dusty Mabe on September 13, 2019 12:00 AM
    Introduction

    This is an update to my previous post about easily testing PXE booting by using libvirt + iPXE. Several people have notified me (thanks Lukas Zapletal and others) that instead of leveraging PXELINUX I could just use an iPXE script to do the same thing. I hadn’t used iPXE much, so here’s an update on how to achieve the same goal using an iPXE script instead of a PXELINUX binary+config.

    Insider 2019-09: syslog-ng basics; relays; NGINX; Tic-Tac-Toe; sudo; Elastic stack 7; GitHub;

    Posted by Peter Czanik on September 12, 2019 11:02 AM

    Dear syslog-ng users,

    This is the 75th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.


    Building blocks of syslog-ng

    Recently I gave a syslog-ng introductory workshop at Pass the SALT conference in Lille, France. I got a lot of positive feedback, so I decided to turn all that content into a blog post. Naturally, I shortened and simplified it, but still managed to get enough material for multiple blog posts.

    This one gives you an overview of syslog-ng, its major features and an introduction to its configuration.


    What syslog-ng relays are good for

    While there are some users who run syslog-ng as a stand-alone application, the main strength of syslog-ng is central log collection. In this case the central syslog-ng instance is called the server, while the instances sending log messages to the central server are called the clients. There is a (somewhat lesser known) third type of instance called the relay, too. The relay collects log messages via the network and forwards them to one or more remote destinations after processing (but without writing them onto the disk for storage). A relay can be used for many different use cases. We will discuss a few typical examples below.
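    As a rough illustration of the idea (the hostname and ports here are made up), a relay's configuration boils down to a network source feeding a network destination:

    ```
    # minimal relay sketch: receive on UDP 514, forward over TCP
    source s_net { network(ip(0.0.0.0) port(514) transport("udp")); };
    destination d_server { network("central.example.com" port(514) transport("tcp")); };
    log { source(s_net); destination(d_server); };
    ```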


    Visualizing NGINX or Apache access logs in Kibana

    This tutorial shows you how to parse NGINX or Apache access logs with syslog-ng and create ECS compatible data in Elasticsearch.


    syslog-ng Tic-Tac-Toe

    You can play the game of Tic-Tac-Toe using syslog-ng 3.22.1 or later. Learn how to configure syslog-ng for that:


    Alerting on sudo events using syslog-ng

    Why use syslog-ng to alert on sudo events? At the moment, alerting in sudo is limited to E-mail. Using syslog-ng, however, you can send alerts (more precisely, selected logs) to a wide variety of destinations. Logs from sudo are automatically parsed by recent (3.13+) syslog-ng releases, enabling fine-grained alerting. There is a lot of hype around our new Slack destination, so that is what I’ll show here. Naturally, there are many others available as well, including Telegram and, of course, good old E-mail. If something is not yet directly supported by syslog-ng, you can often utilize an HTTP API or write some glue code in Python.

    From this blog post you can learn how to build up a syslog-ng configuration step by step and how to use different filters to make sure that you only receive logs (i.e. alerts) that are truly relevant for you.
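    As a hedged sketch of the general idea (the blog post itself uses the Slack destination; the URL and message template below are invented), filtering sudo messages and handing them to an HTTP endpoint can look like this:

    ```
    source s_local { system(); internal(); };
    filter f_sudo { program("sudo"); };
    destination d_alert {
        http(url("https://alerts.example.com/hook")
             method("POST")
             body("sudo event: ${MESSAGE}"));
    };
    log { source(s_local); filter(f_sudo); destination(d_alert); };
    ```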


    GitHub and syslog-ng

    As many of you know, the source code of syslog-ng is available on GitHub, just like its issue tracker. We just learned that GitHub itself is running syslog-ng as part of its stack: https://help.github.com/en/enterprise/2.18/admin/installation/log-forwarding

    syslog-ng with Elastic Stack 7

    For many years, anything I wrote about syslog-ng and Elasticsearch was valid for all available versions. Well, not anymore. With version 7 of Elasticsearch, there are some breaking changes, mostly related to the fact that Elastic is phasing out type support. This affects the mapping (the _default_ keyword is no longer used) as well as the syslog-ng configuration (even though type() is a mandatory parameter, you should leave it empty).

    This blog post is a rewrite of one of my earlier blog posts (about creating a heat map using syslog-ng + Elasticsearch + Kibana), focusing on the changes and the new elasticsearch-http() destination:
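    For illustration (the URL and index name are placeholders), a destination with an empty type() looks roughly like this:

    ```
    destination d_elastic {
        elasticsearch-http(
            url("http://localhost:9200/_bulk")
            index("syslog-ng")
            type("")   # mandatory parameter, but left empty for Elasticsearch 7
        );
    };
    ```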




    Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

    GSoC summer 2019: Fedora Gooey Karma

    Posted by Fedora Community Blog on September 12, 2019 06:41 AM
    Fedora Summer Coding 2019

    This blog post summarises my journey through Google Summer of Code (GSoC) with the Fedora community. The journey started the day I mailed my mentor about the project, and it was a hell of a ride for sure. Let's get started.

    About Me

    Name – Zubin Choudhary



    I am a 3rd-year B.Tech student at Bennett University (India). I have been interested in making and breaking stuff since forever. Gaming got me into programming, and I've been writing small scripts to automate things for the last couple of years. I stumbled upon GitHub only to be amazed by the concept of sharing your code with strangers on the internet for no direct profit whatsoever. Later, realising the beauty of open source made me fall in love with it: I switched to Linux, and to open source alternatives for all the software I used at the time. Years later I decided to give back to the community, and GSoC was the perfect opportunity for that.

    Getting started

    The day the GSoC projects list was published, I started sorting out all the organizations I'd enjoy working with. Being a Linux user and enthusiast, I filtered the list down to a bunch of Linux distros and desktop environments. Sorting through all the projects, Fedora Gooey Karma seemed to be the one that best suited my skills.

    Once I was sure that Fedora Gooey Karma was a project I would love to work on during the summer, I mailed @sumantro about it, and we discussed the project over email.


    My project was Fedora Gooey Karma. The aim of this project was to provide a simple, fast user interface for testers to submit karma on Bodhi.

    Using the web interface for Bodhi isn't very efficient: we all end up opening a bunch of tabs, verifying our packages multiple times, and cross-checking and submitting takes a ton of extra effort, which eventually keeps a lot of people from testing packages.

    This application has an intuitive UI that lets users look for packages to test, look up their details and bugs, and post karma on the Bodhi platform.

    The user interface looks like this. Thanks to the Fedora Design team for the mockup.


    This project was previously built on Python 2 and the old Bodhi bindings. The original plan was to port the application from Python 2 to Python 3 and make it work again, but after looking at the codebase I decided that rewriting the code would be much easier.


    The decided aims for this project were:

    1. Application ported to Bodhi2 API and Python3
    2. Revamped UI built with QT
    3. Application ready to be packaged and shipped

    But now that I decided to rewrite everything from scratch there were different challenges that I had to face.

    Some of them would be:

    1. The Bodhi bindings changed a lot; a bunch of features that the Gooey Karma project required were added and removed.
    2. Being new to Qt UI development, it took me some time to get used to it.


    A rough timeline for the project goes like this:

    Week 1 and Week 2

    • Initialising python3 port
    • Learning about the bodhi and FAS bindings

    Week 3 and Week 4

    • Initialising the Qt user interface
    • Writing the bodhi bindings

    Week 5 and Week 6

    • Writing login controller
    • Writing functions to fetch all data and display

    Week 7 and Week 8

    • Writing functions to post karma
    • Fixing bugs

    Week 9 and Week 10


    The project isn’t fully completed yet, I will keep working on this project for a while implementing features like fetching related packages, listing only installed packages, etc.

    Also, while presenting the project at the showcase, Stephen Gallagher requested that the project be ported to Cockpit as well.


    I've had a wonderful time at Fedora since I began contributing this summer. I was later invited to the Flock to Fedora conference, where I met a lot of contributors. This was a summer I will never forget.

    The post GSoC summer 2019: Fedora Gooey Karma appeared first on Fedora Community Blog.

    Unit-testing static functions in C

    Posted by Peter Hutterer on September 12, 2019 04:21 AM

    An annoying thing about C code is that there are plenty of functions that cannot be unit-tested by some external framework - specifically anything declared as static. Any larger code-base will end up with hundreds of those functions, many of which are short and reasonably self-contained but complex enough to not trust them by looks only. But since they're static I can't access them from the outside (and "outside" is defined as "not in the same file" here).

    The approach I've chosen in the past is to move the more hairy ones into separate files or at least declare them normally. That works but is annoying for some cases, especially those that really only get called once. In case you're wondering whether you have at least one such function in your source tree: yes, the bit that parses your commandline arguments is almost certainly complicated and not tested.

    Anyway, this week I've finally found the right combination of hacks to make testing static functions easy, and it's:

    • #include the source file in your test code.
    • Mock any helper functions you'd need to trick the called functions
    • Instruct the linker to ignore unresolved symbols

    And boom, you can write test cases to only test a single file within your source tree. And without any modifications to the source code itself.

    A more detailed writeup is available in this github repo.

    For the impatient, the meson snippet for a fictional source file example.c would look like this:

    test_example = executable('test-example',
                              'example.c', 'test-example.c',
                              dependencies: [dep_ext_library],
                              link_args: ['-Wl,--unresolved-symbols=ignore-all'],
                              install: false)

    There is no restriction on which test suite you can use. I've started adding a few test cases based on this approach to libinput and so far it's working well. If you have a better approach or improvements, I'm all ears.

    Wherefore Art Thou CentOS 8?

    Posted by Scott Dowdle on September 11, 2019 09:04 PM

    UPDATE: CentOS announced on their twitter account that CentOS 8 will be released on Sept. 24th.

    IBM's Red Hat Enterprise Linux 8 (and I'm not sure if Red Hat likes me putting IBM in front of it or not) was released on May 7th, 2019.  I write this on Sept. 11th, 2019 and CentOS 8 still isn't out.  RHEL 7.7 came out on August 6, 2019.  In an effort to be transparent, CentOS does have wiki pages for both Building_8 and Building_7 where they enumerate the various steps they have to go through to get the final product out the door.

    Up until early August they were making good progress on CentOS 8.  In fact, they had made it to the last step, titled "Release work", which had a Started date of "YYYY-MM-DD", an Ended date of "YYYY-MM-DD", and a Status of "NOT STARTED YET".  That was fine for a while, but then almost a month passed with the NOT STARTED YET status.  If you are like me, when they had completed every step but the very last, you were thinking that the GA release would be available Real-Soon-Now... but after waiting a month, not so much.

    It was also obvious that CentOS had started work on the 7.7 update, and the status indicators for that have progressed nicely, but they still have a ways to go.  Of course, one of the hold-ups is that they have quite a few arches to support (more than Red Hat themselves), even though their most-used platform (x86_64) had its Continuous Release (CR) repository populated and released on August 30th, 2019.  There is still a ways to go on 7.7, but they are generally much quicker with the point update releases.

    Users started complaining on the CentOS Devel mailing list harkening back to an earlier time in CentOS' history where they lagged way behind.  There were lots of responses to that thread, many thanking the CentOS developers for all of their hard work, some name calling, and a lot of back and forth with plenty of repetition.  Everyone understands that it takes a while for a major new release to come out and it'll be done when it is good and ready... however... the main complaint was that the development team (which long-time CentOS developer Johnny Hughes Jr. said numbered 3 people) wasn't being transparent enough given the fact that the wiki pages hadn't been updated in some time.  Johnny Hughes finally explained the reason 8 has stalled:

    WRT CentOS 8 .. it has taken a back seat to 7.7.1908.  Millions of users already use CentOS Linux 7.  Those people needs updates.

    That totally makes sense, doesn't it?  Everyone was happy with that answer... and I updated the Building_8 wiki page to reflect that by changing the status to, "Deferred for 7.7 work" and adding a note that said, "2019-09-10 According to this thread, work was stopped on CentOS 8 after upstream released 7.7. Since so many more users have CentOS 7.x in production, and no one has 8 yet, priority has been given to the 7.7 update... and once it is done, work will continue on 8."

    Someone asked JH Jr. if they could use some help and he said that building the packages was easy enough and there wasn't really a way to speed it up... but testing all of the packages, especially all of the various arches, was a way the greater community could help.  That was a poor summary so if interested I encourage you to read the full thread.

    While I'm definitely looking forward to the release of CentOS 8, I understand the 7.7 release takes priority and I now better know what to expect.  As has been said so many times, thanks for all of the hard work devs, it is appreciated.


    Please welcome Acer to the LVFS

    Posted by Richard Hughes on September 11, 2019 11:44 AM

    Acer has now officially joined the LVFS, promoting the Aspire A315 firmware to stable.

    Acer has been testing the LVFS for some time and now all the legal and technical checks have been completed. Other models will follow soon!

    Announcing Linux Autumn 2019

    Posted by Rafał Lużyński on September 11, 2019 09:59 AM

    Summer is not yet over (in my climate zone) but it's time to think about the autumn. Yes, I mean the Linux Autumn, the annual Polish conference of Linux and free software enthusiasts organized by PLUG. I have written about this event many times in the past, and I don't want to bore you with the same things again. This year we hope to invite more foreign guests and make the conference more international, possibly with one day full of English talks.

    It's going to be the 17th edition. The venue is exactly the same as last year: the Gwarek Hotel in Ustroń, southern Poland, and the dates, as usual, span a weekend: November 29th to December 1st. Attendee registration is open until November 21st, but talk proposals must be submitted by September 15th. That leaves little time, so hurry.

    Remember that attending the conference costs money, which is spent on accommodation and food for everyone. Why would I write about a paid and not strictly Fedora-oriented event in an article for Fedora Planet? First of all, participation (including accommodation and food) is fully refunded for speakers. I'm not encouraging you to attend a paid event, although you are most welcome if you want to. I'm encouraging you to give a talk and take part in a three-day event for free. Second, this is a Linux event and Fedora is still a Linux distribution. Third, as we all know, many Fedora contributors live and work in the Czech Republic, especially in Brno, and this event is organized in Poland just across the Czech border. It cannot be closer.

    How to arrive

    There are more details on the organizer’s page but here is a small summary:

    • From Poland: Ustroń is a tourist resort so it has good public transport connection with the rest of the country. It’s very easy to arrive by train or by bus from Katowice. Alternatively you can reach Bielsko-Biała or Cieszyn first and then continue by some local public transport. Of course, if you travel by your own car nothing limits you.
    • From the Czech Republic: first you must reach Český Těšín; from there, only 25 km (16 miles) are left to Ustroń. It's easy to find local public transport. You can also ask the organizers for help (e.g., someone may give you a ride by car; that's really not a problem).
    • From Slovakia: you may choose to travel via the Czech Republic and Český Těšín but you can also choose Skalité/Zwardoń border crossing.
    • From other countries: if you travel by plane you should choose Krakow or Katowice airport. Both are located about 120 km (75 miles) from Ustroń. Actually the nearest airport is Ostrava but it’s a small airport and I doubt you will find a convenient flight. Of course, you may also prefer to fly to Prague, Brno, or even Vienna and travel across the Czech Republic by train or by bus according to the guidelines above.

    Please come, because it's really worth it. Those who have participated say the conference is really interesting, and that it's too bad so few people know about it.

    In order to register please visit the organizers’ website.

    How to set up a TFTP server on Fedora

    Posted by Fedora Magazine on September 11, 2019 08:00 AM

    TFTP, or Trivial File Transfer Protocol, allows users to transfer files between systems using the UDP protocol. By default, it uses UDP port 69. The TFTP protocol is extensively used to support remote booting of diskless devices. So, setting up a TFTP server on your own local network can be an interesting way to do Fedora installations, or other diskless operations.

    TFTP can only read and write files to or from a remote system. It doesn’t have the capability to list files or make any changes on the remote server. There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN).
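    To get a feel for how trivial the protocol really is, here is a small sketch (the file name is arbitrary) of what a TFTP read request looks like on the wire, per RFC 1350:

    ```python
    import struct

    def rrq_packet(filename, mode="netascii"):
        """Build a TFTP read request (RRQ) per RFC 1350: a 2-byte opcode (1),
        then the file name and transfer mode as NUL-terminated strings."""
        return (struct.pack("!H", 1)
                + filename.encode("ascii") + b"\x00"
                + mode.encode("ascii") + b"\x00")

    # A client sends this single datagram to UDP port 69 on the server,
    # which then replies with DATA packets of up to 512 bytes each.
    print(rrq_packet("server.logs"))  # b'\x00\x01server.logs\x00netascii\x00'
    ```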

    TFTP server installation

    The first thing you will need to do is install the TFTP client and server packages:

    dnf install tftp-server tftp -y

    This creates a tftp service and socket file for systemd under /usr/lib/systemd/system.


    Next, copy and rename these files to /etc/systemd/system:

    cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
    cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket

    Making local changes

    You need to edit these files from the new location after you’ve copied and renamed them, to add some additional parameters. Here is what the tftp-server.service file initially looks like:

    [Unit]
    Description=Tftp Server
    Requires=tftp.socket
    Documentation=man:in.tftpd

    [Service]
    ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
    StandardInput=socket

    [Install]
    Also=tftp.socket

    Make the following changes to the [Unit] section, so the service requires the renamed socket unit:

    Requires=tftp-server.socket

    Make the following changes to the ExecStart line:

    ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot

    Here are what the options mean:

    • The -c option allows new files to be created.
    • The -p option is used to have no additional permissions checks performed above the normal system-provided access controls.
    • The -s option is recommended for security, as well as for compatibility with some boot ROMs which cannot easily be made to include a directory name in their requests.

    The default upload/download location for transferring the files is /var/lib/tftpboot.

    Next, make the following changes to the [Install] section:

    WantedBy=multi-user.target
    Also=tftp-server.socket

    Don’t forget to save your changes!

    Here is the completed /etc/systemd/system/tftp-server.service file:

    [Unit]
    Description=Tftp Server
    Requires=tftp-server.socket
    Documentation=man:in.tftpd

    [Service]
    ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
    StandardInput=socket

    [Install]
    WantedBy=multi-user.target
    Also=tftp-server.socket

    Starting the TFTP server

    Reload the systemd daemon:

    systemctl daemon-reload

    Now start and enable the server:

    systemctl enable --now tftp-server

    To change the permissions of the TFTP server to allow upload and download functionality, use this command. Note TFTP is an inherently insecure protocol, so this may not be advised on a network you share with other people.

    chmod 777 /var/lib/tftpboot

    Configure your firewall to allow TFTP traffic:

    firewall-cmd --add-service=tftp --permanent
    firewall-cmd --reload

    Client Configuration

    Install the TFTP client:

    dnf install tftp -y

    Run the tftp command to connect to the TFTP server. Here is an example that enables the verbose option:

    [client@thinclient:~ ]$ tftp
    tftp> verbose
    Verbose mode on.
    tftp> get server.logs
    getting from to server.logs [netascii]
    Received 7 bytes in 0.0 seconds [inf bits/sec]
    tftp> quit
    [client@thinclient:~ ]$ 

    Remember, TFTP does not have the ability to list file names. So you’ll need to know the file name before running the get command to download any files.

    Photo by Laika Notebooks on Unsplash.

    [F31] Take part in the internationalization Test Day

    Posted by Charles-Antoine Couret on September 10, 2019 09:41 PM

    This week, starting September 9th, is dedicated to a specific test topic: Fedora internationalization. During the development cycle, the quality assurance team dedicates a few days to particular components or new features in order to surface as many issues as possible on the subject.

    It also provides a list of specific tests to perform. You just need to follow them, compare your result to the expected result, and report it.

    What does this test involve?

    As with every Fedora release, updating its tools often introduces new strings to translate and new tools related to language support (Asian languages in particular).

    To promote the use of Fedora in every country in the world, it is best to make sure that everything related to Fedora's internationalization is tested and working. In particular, part of it must work out of the box from the installation LiveCD (that is, without updates).

    Today's tests cover:

    • That ibus works correctly for handling keyboard input;
    • Font customization;
    • Automatic installation of language packs for installed software, based on the system language;
    • Working default translation of applications;
    • The new language pack dependencies for installing the necessary fonts and input methods.

    Of course, given these criteria, unless you know a Chinese language, not all of the tests can necessarily be performed. But as French speakers, many of these issues concern us, and reporting problems is important: no other language community will identify the integration problems of the French language for us.

    How to take part?

    You can visit the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

    If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French, respectively) on the Freenode server.

    If you hit a bug, you need to report it on Bugzilla. If you don't know how, don't hesitate to consult the corresponding documentation.

    Finally, even though a single day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

    Towards a UX Strategy for GNOME (Part 2)

    Posted by Allan Day on September 10, 2019 03:43 PM

    This post is a part of a short series, in which I’m setting out what I think could be the beginnings of a UX strategy for GNOME. In this, the second post, I’m going to describe a potential GNOME UX strategy in high-level terms. These goals are a response to the research and analysis that was described in the previous post and, it is hoped, point the way forward for how GNOME can achieve new success in the desktop market.

    Strategic goals

    For me, the main goals of a GNOME UX strategy could be:

    1. Deliver quality

    If GNOME is going to succeed in today’s desktop market, UX quality has to be job #1.

    UX quality includes what the software looks like and how it is designed, but it also refers to how the software functions. Performance and bugs (or the lack of them) are both aspects of UX!

    More than anything else, people are looking for a desktop that Just Works: they want a solution that allows them to get their work done without getting in the way. This means having a desktop that is reliable and stable, which does what people want, and which is easy to use.

    People value solutions that Just Work. They’re also prepared to abandon them when they don’t Just Work.

    To its credit, the GNOME project has historically recognised the importance of Just Works, and it has delivered huge improvements in this area. However, there is still a lot of work to be done.

    My sense is that driving up quality is one of the key strategic challenges that the GNOME project needs to face up to; I’ll be returning to this topic!

    2. Factor in the cloud

    In my previous post, I wrote about how the cloud has reconfigured the landscape in which GNOME operates. Accordingly, it's important for our UX strategy to account for the cloud. There are various ways we can do this:

    • Focus on those bits of the desktop that are used by all users, even if they mainly use a web browser. This includes all the parts of the core system, as well as the most essential desktop apps.
    • Enable and encourage native cloud applications (including Electron and Progressive Web Apps)
    • Add value with high-quality native apps.
    • Integrate with existing cloud services, when it is safe to do so.

    The last point might seem counter-intuitive, but it makes sense: in a world where the web is dominant, a fantastic set of native apps can be a powerful differentiator.

    At the same time, GNOME needs to be careful when it comes to directly competing with sophisticated web apps and services, and it needs to recognise that, nowadays, many apps aren’t worth doing if they don’t have a cloud/cross-device component.

    3. Grow the app ecosystem

    The primary purpose of a platform like GNOME is to run apps, so it stands to reason that the number and quality of the apps that are available for the platform is of critical importance.

    Recently, Flatpak has allowed the GNOME project to make great progress around application distribution, and this is already positively impacting app availability for GNOME. However, there is a lot of work still to be done, particularly around GNOME’s application development platform. This includes work for both designers and developers.

    4. Support modern hardware

    One of the things that my research revealed is that, for most users, their choice of desktop OS is thoroughly entwined with hardware purchasing choices, with hardware and software typically being seen as part of the same package. Attracting users to GNOME therefore requires that GNOME be available for, work well with, and be associated with high-quality hardware.

    A lot of hardware enablement work is done by distros, but a lot also happens in GNOME, including things like high-definition display support, touchscreen support, screen casting, and more. This is important work!

    Do less, prioritise

    Any UX strategy should address the question of prioritisation: it ought to be able to determine how resources can be directed in order to have maximum impact. This is particularly important for the GNOME project, because its resources are limited: the core community is fairly small, and there’s a lot of code to maintain.

    The idea of prioritisation has therefore both influenced the goals I’ve set out above, as well as how I’ve been trying to put them into practice.

    When thinking about prioritisation in the context of GNOME UX, there are various principles that we can follow, including:

    • User exposure, both in terms of the proportion of people that use a feature, and also the frequency with which they use it. Improvements to features that everyone uses all the time have a bigger impact than improvements to features that are only used occasionally by a subset of the user base.
    • User needs and desires: features that are viewed as being highly attractive by a lot of people are more impactful than those which are only interesting to a small subset.
    • Common underpinnings: we can prioritise by focusing on common subsystems and technical components. The key example here is something like GTK, where improvements can surface themselves in all the apps that use the toolkit.

    When we decide which design and development initiatives we want to focus on (either by working on them ourselves, or advertising them to potential contributors), principles like these, along with the high-level goals that I’ve described above, can be very helpful.

    I also believe that, in some cases, the GNOME project needs to have some hard conversations, and think about giving up some of its existing software. If quality is job #1, one obvious answer is to reduce the amount of software we care about, in order to increase the overall quality of everything else. This is particularly relevant for those parts of our software that don’t have great quality today.

    Of course, these kinds of conversations need to be handled delicately. Resources aren’t fungible in an upstream project like GNOME, and contributors can and should be free to work on what they want.

    What’s next

    In my previous post, I described the research and analysis that serves as inputs to the strategy I’m setting out. In this post I’ve translated that background into a high-level plan: four strategic goals, and an overarching principle of prioritisation.

    In the next post, I’m going to introduce a raft of design work which I think fits into the strategy that I’ve started to lay out. GNOME is lucky to have a great design team at the moment, which has been pumping out high-quality design work over the past year, so this is a great opportunity to showcase what we’ve been doing, but what I want to do is also show how it fits into context.

    Flock to Fedora ’19

    Posted by Fedora Community Blog on September 10, 2019 06:37 AM

    Attending a tech conference is not something I had experienced before, but I'm sure I'll keep doing so forever. Flock '19 was an amazing one to start with; meeting a flock with the same interests always makes for an amazing time. I'll share some of the things that I took away from Flock to Fedora '19.

    The community planned a ton of talks for everyone to attend; unfortunately, it was impossible to attend all of them. These are the talks that I decided to attend.

    State of Fedora

    This talk gave me insight into the current state of Fedora: what's coming up and what's going wrong.

    Facebook loves Fedora

    During this one, I got to learn how corporations look at an open source operating system: what they look for and how they implement it.

    Improving Packaging Experience with Automation

    This talk was about automating the build process on Koji. It turned into a sort of debate regarding storage management and the way Koji works.

    Fedora CoreOS

    I had heard about CoreOS before, but this talk introduced me to how CoreOS works and what it is used for.

    Fedora IoT

    Being interested in IoT, I was automatically attracted to this talk, and I got to know about the Fedora effort.

    Fedora RISC-V

    In this talk, we discussed the state of Fedora running on the RISC-V architecture, and I got to know about the work put into porting systems to another architecture.

    Fedora Summer Coding 2019 Project Showcase and Meetup

    This was the last event, where all the summer interns showcased their projects.

    Besides the talks, I met a bunch of great developers and designers, and a lot of people that I really loved hanging out with. I really like the concept of using lanyards and stickers to signal who’s fine talking to you, and the fact that you can simply go talk to anyone with a green sticker on their badge.

    It was definitely an awesome experience, I’d really love to attend another Flock conference.

    The post Flock to Fedora ’19 appeared first on Fedora Community Blog.

    Exciting few weeks in the SecureDrop land

    Posted by Kushal Das on September 10, 2019 04:52 AM

    Eric Trump tweet

    Last week there was an interesting tweet from Eric Trump, son of US President Donald Trump, in which he points out how Mr. David Fahrenthold, a journalist from The Washington Post, did some old-school journalism and made sure that every Trump Organization employee knows how to securely leak information or talk to a journalist via SecureDrop.

    I want to say thank you to him for this excellent advertisement for our work. Many people on Twitter cheered him for this tweet.


    If you don’t know what SecureDrop is, it is an open-source whistleblower submission system that media organizations and NGOs can install to securely accept documents from anonymous sources. It was originally created by the late Aaron Swartz and is now managed by Freedom of the Press Foundation. It is mostly written in Python and uses a lot of Ansible. Jennifer Helsby, the lead developer of SecureDrop, and I took part in this week’s Python podcast along with our host Tobias. You can listen to it to learn about many upcoming features and plans.

    If you are interested in contributing to the SecureDrop project, come over to our Gitter channel and say hello.


    Last month, during DEF CON 27, there was a panel about helping hackers anonymously submit bugs to the government. Interestingly, the major suggestion in that panel was to use SecureDrop (hosted by DEF CON) so that researchers can safely submit vulnerabilities to the US government. Watch the full panel discussion to learn more.

    Inkscape 1.0 Beta

    Posted by Gwyn Ciesla on September 09, 2019 06:33 PM

    Fresh and hot in f32! Come test and enjoy!

    Gthree – ready to play

    Posted by Alexander Larsson on September 09, 2019 09:07 AM

    Today I made a new release of Gthree, version 0.2.0.

    Newly added in this release is support for Raycaster, which is important if you’re making interactive 3D applications. For example, it’s used if you want clicks on the window to pick a 3D object from the scene. See the interactive demo for an example of this.

    Also new is support for shadow maps. This allows objects between a light source and a target to cast shadows on the target. Here is an example from the demos:

    I’ve been looking over the list of features that we support, and in this release I think all the major things you might want to do in a 3D app are supported to at least a basic level.

    So, if you ever wanted to play around with 3D graphics, now would be a great time to do so. Maybe just build the project and study or tweak the code in the examples subdirectory. That will give you a decent introduction to what is possible.

    If you just want to play, I added a couple of new features to gnome-hexgl based on the new release. Check out how the track casts shadows on the buildings!

    Firefox 69 available in Fedora

    Posted by Fedora Magazine on September 09, 2019 08:00 AM

    When you install Fedora Workstation, you’ll find the world-renowned Firefox browser included. The Mozilla Foundation underwrites work on Firefox, as well as other projects that promote an open, safe, and privacy-respecting Internet. Firefox already features a fast browsing engine and numerous privacy features.

    A community of developers continues to improve and enhance Firefox. The latest version, Firefox 69, was released recently and you can get it for your stable Fedora system (30 and later). Read on for more details.

    New features in Firefox 69

    The newest version of Firefox includes Enhanced Tracking Protection (or ETP). When you use Firefox 69 with a new (or reset) settings profile, the browser makes it harder for sites to track your information or misuse your computer resources.

    For instance, less scrupulous websites use scripts that cause your system to do lots of intense calculations to produce cryptocurrency, a practice called cryptomining. Cryptomining happens without your knowledge or permission and is therefore a misuse of your system. The new standard setting in Firefox 69 prevents sites from engaging in this kind of abuse.

    Firefox 69 has additional settings to prevent sites from identifying or fingerprinting your browser for later use. These improvements give you additional protection from having your activities tracked online.

    Another common annoyance is videos that start in your browser without warning. Video playback also uses extra CPU power and you may not want this happening on your laptop without permission. Firefox already stops this from happening using the Block Autoplay feature. But Firefox 69 also lets you stop videos from playing even if they start without sound. This feature prevents unwanted sudden noise. It also solves more of the real problem — having your computer’s power used without permission.

    There are numerous other new features in the new release. Read more about them in the Firefox release notes.

    How to get the update

    Firefox 69 is available in the stable Fedora 30 and pre-release Fedora 31 repositories, as well as Rawhide. The update is provided by Fedora’s maintainers of the Firefox package. The maintainers also ensured an update to Mozilla’s Network Security Services (the nss package). We appreciate the hard work of the Mozilla project and Firefox community in providing this new release.

    If you’re using Fedora 30 or later, use the Software tool on Fedora Workstation, or run the following command on any Fedora system:

    $ sudo dnf --refresh upgrade firefox

    If you’re on Fedora 29, help test the update for that release so it can become stable and easily available for all users.
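    To help test before the update reaches the stable repository, one possible approach is to pull it from the updates-testing repository for a single transaction (this sketch assumes the Firefox 69 update has already been pushed to updates-testing for your release):

    ```shell
    # Enable updates-testing only for this one transaction,
    # refresh metadata, and upgrade just the firefox package
    sudo dnf --refresh --enablerepo=updates-testing upgrade firefox
    ```

    After testing, you can leave feedback (karma) on the update in Bodhi so it can be pushed to stable sooner.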

    Firefox may prompt you to upgrade your profile to use the new settings. To take advantage of new features, you should do this.

    Episode 160 - Disclosing security issues is insanely complicated: Part 2

    Posted by Open Source Security Podcast on September 09, 2019 12:06 AM
    Josh and Kurt talk about disclosing security flaws in open source. This is part two of a discussion around how to disclose security issues. This episode focuses on some expectations and behaviors for open source projects as well as researchers trying to disclose a problem to a project.


    Show Notes