Fedora People

A tale of missing touches

Posted by Peter Hutterer on February 20, 2020 07:39 AM

libinput 1.15.1 had a new feature: it matched the expected touch count with the one actually seen as opposed to the one advertised by the kernel. That is good news for ALPS devices whose kernel driver lies about their capabilities because these days who doesn't. However, in some cases that feature had the side-effect of reducing the touch count to zero - meaning libinput would ignore any touch. This caused a slight UX degradation.

After a bit of debugging and/or cursing, the issue was identified as a libevdev issue, specifically - the way libevdev replays events after a SYN_DROPPED event. And after several days of fixing things, adding stuff to the CI and adding meson support for libevdev so the CI can actually run a few useful things, it's time for a blog post to brain-dump and possibly entertain the occasional reader such as you are. Congratulations, I guess.

The Linux kernel's evdev protocol is a serial protocol where all events have a type, a code and a value. Events are grouped by EV_SYN.SYN_REPORT events, so the event type is EV_SYN (0), the event code is SYN_REPORT (also 0). The value is usually (but not always), you guessed it, zero. A SYN_REPORT signals that the current event sequence (also called a "frame") is to be interpreted as one hardware event [0]. In the simplest case, two hardware events from a mouse could look like this:
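For illustration, such a sequence could look like this in evemu-style notation (type, code, value in hex, with the decoded names in comments; timestamps and values made up for this example):

```
E: 0.000001 0002 0000 0001   # EV_REL / REL_X          1
E: 0.000001 0000 0000 0000   # ------- SYN_REPORT (0) -------
E: 0.000002 0002 0000 0001   # EV_REL / REL_X          1
E: 0.000002 0002 0001 0001   # EV_REL / REL_Y          1
E: 0.000002 0000 0000 0000   # ------- SYN_REPORT (0) -------
```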

While we have five evdev events here, those represent one hardware event with an x movement of 1 and a second hardware event with a diagonal movement by 1/1. Glorious, we all understand evdev now (if not, read this and immediately afterwards this, although that second post will be rather reinforced by this post).

Life as software developer would be quite trivial but our universe hates us and we need an extra event code called SYN_DROPPED. This event is used by the kernel when events from the device come in faster than you're reading them. This shouldn't happen given that most input devices scan out at the casual rate of every 7ms or slower and we're not exactly running on carrier pigeons here. But your compositor has been a busy bee rendering all these browser windows containing kitten videos and thus completely neglected to check whether you've moved the finger on the touchpad recently. So the kernel sort-of clears the current event buffer and positions a shiny steaming SYN_DROPPED in there to notify the compositor of its wrongdoing. [1]

Now, we could assume that every evdev client (libinput, every Xorg driver, ...) knows how to handle SYN_DROPPED events correctly but we're self-aware enough that we don't. So SYN_DROPPED handling is wrapped via libevdev, in a way that lets the clients use almost exactly the same processing paths they use for normal events. libevdev gives you a notification that a SYN_DROPPED occurred, then you fetch the events one-by-one until libevdev tells you you have the complete current state of the device, and back to kittens you go. In pseudo-code, your input stack's event loop works like this:

while (user_wants_kittens):
    event = libevdev_get_event()

    if event is a SYN_DROPPED:
        while (libevdev_is_still_synchronizing):
            event = libevdev_get_event()
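With the real libevdev C API, that loop looks roughly like the following sketch (error handling trimmed; `dev` is an already-initialized struct libevdev pointer and `process_event()` is a stand-in for your own handler):

```c
struct input_event ev;
int rc;

rc = libevdev_next_event(dev, LIBEVDEV_READ_FLAG_NORMAL, &ev);
if (rc == LIBEVDEV_READ_STATUS_SYNC) {
        /* ev is the SYN_DROPPED event; now replay the sync events
         * until libevdev says we're caught up (-EAGAIN) */
        while (rc == LIBEVDEV_READ_STATUS_SYNC) {
                rc = libevdev_next_event(dev, LIBEVDEV_READ_FLAG_SYNC, &ev);
                if (rc == LIBEVDEV_READ_STATUS_SYNC)
                        process_event(&ev);  /* same path as normal events */
        }
} else if (rc == LIBEVDEV_READ_STATUS_SUCCESS) {
        process_event(&ev);
}
```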
Now, this works great for keys where you get the required events to release or press new keys. This works great for relative axes because meh, who cares [2]. This works great for absolute axes because you just get the current state of the device and done. This works great for touch because, no wait, that bit is awful.

You see, the multi-touch protocol is ... special. It uses the absolute axes, but it also multiplexes over those axes via the slot protocol. A normal two-touch event looks like this:
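For example (again in evemu-style notation with made-up values; ABS_MT_SLOT selects the slot that the following ABS_MT_* events apply to):

```
E: 0.000001 0003 002f 0000   # EV_ABS / ABS_MT_SLOT            0
E: 0.000001 0003 0035 4000   # EV_ABS / ABS_MT_POSITION_X      4000
E: 0.000001 0003 002f 0001   # EV_ABS / ABS_MT_SLOT            1
E: 0.000001 0003 0035 3500   # EV_ABS / ABS_MT_POSITION_X      3500
E: 0.000001 0003 0036 3000   # EV_ABS / ABS_MT_POSITION_Y      3000
E: 0.000001 0003 0000 4000   # EV_ABS / ABS_X                  4000
E: 0.000001 0000 0000 0000   # ------- SYN_REPORT (0) -------
```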

The first two evdev events select and update slot 0 (first touch [3]), the next three select and update slot 1 (second touch [3]). Both touches update their X position but the second touch also updates its Y position. But for single-touch emulation we also get the normal absolute axis event [3]. Which is equivalent to the first touch [3] and can be ignored if you're handling the MT axes [3] (I'm getting a lot of mileage out of that footnote). And because things aren't confusing enough: events within an evdev frame are position-independent except the ABS_MT axes which need to be processed in sequence. So the ABS_X event could be anywhere within that frame, but the ABS_MT axes need to be grouped by slot.
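That slot bookkeeping can be sketched in a few lines of Python (a toy demultiplexer, not libinput's actual implementation; events here are (type, code, value) tuples of a single frame):

```python
EV_ABS = 0x03
ABS_MT_SLOT = 0x2f
ABS_MT_POSITION_X = 0x35
ABS_MT_POSITION_Y = 0x36

def demux_frame(events):
    """Group the ABS_MT_* updates of one event frame by slot.

    The slot is stateful: every ABS_MT_* event applies to the most
    recently seen ABS_MT_SLOT value (slot 0 until told otherwise).
    """
    slots = {}
    current = 0
    for etype, code, value in events:
        if etype != EV_ABS:
            continue
        if code == ABS_MT_SLOT:
            current = value
        elif code in (ABS_MT_POSITION_X, ABS_MT_POSITION_Y):
            slots.setdefault(current, {})[code] = value
    return slots

frame = [
    (EV_ABS, ABS_MT_SLOT, 0), (EV_ABS, ABS_MT_POSITION_X, 4000),
    (EV_ABS, ABS_MT_SLOT, 1), (EV_ABS, ABS_MT_POSITION_X, 3500),
    (EV_ABS, ABS_MT_POSITION_Y, 3000),
]
print(demux_frame(frame))  # {0: {53: 4000}, 1: {53: 3500, 54: 3000}}
```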

About that single-touch emulation... We also have a single-touch multi-touch protocol via EV_KEY. For devices that can only track N fingers but can detect N+M fingers, we have a set of BTN_TOOL defines. Two fingers down sets BTN_TOOL_DOUBLETAP, three fingers down sets BTN_TOOL_TRIPLETAP, etc. Those are just a bitfield though, so no position data is available. And it tops out at BTN_TOOL_QUINTTAP but then again, that's a good maximum backed by a lot of statistical samples from users' hands. On many devices, we have to combine that single-touch MT protocol with the real MT protocol. Synaptics touchpads on PS/2 only support 2 finger positions but detect up to 5 touches otherwise [4]. And remember the ALPS devices? They say they have 4 slots but may only send data for two or three, so we have to detect this at runtime and switch to the BTN_TOOL bits for some touches.
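As a sketch of how those bits translate to a finger count (the BTN_TOOL_* values are from linux/input-event-codes.h; the helper itself is hypothetical, not libinput API):

```python
BTN_TOOL_FINGER = 0x145
BTN_TOOL_QUINTTAP = 0x148
BTN_TOOL_DOUBLETAP = 0x14d
BTN_TOOL_TRIPLETAP = 0x14e
BTN_TOOL_QUADTAP = 0x14f

FINGER_COUNT = {
    BTN_TOOL_FINGER: 1,
    BTN_TOOL_DOUBLETAP: 2,
    BTN_TOOL_TRIPLETAP: 3,
    BTN_TOOL_QUADTAP: 4,
    BTN_TOOL_QUINTTAP: 5,
}

def finger_count(pressed):
    """Finger count advertised via the BTN_TOOL_* bits.

    The kernel sets at most one of these at a time, so return the
    matching count, or 0 if no tool bit is down.
    """
    for code, count in FINGER_COUNT.items():
        if code in pressed:
            return count
    return 0

print(finger_count({BTN_TOOL_TRIPLETAP}))  # 3
```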

So anyway, now that we unfortunately all understand the MT protocol(s), let's look at that libevdev bug. libevdev checks the slot states after SYN_DROPPED to detect whether any touch has stopped or started during SYN_DROPPED. It also detects whether a touch has changed, i.e. the user lifted the finger(s) and put the finger(s) down again while SYN_DROPPED was happening. For those touches it generates the events to stop the original touch, then events to start the new touch. This needs to be done over two event frames, i.e. with a SYN_REPORT in between [5]. But the implementation ended up splitting those changes - any touch that changed was terminated in the first event frame, any touch that outright stopped was terminated in the second event frame. That in itself wasn't the problem yet, the problem was that libevdev didn't emulate the single-touch multi-touch protocol with those emulated frames. So we ended up with event frames where slots would terminate but the single-touch protocol didn't update until a frame later.

This doesn't matter for most users. Both protocols were still correct enough in their own bubble; it was only once you started mixing protocols that things got wonky. libinput does this because it has to, too many devices out there only track two fingers. So if you want three-finger tapping and pinch gestures, you need to handle both protocols. Despite this we didn't notice until we added the quirk for ALPS devices. Because now libinput sometimes noticed that after a SYN_DROPPED there were no fingers on the touchpad (because they all stopped/changed) but the BTN_TOOL bits were still on, so clearly we have a touchpad that cannot track all the fingers it detects - in this case zero. [6]

So to recap: libinput's auto-adjustment of the touch count for buggy touchpad devices failed thanks to libevdev's buggy workaround of the device sync. The device sync we need because we can't rely on userspace handling touches correctly across SYN_DROPPED. An event which only gets triggered because the compositor is too buggy to read input events in time. I don't know how to describe it exactly, but what I can see all the way down are definitely not turtles.

And the sad thing about it: if we didn't try to correct for the firmware and accepted that gestures are just broken on ALPS devices because the kernel driver is lying to us, none of the above would have mattered. Likewise, the old xorg synaptics driver won't be affected by this because it doesn't handle multitouch properly anyway, so it doesn't need to care about these discrepancies. Or, in other words and much like real life: the better you try to be, the worse it all gets.

And as the take-home lesson: do upgrade to libinput 1.15.2 and do upgrade to libevdev 1.9.0 when it's out. Your kittens won't care but at least that way it won't make me feel like I've done all this work in vain.

[0] Unless the SYN_REPORT value is nonzero but let's not confuse everyone more than necessary
[1] A SYN_DROPPED is per userspace client, so a debugging tool reading from the same event node may not see that event unless it too is busy with feline renderings.
[2] yes, you'll get pointer jumps because event data is missing but since you've been staring at those bloody cats anyway, you probably didn't even notice
[3] usually, but not always
[4] on those devices, identifying a 3-finger pinch gesture only works if you put the fingers down in the correct order
[5] historical reasons: in theory a touch could change directly but most userspace can't handle it and it's too much effort to add now
[6] libinput 1.15.2 leaves you with 1 finger in that case and that's good enough until libevdev is released

Fedora Council November 2019 meeting: more miscellaneous stuff

Posted by Fedora Community Blog on February 20, 2020 06:50 AM

This is part three of a four-part series recapping the Fedora Council’s face-to-face meeting in November 2019.

In addition to the big topic of the Fedora Project Vision, we used the opportunity to cover some other Fedora Council business. Because it’s a lot, we’re breaking the reporting on this into two posts, kind of arbitrarily — here’s the second of those.

Net Promoter Score

When companies want to know how their customers feel about them, they often turn to net promoter score (NPS) surveys. At their core, they ask a simple question: would you recommend our goods and/or services? There are a lot of reasons that someone might answer yes or no, so you can’t read too much into the results. But you can get a gauge of the sentiment that you can track over time.

For years, we’ve wanted to better understand the Fedora user community. It’s difficult to know much for a few reasons: we want to respect the privacy of our users and comply with laws like GDPR, we have no way to contact everyone (or even a representative sample of “everyone”), we’re not even sure what we’d ask. And of course, long surveys have abysmal response rates. Most people give up before they get to the end.

I took an internal Red Hat course last fall, and there I happened to meet someone who works on NPS surveys for Red Hat products. She told me that her team can help us develop an NPS survey to start measuring sentiment. The Council discussed this and we agreed there are a lot of opportunities. To start with, we want to focus on contributors and potential contributors. I’ll be working with the Red Hat team to develop a short survey that we can use to regularly gauge contributor sentiment. No matter what we do, there will be legitimate statistical criticisms, but the Council believes that having an imperfect measure will be better than no measure. If nothing else, the change over time will provide us a meaningful signal.

Team directory

Teams within Fedora generally self-organize, and we love that! The downside is that it’s not always clear to the Council or the community when a team has turned into a zombie. A year ago, we’d hoped that Taiga would serve as a team directory where teams would be sorted by recent activity — that’s what’s on teams.fedoraproject.org. But while teams have found Taiga to be useful as a project planning tool, it’s not useful for this purpose.

But we still want to know which teams are active. Absent an automated way to do this, the Council decided that the Mindshare Committee and FESCo will maintain a list of the active teams that exist under their umbrella. They’ll be responsible for keeping it up to date. We’re still working out how exactly to present this to the community, but we want it to be a useful directory that shows what teams are active and how to communicate with or join them. Stay tuned for more on this in the spring.

Event focus

The Mindshare Committee (and the Fedora community in general) has done a good job of giving Fedora a presence at conferences and other events. But we haven’t had a defined strategy for these events. Whoever works the booth is left to their own devices, which isn’t fair to them. The Council agreed we should provide a general strategy for our presence at events.

We decided to focus on recruiting contributors. This isn’t to say that our booth at events can’t talk about new features in the latest release, hand out stickers, etc. That is still very important to our community. But the focus — and how Mindshare evaluates the success of events — will be recruiting, not just general fan service.

The Council tasked Ben Cotton to put together a team and come up with a plan to present at our January face-to-face meeting (look for more on that soon). The Join SIG has developed an excellent workflow for onboarding new contributors, and we want this plan to complement it by giving the Join SIG more new contributors to onboard.

On a related note, the Council was asked about a local event that was being held in English instead of the local native language. The Council has no objections to events being held in the local language or in English, and separate events can be held if there are multiple audiences. We encourage event organizers to specify the language or languages for their event.

Editions, Labs, Spins, and redone getfedora

We introduced Fedora.next back in 2013. From that came our first three Editions: Workstation, Server, and Cloud. Since then, the Cloud Edition has been retired, and new Editions (Fedora CoreOS, Fedora IoT, and Fedora Silverblue) are rising to meet new needs. The Edition concept has been a success, but we’re open to reconsidering it for the future.

In addition, we don’t think the distinction between Spins and Labs is clear to our contributor and user communities. We’d like to combine those into a single website as we work to move all Fedora websites to the new framework that Get Fedora uses. As part of that, should we combine all variants of Fedora onto a single Get Fedora website again? We split them out to reduce confusion for newcomers, and if we merge them we’ll need to be careful to prevent decision paralysis.

The Council hasn’t come to any decisions on this. We’re always open to community feedback on how we can adapt the project to changes in the world around us.

The post Fedora Council November 2019 meeting: more miscellaneous stuff appeared first on Fedora Community Blog.

What usage restrictions can we place in a free software license?

Posted by Matthew Garrett on February 20, 2020 12:45 AM
Growing awareness of the wider social and political impact of software development has led to efforts to write licenses that prevent software being used to engage in acts that are seen as socially harmful, with the Hippocratic License being perhaps the most discussed example (although the JSON license's requirement that the software be used for good, not evil, is arguably an earlier version of the theme). The problem with these licenses is that they're pretty much universally considered to fall outside the definition of free software or open source licenses due to their restrictions on use, and there's a whole bunch of people who have very strong feelings that this is a very important thing. There's also the more fundamental underlying point that it's hard to write a license like this where everyone agrees on whether a specific thing is bad or not (eg, while many people working on a project may feel that it's reasonable to prohibit the software being used to support drone strikes, others may feel that the project shouldn't have a position on the use of the software to support drone strikes and some may even feel that some people should be the victims of drone strikes). This is, it turns out, all quite complicated.

But there is something that many (but not all) people in the free software community agree on - certain restrictions are legitimate if they ultimately provide more freedom. Traditionally this was limited to restrictions on distribution (eg, the GPL requires that your recipient be able to obtain corresponding source code, and for GPLv3 must also be able to obtain the necessary signing keys to be able to replace it in covered devices), but more recently there's been some restrictions that don't require distribution. The best known is probably the clause in the Affero GPL (or AGPL) that requires that users interacting with covered code over a network be able to download the source code, but the Cryptographic Autonomy License (recently approved as an Open Source license) goes further and requires that users be able to obtain their data in order to self-host an equivalent instance.

We can construct examples of where these prevent certain fields of endeavour, but the tradeoff has been deemed worth it - the benefits to user freedom that these licenses provide is greater than the corresponding cost to what you can do. How far can that tradeoff be pushed? So, here's a thought experiment. What if we write a license that's something like the following:

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. All permissions granted by this license must be passed on to all recipients of modified or unmodified versions of this work
2. This work may not be used in any way that impairs any individual's ability to exercise the permissions granted by this license, whether or not they have received a copy of the covered work

This feels like the logical extreme of the argument. Any way you could use the covered work that would restrict someone else's ability to do the same is prohibited. This means that, for example, you couldn't use the software to implement a DRM mechanism that the user couldn't replace (along the lines of GPLv3's anti-Tivoisation clause), but it would also mean that you couldn't use the software to kill someone with a drone (doing so would impair their ability to make use of the software). The net effect is along the lines of the Hippocratic license, but it's framed in a way that is focused on user freedom.

To be clear, I don't think this is a good license - it has a bunch of unfortunate consequences like it being impossible to use covered code in self-defence if doing so would impair your attacker's ability to use the software. I'm not advocating this as a solution to anything. But I am interested in seeing whether the perception of the argument changes when we refocus it on user freedom as opposed to an independent ethical goal.



Rich Felker on Twitter had an interesting thought - if clause 2 above is replaced with:

2. Your rights under this license terminate if you impair any individual's ability to exercise the permissions granted by this license, even if the covered work is not used to do so

how does that change things? My gut feeling is that covering actions that are unrelated to the use of the software might be a reach too far, but it gets away from the idea that it's your use of the software that triggers the clause.


Contribute to the GNOME 3.36 Test Day

Posted by alciregi-it on February 19, 2020 10:18 PM
On February 20th a new Test Day for Fedora will take place. This time we are hunting for bugs in the version of GNOME that we will find in Fedora 32. Surely quite a few bugs :–) More information is on the wiki page dedicated to the event.

Fedora 31 : The Fyne UI toolkit for Go programming language.

Posted by mythcat on February 19, 2020 03:13 PM
Today I will show you how to use a UI toolkit with the Go programming language. The development team presents this toolkit on its official GitHub webpage.
Fyne is an easy to use UI toolkit and app API written in Go. It is designed to build applications that run on desktop and mobile devices with a single codebase...
[mythcat@desk ~]$ sudo dnf install golang
[sudo] password for mythcat:
golang-1.13.6-1.fc31.x86_64 golang-bin-1.13.6-1.fc31.x86_64
golang-src-1.13.6-1.fc31.noarch mercurial-4.9-2.fc31.x86_64

Then you need to install these packages with the DNF tool:
[root@desk mythcat]# dnf install libX11-devel libXcursor-devel libXrandr-devel libXinerama-devel 
mesa-libGL-devel libXi-devel
Last metadata expiration check: 0:04:28 ago on Sun 16 Feb 2020 12:25:04 PM EET.
Package libX11-devel-1.6.9-2.fc31.x86_64 is already installed.
Package mesa-libGL-devel-19.2.8-1.fc31.x86_64 is already installed.
Package libXi-devel-1.7.10-2.fc31.x86_64 is already installed.
Dependencies resolved.

libXcursor-devel-1.1.15-6.fc31.x86_64 libXinerama-devel-1.1.4-4.fc31.x86_64
libXrandr-devel-1.5.2-2.fc31.x86_64 libXrender-devel-0.9.10-10.fc31.x86_64

Let's install the fyne toolkit and the demo application:
[mythcat@desk ~]$ go get fyne.io/fyne
[mythcat@desk ~]$ go get fyne.io/fyne/cmd/fyne_demo/
I ran the demo with this command and it works very well:
[mythcat@desk ~]$ go run /home/mythcat/go/src/fyne.io/fyne/cmd/fyne_demo/main.go  
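For reference, a minimal Fyne program of your own looks something like this sketch (fyne v1 import paths as fetched above; it needs the fyne packages installed and a graphical session to run):

```go
package main

import (
	"fyne.io/fyne/app"
	"fyne.io/fyne/widget"
)

func main() {
	a := app.New()
	w := a.NewWindow("Hello")
	w.SetContent(widget.NewVBox(
		widget.NewLabel("Hello Fyne!"),
		widget.NewButton("Quit", func() { a.Quit() }),
	))
	w.ShowAndRun()
}
```

Save it as, say, hello.go (hypothetical filename) and start it with go run hello.go.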

New releases

Posted by ABRT team on February 19, 2020 11:00 AM

Just prior to branching of Fedora 32, we released new versions of abrt, gnome‑abrt, abrt‑java‑connector, libreport, satyr and retrace‑server.

The current latest versions are:

  • abrt-2.14.0
  • gnome-abrt-1.3.1
  • abrt-java-connector-1.1.4
  • libreport-2.12.0
  • satyr-0.30
  • retrace-server-1.21.0

The new releases come with the following changes, among others:

abrt:

  • CLI fixes
  • Avoid warnings about abrt-ccpp.service not existing during installation
  • dbus: Warn the user when GetProblems() is called with a large (>100) number of problems
  • Bring back journal catalog file for C/C++ crashes
  • Add short stack trace to the C/C++ crash journal catalog file
  • Fix abrt-dump-oops finishing with the wrong exit code

gnome-abrt:

  • Change build system to Meson
  • Do not require X11 to be available
  • GUI tweaks

abrt-java-connector:

  • Fix build failure with GCC 10

libreport:

  • Switch to Nettle for SHA digest computation
  • Fix runtime warning in reporter-rhtsupport

satyr:

  • Switch to Nettle for SHA digest computation

retrace-server:

  • Add the possibility to run retraces in a podman container
  • Rework the pre- and post-retrace hook mechanism

More detailed changelogs are available for abrt and libreport.

Fedora at the Czech National Library of Technology

Posted by Fedora Magazine on February 19, 2020 09:00 AM

Where do you turn when you have a fleet of public workstations to manage? If you’re the Czech National Library of Technology (NTK), you turn to Fedora. Located in Prague, the NTK is the Czech Republic’s largest science and technology library. As part of its public service mission, the NTK provides 150 workstations for public use.

In 2018, the NTK moved these workstations from Microsoft Windows to Fedora. In the press release announcing this change, Director Martin Svoboda said switching to Fedora will “reduce operating system support costs by about two-thirds.” The choice to use Fedora was easy, according to NTK Linux Engineer Miroslav Brabenec. “Our entire Linux infrastructure runs on RHEL or CentOS. So for desktop systems, Fedora was the obvious choice,” he told Fedora Magazine.

User reception

Changing an operating system is always a little bit risky—it requires user training and outreach. Brabenec said that non-IT staff asked for training on the new system. Once they learned that the same (or compatible) software was available, they were fine.

The Library’s customers were on board right away. The Windows environment was based on thin client terminals, which were slow for intensive tasks like video playback and handling large office suite files. The only end-user education that the NTK needed to create was a basic usage guide and a desktop wallpaper that pointed to important UI elements.

<figure class="wp-block-image size-large"><figcaption>User guidance desktop wallpaper from the National Technology Library.</figcaption></figure>

Although Fedora provides development tools used by the Faculty of Information Technology at the Czech Technical University—and many of the NTK’s workstation users are CTU students—most of the application usage is what you might expect of a general-purpose workstation. Firefox dominates the application usage, followed by the Evince PDF viewer and the LibreOffice suite.


NTK first deployed the workstations with Fedora 28. They decided to skip Fedora 29 and upgraded to Fedora 30 in early June 2019. The process was simple, according to Brabenec. “We prepared configuration, put it into Ansible. Via AWX I restarted all systems to netboot, image with kickstart, after first boot called provisioning callback on AWX, everything automatically set up via Ansible.”

Initially, they had difficulties applying updates. Now they have a process for installing security updates daily. Each system is rebooted approximately every two weeks to make sure all of the updates get applied.

Although he isn’t aware of any concrete plans for the future, Brabenec expects the NTK to continue using Fedora for public workstations. “Everyone is happy with it and I think that no one has a good reason to change it.”

Which version of Python are you running?

Posted by Kushal Das on February 19, 2020 04:44 AM

The title of this post is misleading.

I actually want to ask which version of Python3 you are running. Yes, it is a question I have to ask myself based on the projects I am working on. I am sure there are many more people in the world who are in a similar situation.

Just to see what all versions of Python(3) I am running in different places:

  • Python 3.7.3
  • Python 3.5.2
  • Python 3.6.9
  • Python 3.7.4
  • Python 2.7.5
  • Python 3.7.6
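If you're ever unsure on a given box, a quick check from within Python itself (a trivial sketch):

```python
import sys

# Human-readable version string of the running interpreter, e.g. "3.7.6"
print(sys.version.split()[0])

# Structured form, handy for version comparisons in code
print(sys.version_info[:3] >= (3, 6))
```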

What about you?

Linux as a L2 VLAN switch

Posted by Lukas "lzap" Zapletal on February 19, 2020 12:00 AM

I needed to join two segments of a virtual LAN together via a Linux bridge. That’s an easy task for Linux, however the problem was that one segment was a VLAN (with an id of 13) and the other segment was native (VLAN 0). I struggled to figure out the proper commands with NetworkManager until Beniamino Galvani of Red Hat put me on the right track.

Assuming that enp2s0 uses tagged frames and enp1s0 doesn’t, enabling VLAN filtering can do the trick:

nmcli connection add type bridge ifname bridge0 \
  bridge.vlan-filtering on bridge.vlan-default-pvid 13 \
  ipv4.method disabled ipv6.method ignore

nmcli connection add type ethernet ifname enp1s0 master bridge0
nmcli connection add type ethernet ifname enp2s0 master bridge0 bridge-port.vlans "13"
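To double-check the result afterwards, the `bridge` utility from iproute2 can dump each port's VLAN table (a sketch; the exact output depends on your iproute2 version and port configuration):

```
$ bridge vlan show
port       vlan ids
bridge0    13 PVID Egress Untagged
enp1s0     13 PVID Egress Untagged
enp2s0     13
```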

This will only work on RHEL 8.1 or newer as it relies on the newest NetworkManager VLAN filtering switches. Kudos to everyone who made this possible and to Beniamino for helping me out!

All systems go

Posted by Fedora Infrastructure Status on February 18, 2020 03:22 PM
Service 'COPR Build System' now has status: good: Everything seems to be working.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on February 18, 2020 03:21 PM
Service 'COPR Build System' now has status: scheduled: testing status

Stories from the amazing world of release-monitoring.org #9

Posted by Fedora Community Blog on February 18, 2020 07:39 AM

I woke up to a cold morning in my tower. The sun shone brightly in the sky, but the stone of the tower was cold, as it takes some time to warm up. Everything was already prepared for today’s journey. I sat at my table and started going through some reports from workers. I still had some time till the traveler arrived. So I started reading the reports …


Anitya has a new home

Because of the small interest outside of the Fedora universe, and because Anitya was the only project outside the fedora-infra consortium (GitHub organization) handled by the Conclave of Mages (CPE Team), the Conclave decided to move it from the release-monitoring consortium to the fedora-infra consortium. This move is done and Anitya now lives here.

Anitya 0.18.0 released

Anitya 0.18.0 was released on 13th January and was tested for a week in the purgatory (staging). This version is now live and available for entities (users), who can enjoy new features and fixes:

  • We will no longer tolerate intruders without proper documentation (Automatically delete projects, that are incorrectly setup and reach error threshold)
  • We don’t need whole projects history from GitHub realm anymore (Use and store cursor to latest commit for GitHub projects)
  • Nothing is stuck at our door anymore when retrieving messages from other realms (Check service has timeout option now to prevent infinite loop)
  • Abstract Positive Intuition (API) now supports various kinds of thoughts (Filters in APIv2 are now case insensitive)
  • When entities try to enter Anitya, they are no longer redirected back to the door (Current page is no longer forgotten on login)

This is only a small selection of the changes; for the full list, look at the Book of Releases.


Dist-git realm integration

We are working on better integration between the dist-git realm and release-monitoring.org. Recently we replaced the Great Oraculum (scm-requests pagure repository) with direct communication with the-new-hotness. This is much more convenient for entities and it allows us to get rid of the Great Oraculum once we no longer need its services.

The next thing, currently a work in progress, is the possibility to interact with the dist-git realm directly. This will be done with the help of Packit. With it we will be able to propose changes directly to dist-git, skipping the Bugzilla realm. Bugzilla will still be used as a realm that notifies the ever vigilant guards (package maintainers) about anything new.

Internal struggle with Anitya

Some time ago I was contacted by another mage from our conclave because something was wrong with the realm of release-monitoring.org. Messengers were stuck in front of the-new-hotness and nobody opened the door for them. Only one was taken inside, but immediately thrown back outside, and because he wanted to keep his place in line, he went back to stand in front of the doors.

I started asking around about what was happening (checking the logs of the-new-hotness) and it seemed to me that the messenger who was always at the front of the queue had something wrong with his message. I followed the clues in the reports and after a while I found out that this message required the-new-hotness to contact Anitya and trigger a new check for any news, which failed; the-new-hotness threw out the messenger and spent some time recovering from the issue (the openshift pod was restarted). But because the messenger was always first and others were not allowed in, the queue was getting bigger and bigger.

I teleported myself to Anitya island and looked at the specific project that was being checked for news. And I found out that the project was using a new feature introduced in 0.17.0. This feature allows entities (users) to specify what they want to check in the GitHub universe: any news (tags) or just certified news (releases). But not every project in the GitHub universe formats its certified news nicely (no version in the name of the release) and because Anitya wasn’t able to understand this nonsense it just threw an error back at the-new-hotness.

I fixed this quickly by changing the configuration of the project. After the change the messenger went through without any issue and the queue was processed one by one. This was a temporary fix, because the same thing could happen with any other project that checks certified news. The solid fix is part of Anitya 0.18.0, so this shouldn’t happen anymore.

Post scriptum

This is all for now from the world of release-monitoring.org. Do you like this world and want to join our conclave of mages? Seek me (mkonecny) in the magical yellow pages (IRC freenode #fedora-apps) and ask how you can help. Or visit the Bugcronomicon (GitHub issues on Anitya or the-new-hotness) directly and pick something to work on.

The post Stories from the amazing world of release-monitoring.org #9 appeared first on Fedora Community Blog.

Buying A Used Thinkpad T470s

Posted by Devan Goodwin on February 17, 2020 07:00 PM
My X1 Carbon (work) and new-to-me T470s (personal). I’ve generally been a desktop guy for most of my life, but since unboxing an X1 Carbon (pictured left) from work last year, I’ve been falling in love with Thinkpads. I’d had an X230 back around 2011, which was a great piece of hardware and ran Linux like a dream, but it was relatively thick and heavy with a very dim display.

Fedora 32 Gnome 3.36 Test Day 2020-02-20

Posted by Fedora Community Blog on February 17, 2020 03:25 PM
Test Day : Fedora 32 Gnome 3.36

Thursday, 2020-02-20 is the Fedora 32 Gnome Test Day! As part of the Gnome 3.36 changes in Fedora 32, we need your help to test that everything runs smoothly!

Why Gnome Test Day?

We try to make sure that all the Gnome features are performing as they should. The Test Day is a chance to see whether everything is working well enough and to catch any remaining issues. It’s also pretty easy to join in: all you’ll need is Fedora 32 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 32 Gnome 3.36 Test Day 2020-02-20 appeared first on Fedora Community Blog.

How to get MongoDB Server on Fedora

Posted by Fedora Magazine on February 17, 2020 02:09 PM

Mongo (from “humongous”) is a high-performance, open source, schema-free, document-oriented database and one of the most popular so-called NoSQL databases. It uses JSON as its document format, and it is designed to be scalable and replicable across multiple server nodes.

The story of the license change

It’s been more than a year since upstream MongoDB decided to change the license of the server code. The previous license was the GNU Affero General Public License v3 (AGPLv3). Upstream wrote a new license designed to make companies that run MongoDB as a service contribute back to the community. The new license is called the Server Side Public License (SSPLv1); more about this step and its rationale can be found in the MongoDB SSPL FAQ.

Fedora has always included only free (as in “freedom”) software. When the SSPL was released, Fedora determined that it is not a free software license in this sense. All versions of MongoDB released before the license change (October 2018) could potentially have been kept in Fedora, but never updating the packages would eventually create security issues. Hence the Fedora community decided to remove the MongoDB server entirely, starting with Fedora 30.

What options are left to developers?

Well, alternatives exist; for example, PostgreSQL also supports JSON in recent versions, and it can be used in cases where MongoDB no longer can. With the JSONB type, indexing in PostgreSQL works very well, with performance comparable to MongoDB’s, and without any compromises on ACID.

The technical reasons that a developer may have chosen MongoDB did not change with the license, so many still want to use it. What is important to realize is that the SSPL license change applied only to the MongoDB server. There are other projects that MongoDB upstream develops (the MongoDB tools, the C and C++ client libraries, and connectors for various dynamic languages) that are used on the client side, in applications that communicate with the server over the network. Since those packages keep a free license (mostly the Apache License), they remain in the Fedora repositories, so users can use them for application development.

The only real change is the server package itself, which was removed entirely from the Fedora repos. Let’s see what a Fedora user can do to get the non-free packages.

How to install MongoDB server from the upstream

When Fedora users want to install a MongoDB server, they need to go to MongoDB upstream directly. However, upstream does not ship RPM packages for Fedora itself. Instead, the MongoDB server is available either as a source tarball that users need to compile themselves (which requires some developer knowledge), or as compatible packages built for other distributions. Of the compatible options, the best choice at this point is the RHEL-8 RPMs. The following steps describe how to install them and how to start the daemon.

1. Create a repository with upstream RPMs (RHEL-8 builds)

$ sudo tee /etc/yum.repos.d/mongodb.repo <<EOF
[mongodb-org-4.2]
name=MongoDB Upstream Repository
baseurl=https://repo.mongodb.org/yum/redhat/8/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc
EOF

2. Install the meta-package, which pulls in the server and tools packages

$ sudo dnf install mongodb-org
  mongodb-org-4.2.3-1.el8.x86_64           mongodb-org-mongos-4.2.3-1.el8.x86_64  
  mongodb-org-server-4.2.3-1.el8.x86_64    mongodb-org-shell-4.2.3-1.el8.x86_64


3. Start the MongoDB daemon

$ sudo systemctl start mongod
$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
   Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-02-08 12:33:45 EST; 2s ago
     Docs: https://docs.mongodb.org/manual
  Process: 15768 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15769 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15770 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15771 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 15773 (mongod)
   Memory: 70.4M
      CPU: 611ms
   CGroup: /system.slice/mongod.service
           └─15773 /usr/bin/mongod -f /etc/mongod.conf

4. Verify that the server runs by connecting to it from the mongo shell

$ mongo
MongoDB shell version v4.2.3
connecting to: mongodb://;gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("20b6e61f-c7cc-4e9b-a25e-5e306d60482f") }
MongoDB server version: 4.2.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see

> _

That’s all. As you can see, the RHEL-8 packages are pretty compatible, and that should remain the case for as long as the Fedora packages stay compatible with what’s in RHEL-8. Just be careful to comply with the SSPLv1 license in your use.

Catch-up with Kiwi TCMS in Sofia, Singapore, Kiev & Moscow

Posted by Kiwi TCMS on February 17, 2020 01:37 PM

Hello testers, you can catch up with your favorite open source test case management system during the month of March. Here's a list of events we are going to:

  • March 14 - QA: Challenge Accepted, Sofia where we will have an info booth. You will get a 15% community discount if you email tickets@qachallengeaccepted.com and mention this blog post
  • March 19-21 - OpenTechSummit, Singapore - aka FOSS ASIA summit:
    • Kiwi TCMS exhibition booth - 3 days
    • How to write pylint plugins for fun & profit workshop on March 19th
    • Testing [for] security [in] open source presentation on March 21st

To claim a free Community Standard Ticket use code atodorov. First 5 tickets only! For a 25% discount use code fossasia-speaker.

  • March 27-28 - TestingStage, Kiev where Alex will present his Static analysis as a test tool session. You can also claim 15% ticket discount by using promo-code AlexanderTodorov
  • April 1-2 - TestCon Moscow where Alex will present the Static analysis as a test tool again

The original plan was to visit OpenTest Con, Beijing between March 30-31, but it has now been cancelled! The new plan is to stay 2-3 more days in Kiev and join some meetups if available.

Feel free to ping us at @KiwiTCMS, or look for the kiwi bird logo and come say hi. Happy testing!

GNU Social Contract version 1.0

Posted by Mark J. Wielaard on February 17, 2020 10:47 AM

Andreas Enge announced the GNU Social Contract version 1.0:

Hello all,

just a public heads-up on progress on the GNU Social Contract. Following our initially announced timeline, we put the first draft online at the end of January. The goal of the document is to formulate a common core set of values for the GNU Project, on which we can jointly build to form a stronger community. It is both an agreement among us, GNU contributors, and a pledge to the broader free software community. Additionally, we think it can be a first step towards formalising a transparent and collective governance of the GNU Project.

We received a number of questions and suggestions on the first draft of the document, a testament to our collective approach to shaping a document that can help us move forward together. We discussed all the input with great care; it is documented, together with the adopted resolutions, at:


The result of all this is version 1.0 of the GNU Social Contract, see


We believe that the outcome is an even snappier document, which lays out our common foundations even more clearly, and thank everyone of you who contributed to improving it.

We have invited all GNU maintainers to send a message by February 24, the end of the endorsement period, either endorsing this version 1.0 of the GNU Social Contract or declaring that they do not wish to adhere to it. The current status is maintained at:


Happy “I Love Free Software” day, and thank you for supporting GNU!



Posted by Solanch96 on February 17, 2020 01:00 AM

TELEGRAM is an instant messaging application. It is cross-platform. All messages are encrypted and leave no trace on the app's servers. It carries no advertising of any kind. PYTHON is an easy-to-use programming language. Python is: Interpreted: it runs without needing to be processed by a compiler, and errors are detected at… Continue reading TELEGRAM+ PYTHON EN FEDORA

Episode 183 - The great working from home experiment

Posted by Open Source Security Podcast on February 17, 2020 12:13 AM
Josh and Kurt talk about a huge working-from-home experiment because of the Coronavirus. We also discuss some of the advice going around about the outbreak, as well as how humans are incredibly good at ignoring good advice, often to their own peril. Also, an airplane wheel falls off.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/13175570/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

    No summer training 2020

    Posted by Kushal Das on February 16, 2020 04:32 PM

    No summer training 2020 for me. Last year’s batch was beyond my capability to handle. Most of the participants did not follow anything we taught in the course; instead, they kept demanding more things.

    I have already started receiving mails from a few people who want to join the training in 2020. But there is no positive answer from my side.

    All the course materials are public, and the logs are also available. We managed to continue this training for 12 years, which is way more than I could ever have imagined.

    As I was feeling a bit sad about this, the keynote at Railsconf 2019 from DHH actually helped me feel a lot better.

    Untitled Post

    Posted by Luigi Votta on February 16, 2020 01:06 PM
    Today I tested my Fedora Linux box with linux-kernel 5.5.3-200.

    Fedora 31 : Can be better? part 006.

    Posted by mythcat on February 16, 2020 09:59 AM
    I tried to use SELinux MLS with Fedora 31, as I wrote in my last article, Fedora 31 : Can be better? part 005. After relabeling the files and starting the environment I got multiple errors, so I asked for an answer on the fedoraproject lists. This is an example of the problems of implementing MLS in Fedora; it can be remedied, but the MLS policy is an old part of the SELinux implementation.

    SELinux is preventing su from open access on the file /var/log/lastlog.

    ***** Plugin catchall (100. confidence) suggests **************************

    If you believe that su should be allowed open access on the lastlog file by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    allow this access for now by executing:
    # ausearch -c 'su' --raw | audit2allow -M my-su
    # semodule -X 300 -i my-su.pp
    I tried to fix it but got this error:
    [root@desk mythcat]# ausearch -c 'su' --raw | audit2allow -M my-su
    compilation failed:
    my-su.te:36:ERROR 'syntax error' at token 'mlsconstrain' on line 36:
    mlsconstrain file { write create setattr relabelfrom append unlink link rename mounton } ((l1 eq l2 -Fail-)
    or (t1 == mlsfilewritetoclr -Fail-) and (h1 dom l2 -Fail-) and (l1 domby l2) or (t2 ==
    mlsfilewriteinrange -Fail-)
    and (l1 dom l2 -Fail-) an
    # mlsconstrain file { read getattr execute } ((l1 dom l2 -Fail-) or (t1 ==
    mlsfilereadtoclr -Fail-)
    and (h1 dom l2 -Fail-) or (t1 == mlsfileread -Fail-) or (t2 == mlstrustedobject -Fail-) ); Constraint DENIED
    /usr/bin/checkmodule: error(s) encountered while parsing configuration

    Do not upgrade to Fedora 32, and do not adjust your sets

    Posted by Adam Williamson on February 15, 2020 01:30 AM

    If you were unlucky today, you might have received a notification from GNOME in Fedora 30 or 31 that Fedora 32 is now available for upgrade.

    This might have struck you as a bit odd, it being rather early for Fedora 32 to be out and there not being any news about it or anything. And if so, you’d be right! This was an error, and we’re very sorry for it.

    What happened is that a particular bit of data which GNOME Software (among other things) uses as its source of truth about Fedora releases was updated for the branching of Fedora 32…but by mistake, 32 was added with status ‘Active’ (meaning ‘stable release’) rather than ‘Under Development’. This fooled poor GNOME Software into thinking a new stable release was available, and telling you about it.

    Kamil Paral spotted this very quickly and releng fixed it right away, but if your GNOME Software happened to check for updates during the few minutes the incorrect data was up, it will have cached it, and you’ll see the incorrect notification for a while.

    Please DO NOT upgrade to Fedora 32 yet. It is under heavy development and is very much not ready for normal use. We’re very sorry for the incorrect notification and we hope it didn’t cause too much disruption.

    rpminspect-0.11 released

    Posted by David Cantrell on February 14, 2020 07:50 PM

    The first release of rpminspect in 2020! I released rpminspect-0.11 today. Aside from the usual load of bug fixes and performance improvements, this release comes with a range of new features: new inspections, expanded configuration file options, and runtime profiles. Here are some highlights:

    • Caches. librpminspect will cache file MIME types, RPM headers, and file checksums. These are used throughout the library and it makes sense to save the values for later use. The result is faster execution time and lower memory usage.
    • The annocheck inspection is now present. This runs annocheck on ELF objects. You can define different annocheck tests to run in rpminspect.conf in the [annocheck] section. The left of the equal sign is the name of the annocheck test you’re defining and the right of the equal sign are the arguments to the annocheck program.
    • Add the filesize inspection to report file size increases or decreases between builds.
    • Add the permissions inspection to report stat(2) changes between builds. The stat-whitelist per product release is checked to see if anything is allowed to be setgid or setuid. If those settings are found but it’s not whitelisted, report a security review result.
    • rpminspect.conf has a [specname] section to help guide the specname inspection which is useful for projects like SCL which do some package name modifications on top of what the actual spec file is named.
    • Add the DT_NEEDED inspection to compare the DT_NEEDED entries in dynamic ELF executables and shared libraries between before and after builds.
    • Use the Freedesktop.org icon lookup routine for the desktop inspection when validating desktop entry files.
    • Use libcap rather than libcap-ng in the inspections that do things with POSIX capabilities.
    • Add the capabilities inspection to report capabilities(7) changes between builds. Checks findings against the capabilities whitelist for the product release. Anything found that is not whitelisted will report a security review result.
    • Add the kmod inspection to check for new or removed kernel module parameters. For builds with the same version, report lost module parameters as VERIFY. For builds with different versions, report lost module parameters as INFO. Always report added module parameters as INFO.
    • Shorten the output of rpminspect -l. To get the long descriptions, pass -v with -l.
    • Properly distinguish between built noarch packages and source packages.
    • Change the result severity in the upstream inspection based on package versions. If the builds have the same version, the severity is VERIFY if the sources change. If the builds have different versions, the change is expected.
    • Handle s390 and s390x ELF objects in the elf inspection.
    • Handle ENOENT failures in realpath(3) when unpacking archives.
    • Implement runtime profiles. These are like rpminspect.conf files that are loaded after rpminspect.conf and can further override settings. Profiles are selected with the -p option. Any section in rpminspect.conf except [common] is valid in a profile. Profiles are files of the format /etc/rpminspect/profiles/NAME.conf where NAME is what you pass to the -p option.
    • Handle symlinks correctly when copying build trees.
    • Fix Koji scratch build handling in librpminspect. There may be instances where the XMLRPC API gives nil return values.
    • Plus much more. See the RPM changelog or git commit history for more details.
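As a sketch of the configuration features mentioned above, an rpminspect.conf fragment and a profile might look like this (the test name and arguments are hypothetical; consult the rpminspect documentation for the real keys and values):

```ini
# /etc/rpminspect/rpminspect.conf (fragment, hypothetical values)
[annocheck]
# left of '=' names the annocheck test being defined; right of '='
# holds the arguments passed to the annocheck program
hardened = --verbose

# /etc/rpminspect/profiles/scl.conf
# a runtime profile, selected with: rpminspect -p scl
# it may override any rpminspect.conf section except [common]
```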

    See https://github.com/rpminspect/rpminspect/releases/tag/v0.11 for more information. Builds are available in my Copr repository and in Fedora rawhide.

    In addition to the new rpminspect release, there is also a new rpminspect-data-fedora release. This data file package contains updates that match the changes in this new release of rpminspect. The new rpminspect-data-fedora release is available in my Copr repo and in rawhide.

    Fedora program update: 2020-07

    Posted by Fedora Community Blog on February 14, 2020 05:15 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora this week.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.



    Event | Location | Dates | CfP closes
    Open Source Summit NA | Austin, TX, US | 22–24 Jun 2020 | 16 Feb
    Facebook CentOS Dojo | Menlo Park, CA, US | 24 Apr 2020 | 16 Mar
    All Things Open | Raleigh, NC, US | 18–20 Oct 2020 | 20 Mar

    Help wanted

    Prioritized Bugs

    Bug ID | Status | Component

    Upcoming meetings


    Fedora 32


    • 2020-02-25: 100% Code complete deadline
    • 2020-02-25: Beta freeze begins


    Change | Type | Status
    Enable EarlyOOM | System-Wide | Approved
    Additional buildroot to test x86-64 micro-architecture update | Self-Contained | Ready for FESCo
    Reduce installation media size by improving the compression ratio of SquashFS filesystem | System-Wide | Rejected
    PostgreSQL 12 | Self-Contained | Approved
    ARM Release Criteria Changes | Self-Contained | Ready for FESCo

    Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.


    Bug ID | Bug Status | Component | Blocker Status
    1801820 | NEW | gnome-shell | Proposed (Beta)
    1801272 | NEW | kdb | Proposed (Beta)
    1797274 | MODIFIED | anaconda | Accepted (Beta)
    1767351 | NEW | distribution | Accepted (Beta)
    1795000 | NEW | gnome-session | Accepted (Beta)
    1801353 | MODIFIED | kernel | Accepted (Beta)
    1798792 | ASSIGNED | python-blivet | Accepted (Beta)
    1795524 | ASSIGNED | selinux-policy | Accepted (Beta)

    CPE update

    • Datacenter move
      • Services will run on the ‘Minimum Viable Fedora’ from 20 May through 1 July.
    • FAS replacement project
      • See the CommBlog post for sprints 1 & 2.
      • Join us in #fedora-aaa to discuss the project

    See the CPE weekly for more information.

    The post Fedora program update: 2020-07 appeared first on Fedora Community Blog.

    We're not participating in 'QA of the year' award

    Posted by Kiwi TCMS on February 14, 2020 12:45 PM

    Hello testers, this is the story of how our team is not taking part in the "QA of the year" contest organized by the QA: Challenge Accepted conference, despite being nominated by Alex. In collaboration with Peter Sabev (organizer) we've managed to figure out what happened:

    • On Nov 17th Alex nominated the Kiwi TCMS team for the award
    • Last week Alex discovered our team is not listed on the voting page
    • Then Peter told us he hadn't seen any nomination related to Kiwi TCMS at all, which made everything feel even stranger
    • We've managed to dig out browser history from November and it clearly shows the nomination form was submitted correctly
    • It was even possible to load the confirmation URL and edit the submission
    • Upon second submission the nomination was clearly visible on the other side, Peter confirmed this

    Then, after a few days, we got word back: Peter had figured out what happened. Apparently the same Google form had been opened on two different computers, and one of them overwrote the existing submissions.

    This kind of issue can be avoided by employing the following measures:

    • Make the submission results public so that everyone can verify their nomination is indeed present on the list. This does take away anonymity and can also expose personal information like email/phone/employer; an ID, name, and submission time-stamp, however, would be enough
    • History of edits could also be exposed publicly for extra safety
    • Turn on some sort of overwrite protection similar to what you have for git branches. At the very least have a warning before allowing data overwrite
    • Turn on email confirmation - the existing form didn't have this enabled
    • On our side: double check submission has been received - will put more pressure on the organizing team

    Sadly the issue was discovered after the submission deadline had passed, so Kiwi TCMS can't participate in this year's contest. We wish the rest of the finalists the best of luck, and we're going to see you at QA: Challenge Accepted next month.

    Happy testing!

    PHP Development on Fedora with Eclipse

    Posted by Fedora Magazine on February 14, 2020 08:00 AM

    Eclipse is a full-featured free and open source IDE developed by the Eclipse Foundation. It has been around since 2001. You can write anything from C/C++ and Java to PHP, Python, HTML, JavaScript, Kotlin, and more in this IDE.


    The software is available from Fedora’s official repository. To install it, invoke:

    sudo dnf install eclipse

    This will install the base IDE and Eclipse platform, which enables you to develop Java applications. In order to add PHP development support to the IDE, run this command:

    sudo dnf install eclipse-pdt

    This will install PHP development tools like PHP project wizard, PHP server configurations, composer support, etc.


    This IDE has many features that make PHP development easier. For example, it has a comprehensive project wizard (where you can configure many options for your new projects). It also has built-in features like composer support, debugging support, a browser, a terminal, and more.

    Sample project

    Now that the IDE is installed, let’s create a simple PHP project. Go to File → New → Project. From the resulting dialog, select PHP Project. Enter a name for your project. There are some other options you might want to change, like the project’s default location, enabling JavaScript, or changing the PHP version. See the following screenshot.

    <figure class="wp-block-image size-large"><figcaption>Create A New PHP Project in Eclipse</figcaption></figure>

    You can click the Finish button to create the project or press Next to configure other options like adding include and build paths. You don’t need to change those in most cases.

    Once the project is created, right click on the project folder and select New → PHP File to add a new PHP file to the project. For this tutorial I named it index.php, the conventionally-recognized default file in every PHP project.


    Then add your code to the new file.

    <figure class="wp-block-image size-large"><figcaption>Demo PHP code</figcaption></figure>

    In the example above, I used CSS, JavaScript, and PHP tags on the same page mainly to show that the IDE is capable of supporting all of them together.

    Once your page is ready, you can see the result output by moving the file to your web server document root or by creating a development PHP server in the project directory.

    Thanks to the built-in terminal in Eclipse, we can launch a PHP development server right from within the IDE. Simply click the terminal icon on the toolbar and click OK. In the new terminal, change to the project directory and run the following command:

    php -S localhost:8080 -t . index.php 
    <figure class="wp-block-image size-large"><figcaption>Terminal output</figcaption></figure>

    Now, open a browser and head over to http://localhost:8080. If everything has been done correctly and your code is error-free, you will see the output of your PHP script in the browser.

    <figure class="wp-block-image size-large"><figcaption>PHP output in Fedora</figcaption></figure>

    AAA: FAS replacement project update

    Posted by Fedora Community Blog on February 14, 2020 06:24 AM

    The Community Platform Engineering (CPE) team and community contributors began building the new Fedora Account System (FAS) application on the 8th of January 2020 and completed the first two-week sprint on the 21st of January 2020.

    Sprint one

    This sprint focused on foundational tasks (user stories) to set the stage for our development sprints. Within this sprint the team and contributors made great progress, completing many of these foundational tasks. These included:

    Unfortunately, we could not complete five tasks, due to dependencies we didn’t capture at sprint planning and to user stories initially estimated as Medium (10 hours) actually being Large (15 hours). We will take these as learnings, allowing us to identify such cases in our next couple of sprints.

    These included: 

    These user stories (tasks) have been carried into sprint two along with user stories (tasks) noted below:

    Sprint two

    Sprint two began on the 22nd of January, with completion and review due on the 4th of February. It is important to note that during sprint two a lot of team members traveled to Brno for DevConf and face-to-face team meetings, so we may not meet our commitment of completing all user stories in time for review, but we will try our very best to do so.

    We welcome all feedback and thoughts as we progress through this project; please feel free to comment on any issue to log your thoughts.

    • For more information regarding outstanding issues, please see here.
    • To view our current scrum board, please see here.

    We have just completed sprint two review and have prioritized our backlog for sprint three, and we will include this detail in the CPE Weekly emails – and the next blog post!

    Please reach out to us on our IRC channel (#fedora-aaa) if you have any questions, and links to our GitHub page are listed above.

    The post AAA: FAS replacement project update appeared first on Fedora Community Blog.

    Git commit reordering

    Posted by Pablo Iranzo Gómez on February 13, 2020 07:30 PM

    While I was working on a presentation for my kid’s school (Magnetic field, Aurora, Lunar Phases and Rockets), I added 4 big videos to it (as I was going to use them offline while presenting).

    I know that git is not the place for big binary files, and GitHub even proposed using the LFS backend for them, but as this was just temporary, I went ahead.

    After that commit, I also wrote two more articles, the one on Lego Speed Champions and the one on Galleria.io and PhotoSwipe, so having the big files in between became a problem, since my plan was to remove them in the end.

    Git allows you to create branches at any point and play around with the commits, cherry-picking them into your branch, etc., so to continue working I created a new branch:

    git checkout -b branchwithoutpresentation


    Up to this point, we’ve not done any ‘damage’ to the repository (and we could still revert). Make sure you’re testing on a suitable repository before doing this on valuable data.

    Then, I wanted to remove the ‘problematic commit’ by running:

    git rebase -i HEAD~20

    This way, git opens an editor with the latest 20 commits in the branch so that you can choose to ‘drop’ the problematic ones, in this case the one with the presentation.

    To do so, go to the line describing the presentation’s commit and change pick to d (drop); when the editor saves the changes and exits, the git history is rewritten and the files are dropped.

    We’ve done that only in a new branch, so the original branch with the code (source in my case) still contains the presentation.

    To rewrite the history so that the presentation ends up as the last commit, we need to:

    git checkout source
    git rebase -f branchwithoutpresentation

    The above command rewrites the ‘source’ commits to be on top of the branchwithoutpresentation branch (the one without the presentation). Because the other commits are already present there, git skips them, leaving us with all the commits in order and the presentation as the last one.
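The drop-and-rebase flow above can be reproduced non-interactively on a throwaway repository (commit subjects A, BIG, B, C and all file names here are made up for the demo; GIT_SEQUENCE_EDITOR scripts the pick-to-drop edit you would normally make in the editor):

```shell
# Toy repository: commits A, BIG, B, C, where BIG is the "presentation" commit.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb source
for c in A BIG B C; do echo "$c" > "$c.txt"; git add "$c.txt"; git commit -qm "$c"; done

git checkout -qb branchwithoutpresentation
# script the "change pick to drop" step instead of editing interactively
GIT_SEQUENCE_EDITOR="sed -i '/ BIG\$/s/^pick/drop/'" git rebase -i HEAD~3

git checkout -q source
git rebase -q branchwithoutpresentation
git log --reverse --format=%s    # now: A, B, C, BIG
```

Note how the rebase of source skips B and C (they are already present upstream as identical patches) and replays only BIG on top.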

    This allowed me to continue editing the last commit (git commit --amend --no-edit), adding or removing files always in the same commit, so once the big files were uploaded to YouTube, I could just drop them from the repo, leaving it clean again.

    However, this meant that the commits were now out of order: the latest one was a commit ‘dated’ earlier. Of course, the end result didn’t change, but I wanted my git history to look ‘linear’, so I used the following procedure to ‘insert’ the commit back where it belonged:

    git log  # to get list of commits (write down the commit number for the presentation)
             # also, write down the commit 'after' the presentation should be inserted
    git checkout -b sortedbranch commitafterthepresentationshouldbeinserted
    git cherry-pick commitnumberofthepresentation
    git checkout source # to get back to the regular branch
    git rebase -f sortedbranch

    At this point, the remaining commits of the source branch were added on top of sortedbranch, leaving us with an ordered git log and, in this case, without the big files in the repository.
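    On a scratch repository, the whole reordering looks roughly like this. The branch names match the procedure above, but the commit messages and the awk lookup of the commit hashes are illustrative:

```shell
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email me@example.com
git config user.name me

# History: A, B, then the presentation at the tip.
echo a >  file.txt; git add .; git commit -qm "commit A"
echo b >> file.txt; git add .; git commit -qm "commit B"
echo p > pres.odp;  git add .; git commit -qm "presentation"
git branch -m source

# Look up the commit to move and the commit it should follow.
P=$(git log --format='%H %s' | awk '/presentation/ {print $1}')
A=$(git log --format='%H %s' | awk '/commit A/ {print $1}')

git checkout -qb sortedbranch "$A"   # branch off right after commit A
git cherry-pick "$P" > /dev/null     # re-apply the presentation there
git checkout -q source
git rebase -q -f sortedbranch        # replay the rest of source on top

git log --oneline    # ordered again: commit A, presentation, commit B
```

    The rebase skips the original presentation commit because it is patch-identical to the cherry-picked copy already on sortedbranch, so only the later code commits get replayed.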


    At this point, a simple git push will not let you update your upstream repository, as the rewritten history does not fast-forward from the remote’s point of view, so a git push --force is needed to overwrite the remote repository with your local changes. This is destructive if others have pushed changes to the repository, so be really sure about what you’re doing before you do it.

    Happy git playing!

    Fedora Council November 2019 meeting: Councily business

    Posted by Fedora Community Blog on February 13, 2020 02:54 PM

    This is part two of a four-part series recapping the Fedora Council’s face-to-face meeting in November 2019.

    In addition to the big topic of the Fedora Project Vision, we used the opportunity to cover some other Fedora Council business. Because it’s a lot, we’re breaking the reporting on this into two posts, kind of arbitrarily — here’s the first of those.

    Fedora Objectives and Objective leads

    The Fedora Council’s primary responsibility is to identify the short-, medium-, and long-term goals of the Fedora community and to organize and enable the project to best achieve them. Our mechanism for handling medium-term goals is the Fedora Objectives process. We spent some time reviewing this process and the associated Objective Lead roles.

    Although Objectives were invented to help bring visibility and clarity to big project initiatives, we know there is still a communications gap: most of the community doesn’t know exactly what it means for something to be an Objective, and many people don’t know what the current Objectives even are. Plus, being an Objective Lead is extra work — what’s the benefit? And why are Objective Leads given Council seats rather than just asked to report in periodically?

    We asked the Objective leads how they felt about it. Overall, they found it beneficial to have a seat on the Council. It helps make the work of the Objective more visible and lends credibility to resource requests. The act of writing and submitting an Objective proposal made them organize their thoughts, goals, and plans in a way that’s more easily understood by others.

    But it’s not perfect. The Council can do a better job of onboarding new Objective leads by pairing them with an experienced Council member who can show them the expectations. Part of the problem is that we haven’t clearly identified and documented what those expectations are, so we will work as a Council to improve the documentation and process for Objectives. We also intend to become more rigorous about regularly reviewing the status of Objectives to make sure they’re still providing the planned value to the community. We will also begin setting more concrete end dates for Objectives. They’re not intended to be open-ended, and we want to have a defined process by which they’re extended or retired.

    Council member expectations

    We’re also going to better clarify the expectations for other Council members. The Fedora Council is intended to be a working body that provides active leadership to the community. Part of the expectation-setting is defining a regular schedule for Council face-to-face meetings. Having an extended retreat for Council members the last two years has been productive, both in terms of the output and the improved working relationships of the members. So we want to continue doing this as a regular activity. In addition, one-day meetings before Flock and after DevConf.CZ give us additional checkpoints throughout the year without imposing a heavy travel burden. 


    This whole post is largely about communication — and we talked about Council communications at the hackfest, too. Recently, we tried an idea where we asked Objective leads, body representatives (Diversity & Inclusion, Engineering, and  Mindshare), and Edition representatives to provide regular status reports that we publish to the web. From those status reports, Ben Cotton has been writing summary posts on the Community Blog each month. To be frank, this hasn’t worked so well. For the new year, the Council has asked Ben to develop guidance on how to write status reports and work with the representatives to come up with an update frequency that makes sense for each area. We hope this will provide an easy way to see what key areas of the project are doing at a high level.

    Speaking of the Community Blog, we discussed the ways we communicate. Specifically, the Council will use the Community Blog for contributor-focused communication and Fedora Magazine for user-focused communication. We encourage members of the Fedora community to contribute to both of these sites. We’re going to ask those sites to cross-link to each other to help drive traffic. “Do we have anything to announce” will become a standing question in Council meetings, and when the answer is “yes”, we will prepare a Community Blog or Magazine article as appropriate.

    On the subject of meetings, you may have noticed that we switched from weekly meetings with varying focus to fortnightly meetings to cover ongoing tickets and issues. The idea was to reduce the burden on Council members who have jobs outside of Fedora. The status reports I talked about above were supposed to replace some of the regular reporting. But it turns out the video meetings we did every four weeks were useful, so we’re going to bring those back starting in February. We’ll do a video meeting on the second Tuesday of every month. When that conflicts with the fortnightly IRC meeting, we’ll cancel the IRC meeting.

    The post Fedora Council November 2019 meeting: Councily business appeared first on Fedora Community Blog.

    Insider 2020-02: Portability; secure logging; Mac support; RPM;

    Posted by Peter Czanik on February 13, 2020 12:38 PM

    Dear syslog-ng users,

    This is the 78th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.


    Keeping syslog-ng portable

    I define syslog-ng as an “Enhanced logging daemon with a focus on portability and high-performance central log collection”. One of the original goals of syslog-ng was portability: running the same application on a wide variety of architectures and operating systems. After one of my talks mentioning syslog-ng, I was asked how we ensure that syslog-ng stays portable when almost all CI infrastructure focuses on the 64-bit x86 architecture and Linux. You can learn my answer from:


    Why choose syslog-ng over rsyslog

    A question I often receive is ‘what are the differences between rsyslog and syslog-ng?’ It’s a little tricky to answer, partly because my experience is mostly with syslog-ng, and partly because there are many similarities between the two projects. This is where syslog-ng users help me: they can clearly explain from firsthand experience why they chose syslog-ng. The following blog post includes some of the most common reasons why they chose syslog-ng.


    Secure logging with syslog-ng

    In his presentation at FOSDEM, Stephan Marwedel mentioned a new syslog-ng feature he is working on, namely “(...) the design, implementation, and configuration of the secure logging service. Its aim is to provide tamper evident logging, i.e., to adequately protect log records of an information system against tampering and to provide a sensor indicating attack attempts.” (Marwedel, S. Secure logging with syslog-ng. Retrieved from https://fosdem.org/2020/schedule/event/security_secure_logging_with_syslog_ng/) Learn more about the presentation, and access the slides, at https://fosdem.org/2020/schedule/event/security_secure_logging_with_syslog_ng/ A pull request implementing this feature has already been sent to syslog-ng on GitHub. You can follow its progress at https://github.com/syslog-ng/syslog-ng/pull/3121

    State of syslog-ng on Mac

    Mac support is a returning question among syslog-ng users, especially when I talk to users in the US. For recent releases, each commit is automatically tested on macOS. However, there is not much information available on Mac support. Recently, I bought a MacBook to be able to test and document syslog-ng on Mac. Here are my first experiences and some future plans.


    Installing the latest syslog-ng on openSUSE, RHEL and other RPM distributions

    The syslog-ng application is included in all major Linux distributions, and you can usually install syslog-ng from the official repositories. If the core functionality of syslog-ng meets your needs, use the package in your distribution repository (yum install syslog-ng), and you can stop reading here. However, if you want to use the features of newer syslog-ng versions, for example, sending log messages to Elasticsearch or Apache Kafka, you have to either compile syslog-ng from source, or install it from unofficial repositories. This post explains how to do that.




    Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/

    Google Translate in Sheets

    Posted by Paul Mellors [MooDoo] on February 13, 2020 09:08 AM

    Did you know that you can use Google’s translate feature in the Sheets application? Well, I didn’t, until I wanted to translate something and wondered if there was a way. It turns out there is, and it’s quite easy. Check out the image below.

    In the first column you have the English text, and for the second column, in this case French, the formula you need to add to B2 would be

    =GOOGLETRANSLATE(A2, "en","fr")

    Basically, this translates A2 from English to French. You can use other language codes too, for example ja for Japanese.

    Read more here.


    DevConf CZ 2020: play by play

    Posted by Justin W. Flory on February 13, 2020 08:00 AM
    DevConf CZ 2020: play by play

    DevConf CZ 2020 took place from Friday, January 24th to Sunday, January 26th in Brno, Czech Republic:

    DevConf.CZ 2020 is the 12th annual, free, Red Hat sponsored community conference for developers, admins, DevOps engineers, testers, documentation writers and other contributors to open source technologies. The conference includes topics on Linux, Middleware, Virtualization, Storage, Cloud and mobile. At DevConf.CZ, FLOSS communities sync, share, and hack on upstream projects together in the beautiful city of Brno, Czech Republic.


    This is my third time attending DevConf CZ. I attended on behalf of RIT LibreCorps for professional development, before a week of work-related travel. DevConf CZ is also a great opportunity to meet friends and colleagues from across time zones. This year, I arrived hoping to better understand the future of Red Hat’s technology, see how others are approaching complex problems in emerging technology and open source, and of course, to have yummy candy.

    Sessions: Play-by-play

    Event reports take many forms. My form is an expanded version of my session notes along with key takeaways. Said another way, my event report is biased towards what is interesting to me. You can also skim the headings to find what interests you.

    Diversity and inclusion meet-up

    Would you like to meet other attendees who stand under the umbrella of “Diversity and Inclusion”, or would you like an introduction to what diversity and inclusion is and why it’s a good thing? This is the session for you! All are welcome!

    Imo Flood-Murphy

    This was a short, informal session run by Imo to network and get a high-level introduction to diversity and inclusion in open source. Everyone in the room introduced themselves and gave a short explanation of who they were or what projects they represent. I appreciated the opportunity to meet others and better understand how Red Hat approaches diversity and inclusion.

    A suggestion for next time is to allow more unstructured time for conversations. I think fun icebreakers get folks comfortable in a short amount of time to help make connections for the rest of the weekend.

    Lessons learned from testing over 200,000 lines of Infrastructure Code

    If we are talking that infrastructure is code, then we should reuse practices from development for infrastructure, i.e.

    1. S.O.L.I.D. for Ansible.

    2. Pair devops-ing as part of XP practices.

    3. Infrastructure Testing Pyramid: static/unit/integration/e2e tests.

    Lev Goncharov

    Lev shared best practices on building sustainable, tested infrastructure. Infrastructure-as-Code (IaC) was important to how T-Systems scaled their infrastructure over time.

    My key takeaways:

    1. Smaller components:
      1. More sustainable
      2. Easier to maintain
      3. Easier to test
    2. Ansible Roles encourage best use practices for Ansible
    3. Spreading knowledge is essential (if nobody understands it, the code is broken)
    4. Code review creates accountability
    5. Use static analysis tools (Shellcheck, Pylint, Ansible Lint)
    6. Write unit tests (shUnit2, Rspec, Pytest, Testinfra, Ansible Molecule)

    Content as code, technical writers as developers

    In the open-source project Kyma, documentation is an integral part of code delivery. We, the project’s Information Developers, believe that using the same tools and methodology as your good old code developers, we can create comprehensive and accurate documentation. During our talk, we’ll share the whys and hows of our approach, showing you that the “developer” in “Information Developer” isn’t there just because it sounds cool. We’ll prove that creating documentation goes beyond linguistic shenanigans and salvaging whatever information there is from a trainwreck that is the developer’s notes. Testing solutions, finding our way around Kubernetes, tweaking the website, engaging with the community are just a few examples of what keeps us busy every day.

    Barbara Czyz, Tomasz Papiernik

    “Information Developers” is a cool phrase I learned. Barbara and Tomasz explained the value of technical writing and asserted documentation should live close to project code.

    My key takeaways:

    1. Documenting processes like release notes enables others to join with fewer barriers
    2. Docs-as-Code (DaC): Visibility of docs across development process is important
      1. Placing docs with code encourages feedback loops and avoids silos
    3. Put links to docs in visible places (e.g. API messages, console messages)
    4. Management aside: Emphasize/incentivize value of technical writing in your team
    5. Remember bridges across skill areas are possible (technical writers can be community-oriented people too)

    Uncharted waters: Documenting emerging technology

    We can’t help but feel the lure towards the hot new thing, especially when it comes to technology. Part of that lure is the breaking of ground, venturing into the unknown, and working on solutions to new problems. But a lot of the same things that make emerging technology fun and exciting to work on are exactly why it can be difficult to document. These challenges are quite different to those associated with mature products.

    This talk is for anyone working on new products and emerging technology, or just interested in learning about fast-moving documentation. It is for the developer as much as it is for the writer, since it usually falls to them to write the early docs before a writer is added to the team.

    Andrew Burden

    This was the talk I didn’t know I needed to go to.

    Lately I work with “emerging technology,” which means different things to different people. Regardless of what emerging tech means to you, Andrew focused on how to write documentation in a fast-paced environment with “pre-release” technology, where things change fast and suddenly. Normally this is an excuse not to write docs, but Andrew showed that, yes, it is possible to write good docs even when context changes fast and often.

    Key considerations of fast-paced technical writers

    An even balance of these considerations helps get into a user’s mindset:

    1. Scope / scale of release
    2. Release schedule
    3. Developer meetings / face-time
    4. Exposure with $TECHNOLOGY
    5. Deployment experience with $TECHNOLOGY

    Surviving the information wall

    The “information wall” is the endless wall of information and things to know about a project. If information is endless, how do technical writers survive?

    • Take notes: Be like a scientist
    • Take notes about your notes
    • Be organized with your notes

    Obviously Andrew was getting at the value of note-taking. Practicing note-taking skills is critical to keep up with the pace of change.

    “Multi-Version Syndrome”

    Sometimes you are writing docs for features that will not ship in the next release. There is a risk of losing information across multiple releases (e.g. publishing the wrong thing too soon, or the right thing too late). Clarify the release schedule as you go. A good safeguard against losing information is to rigorously understand release cycle cadence and priority.

    If your product isn’t mature yet, anticipate change instead of avoiding it.

    Access to technology is critical

    Technical writers are often User 0. To understand the technology, you need access. There are interactive and non-interactive ways of getting access. Interactive ways are preferred because they are always reproducible.

    • Interactive
      • Deploy your own
      • Get someone else to deploy it for you (but lose install context)
    • Non-interactive

    Other takeaways

    • Screenshots have a high maintenance cost; avoid if possible
      • Sometimes good stop-gaps until something more maintainable
    • Where to begin? Make a table-of-contents for the Minimum Viable Product
      • Never underestimate outlines (ahem, like how I wrote this blog post…)
    • Avoid documentation scramble near release day:
      • Make lists / check-lists
      • Take more notes
      • Pre-release checklist
      • Think now, and for the future
    • Audit your docs: On-boarding new people is a powerful opportunity to test out your docs

    Thanks Andrew for a deep dive on this narrow but important topic.

    Community management: not less than a curry

    Every volunteer joins an open source community for a reason. The reasons could range from technical gains to finding his/her/their passion. This community of diverse volunteers requires a leader who can not just mentor them with their interests but also manage the community activities in terms of community engagement and planning. A community manager is not less than a candle of light, and in this presentation I would be highlighting my learnings and experiences about starting a community from scratch around a project and maintaining healthy community management practices.

    Prathamesh Chavan

    Prathamesh designed an activity to help the audience understand community management. My key takeaway was community management is about connecting and understanding others as their authentic self.

    In the activity, Prathamesh passed papers and pens to the audience. His session had three steps. Between each step, all attendees traded papers with another attendee:

    1. Define a project idea (why, how, what)
    2. Identify challenges to idea (i.e. questions)
    3. Answer challenges

    It reminded me of a similar workshop I attended before. This inspired me to work on my own workshop idea for a future conference.

    Cognitive biases, blind spots, and inclusion

    Open source thrives on diversity. The last couple of years have seen huge strides in that aspect with codes of conduct and initiatives like the Contributor Covenant. While these advancements are crucial, they are not enough. In order to be truly inclusive, it’s not enough for the community members to be welcoming and unbiased; the communities’ processes and procedures must really support inclusiveness by not only making marginalized members welcome, but allowing them to fully participate.

    Allon Mureinik

    Allon recognizes the importance of diversity, but asking for improved diversity is one side of the coin. A friend recently shared a powerful quote with me: “If diversity is being invited to the dance, inclusion is being invited to dance.” Allon’s message was to dig deeper on including marginalized people in our project communities.

    He identified ways we accidentally make our communities less inclusive because of our cognitive/unconscious biases. Everyone has blind spots! Allon suggested ways to be more conscious about inclusion in open source:

    • Knowledge barriers
      • Procedural knowledge, not just technical
        • How do you submit code? File a bug? Make meaningful contributions? These need to be documented
      • Documentation fosters inclusivity
    • Language barriers
      • Working proficiency in English not universal
      • Conversations have extra barriers (e.g. communicating complex ideas, understanding advanced words)
    • Time barriers
      • Work schedules no longer 9 to 5
      • Remember other folks in different time zones
      • On giving feedback: Fast is not a metric! Be smart
        • Merging PRs while others are away, or shortly after opening it
        • Allow time for input on non-trivial changes
    • Transparency barriers
      • If it didn’t happen in the open, it didn’t happen
      • Negative example: Contributor makes a PR, reviewer has face-to-face conversation with contributor, reviewer merges PR without public context
      • Default to open: in many ways
        • If you can’t be open, at least be transparent

    Diversity in open source: show me the data!

    How diverse is your work environment? Diverse communities are more effective, they allow us to share the strengths of the individuals who make up the community. Have you ever looked around and noticed that most of our Open Source communities are predominantly male? Why do you think that is? We’ll use gender diversity as a case study and share some intriguing data points. Let us convince you why it’s so important.

    Regardless of your gender, we would love for you to join us! We will also give you some tips on how you can make a difference.

    Serena Chechile Nichols, Denise Dumas

    Serena and Denise divided the talk into two sections: metrics and action. Their presentation brought the audience onto the same page by visiting a variety of metrics, then transitioned into an empowering discussion about changing the trends we see.

    Next time, I hope to see expanded messaging by defining diversity beyond only women. Diversity was frequently tied to gender participation metrics in open source. While women are underrepresented, there are additional aspects of identity that can compound discrimination. Race, socioeconomic status, nationality, sexual orientation, and more also play a part in understanding challenges collectively faced in inclusion work.

    The data

    • Gender differences by # of contributors:
      • GSoC 2018: 11.6% female-identifying contributors
      • OpenStack: 10.4% female-identifying contributors
      • Linux kernel: 9.9% female-identifying contributors
    • U.S. Dept. of Labor: 22.2% of technical roles filled by women
      • 2014-2019: More women entering tech jobs at companies like Apple, Microsoft, Google, etc.
    • Years of experience by gender (<9 years):
      • 66.2% female
      • 52.9% non-binary/queer
      • 50.1% male
    • GitHub user and developer survey:
      • 95% male
      • 3% female
      • 1% non-binary

    Let’s make things better

    Serena and Denise asserted that diversity creates change. All changes come with challenges. Diversity can increase the friction in the process, but that is okay. They emphasized the need for multiple perspectives to see past our initial biases (conveniently covered by Allon in the previous talk).

    This transitioned to questions, comments, and thoughts from the audience. One interesting point was using the phrase, “we don’t do that here” to create and set norms. I gave a suggestion to look at projects you already participate in and see if there is a diversity and inclusion effort there already. If there is, see if there are ways to help and get involved. If not, consider starting one (or network with the Open Source Diversity community).

    To wrap up, one recurring theme of Serena and Denise’s talk is to make time to step back and evaluate the bigger picture. Questioning our biases is an important skill to practice. We need the space and time to recompute!

    Candy Swap

    Do you have a unique sweet dessert or candy from your country or hometown? Do you love to try new and exciting foods from around the world? Spend an hour with fellows as we share stories and candies from the world with each other. Participants are invited to bring a unique confectionary or candy from their country or city to share with multiple other people. Before going around to try yummy things, all participants explain what item they bring and any story about its origins or where it is normally used. After sharing, everyone who brought something rotates around to try candies brought by others. After all participants have had a chance to sample, the rest of the community is invited to come and try anything remaining.

    Jona Azizaj, Justin W. Flory

    I am biased when I say this is one of my favorite parts of conferences I go to. Jona originally proposed the candy swap for DevConf CZ. After unexpectedly adding DevConf CZ to my travel list for 2020, we teamed up to share the sweet tradition from Fedora Flock to DevConf CZ! This is one of my favorite conference traditions because I get to know other attendees in a context outside of technology. And food is always an easy way to win me over.

    Instead of listening to me, see what other people have to say about it:

    [Embedded tweets from attendees]

    From Outreachy to cancer research

    The Outreachy program is helping women and other underrepresented people to make their first steps in a tech career: picking a project, making first open source contributions, working on an assigned project, and learning from advanced people. But what happens when these three months are over? Can Outreachy be a life-changing experience?

    I will share my story of conversion from a chemist and full time parent into a Fedora Outreachy intern and how I found my place as a junior software developer in cancer genomics research at IRB Barcelona.

    Lenka Segura

    This was a favorite of the weekend. “Fedora Outreachy intern Lenka Segura on how Outreachy opened the door for her career to cancer research at IRB Barcelona!”

    I put effort into live-tweeting a Twitter thread. Get the full scoop there!

    [Embedded tweet: live-tweet thread]

    Connect and grow your community through meetups

    Open source communities collaborate in a multitude of ways – chatting on irc, submitting issues and contributing code on GitHub, discussing and sharing ideas on reddit and other social channels. Face to face gatherings add another dimension to that, where community members can learn and share their experiences. Local meetups provide a venue for people with similar interests to socialize and connect. However, organizing meetups is not trivial. How do we encourage and motivate the community to arrange meetups, and to keep the momentum? In my one year with the Ansible community, we have doubled the number of active meetups in Europe. These meetups are community driven, rather than Red Hat. Find out how we use metrics to analyze the situation and needs, and the steps we are taking to reach our goals of connecting with even more community members. Learn from our mistakes and challenges (100 RSVPs and only 20 turned up?), plus some tips to make your meetups more inclusive.

    Carol Chen

    Carol explained the role of local meet-ups around the world in building communities around software projects. She emphasized that single metrics are not always useful, so it is more helpful to evaluate on multiple areas. The most useful takeaway for me was the 5 W’s: why, who, what, when, where.

    • Why? Common curiosity (noticing something new in your community)
    • Who? Connections and networking
    • What? Concise, compelling content
      • Consider venue travel (how to make it worth their while?)
      • Provide alternatives to git-based submissions
      • All talks don’t have to be technical! Diversify to appeal to wider audiences
        • Announcements for future events, work-life talks
        • We are more than just the technology we work with
    • When? Consistency
      • Helps with venue scheduling
      • Helps retain attendee focus and build habits

    Carol also gave suggestions for common points to think about for improved inclusion. All of these need active, not passive inclusion.

    • Special needs / disabilities
    • Food allergies
    • Beverage preference (often alcohol/non-alcoholic)
    • Language
    • Traffic-light communication stickers
    • Photography lanyards
    • Gender pronouns

    Beyond DevConf CZ

    While the sessions are excellent and fulfilling (and sometimes frustrating when you miss a good talk with a full room), DevConf is also more than the sessions. It’s also the people and conversations that happen in the “hallway track.” It was nice to see many old friends and make new ones.

    I spent a few extra days before and after DevConf CZ in Brno. In some of that time, my colleague Mike Nolan and I rehearsed in-person for our FOSDEM talk the following weekend (to come in a future blog post). I also enjoyed coffee and waffles with Marie, Sumantro, and Misc!

    [Photo gallery: A few memories of a great week in Brno]

    Until next time!

    I learn a lot and have a lot of fun at DevConf CZ. I’m happy to return for a third year. Hats-off to the organizers and volunteers who pulled it all off. Each year, DevConf gradually makes improvements. It’s nice to see inclusion prioritized across the board.

    Thanks also goes out to RIT LibreCorps for sponsoring my trip. Extra thanks to Jona Azizaj!

    The post DevConf CZ 2020: play by play appeared first on Justin W. Flory's Blog.

    Using Zuul CI with Pagure.io

    Posted by Adam Williamson on February 13, 2020 02:15 AM

    I attended Devconf.cz again this year – I’ll try and post a full blog post on that soon. One of the most interesting talks, though, was CI/CD for Fedora packaging with Zuul, where Fabien Boucher and Matthieu Huin introduced the work they’ve done to integrate a specific Zuul instance (part of the Software Factory effort) with the Pagure instance Fedora uses for packages and also with Pagure.io, the general-purpose Pagure instance that many Fedora groups use to host projects, including us in QA.

    They’ve done a lot of work to make it as simple as possible to hook up a project in either Pagure instance to run CI via Zuul, and it looked pretty cool, so I thought I’d try it on one of our projects and see how it compares to other options, like the Jenkins-based Pagure CI.

    I wound up more or less following the instructions on this Wiki page, but it does not give you an example of a minimal framework in the project repository itself to actually run some checks. However, after I submitted the pull request for fedora-project-config as explained on the wiki page, Tristan Cacqueray was kind enough to send me this as a pull request for my project repository.

    So, all that was needed to get a kind of ‘hello world’ process running was:

    1. Add the appropriate web hook in the project options
    2. Add the ‘zuul’ user as a committer on the project in the project options
    3. Get a pull request merged to fedora-project-config to add the desired project
    4. Add a basic Zuul config which runs a single job

    After that, the next step was to have it run useful checks. I set the project up such that all the appropriate checks could be run just by calling tox (which is a great test runner for Python projects) – see the tox configuration. Then, with a bit more help from Tristan, I was able to tweak the Zuul config to run it successfully. This mainly required a couple of things:

    1. Adding nodeset: fedora-31-vm to the Zuul config – this makes the CI job run on a Fedora 31 VM rather than the default CentOS 7 VM (CentOS 7’s tox is too old for a modern Python 3 project)
    2. Modifying the job configuration to ensure tox is installed (there’s a canned role for this, called ensure-tox) and also all available Python interpreters (using the package module)
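    For reference, a minimal tox configuration of the kind described above might look like the following sketch; the environment names, dependencies, and commands here are illustrative, not the project's actual tox.ini:

```ini
[tox]
# Environments run by a plain `tox` invocation (the same thing the Zuul job does)
envlist = py38,flake8

[testenv]
# Hypothetical test environment: run the unit tests with pytest
deps = pytest
commands = pytest

[testenv:flake8]
# Hypothetical lint environment
deps = flake8
commands = flake8 .
```

    Because everything is driven by `tox`, the CI job stays trivial: it only needs tox and the Python interpreters installed, and the project itself defines what "the checks" are.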

    This was all pretty small and easy stuff, and we had the whole thing up and running in a few hours. Now it all works great, so whenever a pull request is submitted for the project, the tests are automatically run and the results shown on the pull request.

    You can set up more complex workflows where Zuul takes over merging of pull requests entirely – an admin posts a comment indicating a PR is ready to merge, whereupon Zuul will retest it and then merge it automatically if the test succeeds. This can also be used to merge series of PRs together, with proper testing. But for my small project, this simple integration is enough so far.

    It’s been a positive experience working with the system so far, and I’d encourage others to try it for their packages and Pagure projects!

    Devconf.cz 2020

    Posted by Kevin Fenzi on February 13, 2020 01:12 AM

    This year again I had the honor of being able to attend devconf.cz. Many thanks to Red Hat (My employer) for sending me to the conference (it also allowed me to attend some work meetings after the conference).

    The trip out to Brno was much as it has been for me in the past, except this time it was even longer since the Portland to Amsterdam flight I used to take is no longer offered, so I had to go from Portland to Seattle and then Amsterdam. Due to various scheduling issues I also flew into Vienna this time instead of Prague. No particular problems on the trip, just a long haul. The train from Vienna was nice and clean and fast and comfortable.

    Devconf.cz runs over 3 days (friday/saturday/sunday) and is jam-packed with talks and people to talk to. A number of talks I wanted to attend this year were full and I couldn’t get in. 🙁 Tickets also “sold” out really fast this year (in days) after they became available. I was amazed in the opening session how many people raised their hands that they were attending for the first time (seemed like >50%?). That’s really a great sign for the health of open source, to have that many new people coming in and so many in general.

    Friday opened with a keynote about Open Source SRE (Site reliability Engineering) in several Red Hat OpenShift teams. They did a good job explaining how they divided things up and how they tried to find issues before their customers even knew about them.

    I then took the hallway track for a few sessions. So many people I work with over IRC/email were there to talk to face to face. Lots of new folks as well.

    The next session I hit was “Ansible by pull request: a gitops story”. A good practical story of using git, ansible, and Ansible Tower to manage and deploy things. A good introductory talk on the concept of ‘gitops’ (using git for operations settings).

    I then ran out for a lovely lunch at a local pizza place. Was a great chance to catch up with teammates. Unfortunately it took a while and I missed the talk on Rawhide gating that I was hoping to go to. I hear it went really well however. 🙂

    More sessions and hallway meetings and then off to a lovely team dinner. Had a great time there and then some of us went out for more beers after dinner and had an even more amusing time.

    Saturday I missed the keynote while talking to various people, then went off to a talk by Ben Cotton: “We won. Now what?”. He talked about things going on in the open source movement these days and what they might or might not mean for the future. I had actually heard of all these events, but hadn’t really thought much about what they might mean, so it was some good thought-provoking fodder.

    I managed to miss the coffee lovers meetup sadly, but then did manage to get into the packit talk. Great stuff there: automating your package builds and tests in Fedora from upstream projects. Lots of nice questions too, which they actually had answers for! 🙂

    Tim Burke, a long time Red Hatter, gave a very fun talk called “Teamwork lessons learned in 3500km hiking” about his hiking the Appalachian Trail. There were really a lot of great things in this talk; I recommend watching the video of it. Some highlights: try and do some random act of ‘trail magic’ kindness each week. Don’t worry about faraway things, just live in the moment. Lots of great life advice.

    Then it was time for slideshow karaoke. Always a ton of laughs. The premise: you get a set of random, advancing slides and you have to come up with some coherent talk using them. I think people are learning that it’s best to come with some idea of what you want to talk about, tie it into the first slide, and then try and let the audience figure out how the other slides fit in. Some of them were pretty well done this time.

    This year there was a group of us that wasn’t too interested in the devconf party, so we just went for a nice quiet dinner and more relaxing evening.

    Sunday I once again missed most of the keynote by stopping to talk with folks and getting coffee, etc. But I managed to make the “Alternatives to modularity” talk. This was basically the same proposals as were floated on the devel list, with more detail. There unfortunately wasn’t much Q&A time, and the feedback in the time they did have seemed pretty harsh to me. I personally think the hardest part of their alternative is getting packages to be parallel installable; sometimes that’s really, really not easy. Aside from that though, I think their proposal would work and be much simpler. I do hope they try to pursue the approach and convince module maintainers to switch. If we see after a while that no one is using modules, they will have won.

    All in all, another fun time at devconf. Lots of people, great talks, great area/venue. I recommend it. Of course, get your tickets as soon as they go on sale, as I am sure the 2021 devconf.cz will sell out as quickly as this year’s did. The number of new people really encouraged me; open source is continuing on to the next generation, really great to see!

    Kiwi TCMS 8.0

    Posted by Kiwi TCMS on February 12, 2020 09:45 PM

    We're happy to announce Kiwi TCMS version 8.0!

    IMPORTANT: this is a major release which includes important database and API changes, several improvements and bug fixes. Multiple API methods are now incompatible with older releases and extra caution needs to be applied when upgrading via docker-compose.yml because newer MariaDB versions are breaking direct upgrades from existing installations!

    You can explore everything at https://public.tenant.kiwitcms.org!

    Supported upgrade paths:

    5.3   (or older) -> 5.3.1
    5.3.1 (or newer) -> 6.0.1
    6.0.1            -> 6.1
    6.1              -> 6.1.1
    6.1.1            -> 6.2 (or newer)

    Docker images:

    kiwitcms/kiwi       latest  71a55e353da2    557 MB
    kiwitcms/kiwi       6.2     7870085ad415    957 MB
    kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955 MB
    kiwitcms/kiwi       6.1     b559123d25b0    970 MB
    kiwitcms/kiwi       6.0.1   87b24d94197d    970 MB
    kiwitcms/kiwi       5.3.1   a420465852be    976 MB

    Changes since Kiwi TCMS 7.3


    • Update Django from 3.0.2 to 3.0.3
    • Update django-grappelli from 2.13.3 to 2.14.1
    • Update markdown from 3.1.1 to 3.2
    • Update python-gitlab from 1.15.0 to 2.0.1
    • Update pygithub from 1.45 to 1.46
    • Allow customization of test execution statuses via admin. For more information see https://kiwitcms.readthedocs.io/en/latest/admin.html#test-execution-statuses. Fixes Issue #236
    • Add passing rate chart to Execution trends telemetry
    • Documentation updates (@Prome88)


    This release adds several migrations which alter the underlying database schema by renaming multiple columns.


    • SQLite has very poor capabilities for altering schema and it will break when run against an existing database! If you had deployed Kiwi TCMS with SQLite for production purposes, you will not be able to upgrade! We recommend switching to Postgres first and then upgrading!

    • docker-compose.yml has been updated from MariaDB 5.5 to MariaDB 10.3. The 10.x MariaDB containers change their datadir configuration from /var/lib/mysql to /var/lib/mysql/data! We recommend first upgrading your MariaDB version, using Kiwi TCMS 7.3 and afterwards upgrading to Kiwi TCMS 8.0:

      1. Backup existing database with:

        docker exec -it kiwi_db mysqldump -u kiwi -pYourPass kiwi > backup.sql
      2. docker-compose down

      3. docker volume rm kiwi_db_data - will remove existing data volume b/c of incompatibilities between different MariaDB versions

      4. docker-compose up - will recreate data volume with missing data. e.g. manage.py showmigrations will report that 0 migrations have been applied.

      5. Restore the data from backup:

        cat backup.sql | docker exec -u 0 -i kiwi_db /opt/rh/rh-mariadb103/root/usr/bin/mysql kiwi

        note: This connects to the database as the root user

      6. Proceed to upgrade your Kiwi TCMS container!


    • Remove model fields of type AutoField. They are a legacy construct and shouldn't be specified in the source code! Django knows how to add them dynamically. These are:
      • Tag.id
      • TestCaseStatus.id
      • Category.id
      • PlanType.id
      • TestExecutionStatus.id
    • Remove db_column attribute from model fields
    • Rename several primary key fields to id:
      • Build.build_id -> Build.id
      • TestRun.run_id -> TestRun.id
      • TestPlan.plan_id -> TestPlan.id
      • TestCase.case_id -> TestCase.id
      • TestExecution.case_run_id -> TestExecution.id



    The database schema changes mentioned above affect multiple API methods in a backwards-incompatible way! There is a possibility that your API scripts will also be affected. You will have to adjust those to use the new field names where necessary!


    • Methods Build.create(), Build.filter() and Build.update() will return id instead of build_id field
    • Method TestRun.get_cases() will return execution_id instead of case_run_id field and id instead of case_id field
    • Methods TestRun.add_case(), TestExecution.create(), TestExecution.filter() and TestExecution.update() will return id instead of case_run_id field
    • Methods TestRun.create(), TestRun.filter(), TestRun.update() will return id instead of run_id field
    • Methods TestPlan.create(), TestPlan.filter() and TestPlan.update() will return id instead of plan_id field
    • Methods TestCase.add_component(), TestCase.create(), TestCase.filter() and TestCase.update() will return id instead of case_id field


    Kiwi TCMS automation framework plugins have been updated to work with the newest API. At the time of Kiwi TCMS v8.0 release their versions are:

    • kiwitcms-tap-plugin v8.0.1
    • kiwitcms-junit.xml-plugin v8.0.1
    • kiwitcms-junit-plugin v8.0

    Bug fixes

    • Allow displaying lists with more than 9 items when reviewing test cases. Fixes Issue #339 (Mfon Eti-mfon)
    • Make tcms.tests.storage.RaiseWhenFileNotFound capable of finding static files on Windows, which enables development mode for folks not using a Linux environment. See SO #55297178 (Mfon Eti-mfon)
    • Allow changing test execution status without adding comment. Fixes Issue #1261
    • Properly refresh test run progress bar when changing statuses. Fixes Issue #1326
    • Fix a bug where updating test cases from the UI was causing text and various other fields to be reset. Fixes Issue #1318


    • Extract attachments widget to new template. Fixes Issue #1124 (Rosen Sasov)
    • Rename RPC related classes. Fixes Issue #682 (Rosen Sasov)
    • Add new test (Mariyan Garvanski)
    • Start using GitHub actions, first for running flake8
    • Remove unused TestCase.get_previous_and_next()
    • Remove unused TestCaseStatus.string_to_instance()
    • Remove unused TestCase.create()
    • Remove unused json_success_refresh_page()
    • Remove unused fields from SearchPlanForm
    • Use JSON-RPC in previewPlan()
    • Remove toggleTestCaseContents(), duplicate of toggleTestExecutionPane()
    • Refactor a few more views to class-based

    GitHub Marketplace listing deprecation

    As we've stated previously, Kiwi TCMS has migrated to a new GitHub backend, OAuth tokens for the previous backend have been revoked, and the existing listing on GitHub Marketplace is deprecated. It is a non-functioning app at the moment!

    It is not possible for us to cancel Marketplace subscriptions programmatically, that is, GitHub does not provide such an API. Active subscribers, please follow these 3 steps to help us clean up stale information:

    • Go to https://github.com/marketplace/kiwi-tcms
    • From "Edit your plan" button at the top select your GitHub account
    • Then click "Cancel this plan" link which is at the left-hand side at the bottom of the description box!

    How to upgrade

    Backup first! If you are using Kiwi TCMS as a Docker container then:

    cd path/containing/docker-compose/
    docker-compose down
    docker pull kiwitcms/kiwi
    docker pull centos/mariadb-103-centos7
    docker-compose up -d
    docker exec -it kiwi_web /Kiwi/manage.py migrate

    WHERE: docker-compose.yml has been updated from your private git repository! The file provided in our GitHub repository is an example. Not for production use!

    WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

    # starting from an older Kiwi TCMS version
    docker-compose down
    docker pull kiwitcms/kiwi:<next_upgrade_version>
    edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
    docker-compose up -d
    docker exec -it kiwi_web /Kiwi/manage.py migrate
    # repeat until you have reached latest

    Happy testing!

    Galleria.io and PhotoSwipe

    Posted by Pablo Iranzo Gómez on February 12, 2020 06:30 AM


    I was looking for an alternative way for my (this) blog to hold pictures. I’m a Lego fan, so I wanted to get some pictures uploaded but without bloating the site.

    In the article Lego Mini Cooper MOC I added a lot of pictures, and the same goes for Lego Chinese dinner and Lego Dragon Dance.

    I was checking how Telegram handles some links and found that Instagram ‘links’ get expanded to directly list the images inside. I wanted to see if that could help with a task I was helping with in the Pelican-Elegant theme used at this site: creating a gallery.

    Instagram and multiple pictures

    While doing some research, I found that you can apparently append ?__a=1 to an Instagram URL and it will ‘dump’ a JSON description of the picture or gallery, like the following one for this picture:

    In this case, the generated JSON, available at https://www.instagram.com/p/B7yh4IdItNd/?__a=1, contains the information we required for getting the image size, thumbnail, etc.

    The important thing there is that from a gallery ID, we can get a JSON without requiring any API key (which is required for Flickr, for example).

    I’m not an expert in JavaScript, but I did something useful enough to get the JSON, process the internal code, and separate two cases: a ‘gallery ID’ with one picture or with multiple pictures.
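    The same two-case split can be sketched in Python. The key names below (graphql, shortcode_media, edge_sidecar_to_children, display_url) reflect the structure Instagram returned at the time and are assumptions about an undocumented endpoint, not a stable API:

```python
def image_urls(payload):
    """Extract picture URLs from the JSON behind an Instagram URL + ?__a=1.

    Handles both a single picture and a gallery (multiple pictures).
    All key names are assumptions about Instagram's undocumented response.
    """
    media = payload["graphql"]["shortcode_media"]
    sidecar = media.get("edge_sidecar_to_children")
    if sidecar:  # gallery: one node per picture
        return [edge["node"]["display_url"] for edge in sidecar["edges"]]
    return [media["display_url"]]  # single picture

# Minimal fabricated payloads showing the two shapes:
single = {"graphql": {"shortcode_media": {
    "display_url": "https://example.com/a.jpg"}}}
gallery = {"graphql": {"shortcode_media": {
    "display_url": "https://example.com/a.jpg",
    "edge_sidecar_to_children": {"edges": [
        {"node": {"display_url": "https://example.com/a.jpg"}},
        {"node": {"display_url": "https://example.com/b.jpg"}},
    ]},
}}}
```

    In practice the payload would come from fetching the ?__a=1 URL; the sketch only shows the shape-dependent extraction step.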

    Using it, I was then able to consider using some gallery software to get them rendered.


    https://galleria.io offers a nice gallery, which uses jQuery and also accepts JSON as input, so it was more or less straightforward to convert from the Instagram JSON to what Galleria.io required.

    I used that to get an initial draft to be added to Pelican-Elegant, but as the goal in the project is to remove the dependency on jQuery, it was discarded.

    However, my personal research became part of another website that, even if not pulling directly from Instagram, takes good advantage of defining a JSON of images for showcasing pictures.


    While discussing Galleria, Talha from Pelican-Elegant found two other projects, and it was decided that PhotoSwipe provided nice features for mobile usage, like pinch-in and pinch-out.

    PhotoSwipe required extra steps, like creating a div and using HTML tags for placing the pics (check the official documentation for Pelican-Elegant Galleries).

    One of the main problems was that the image size is required (Galleria.io didn’t require it), but when using Instagram pictures we can grab that information from the JSON.

    Finally, with help from Talha, we also got it working with embedded Instagram gallery posts.


    Fedora 31 : Install the drawing GNOME with DNF and flatpak.

    Posted by mythcat on February 11, 2020 05:44 PM
    You can use the DNF tool:
    [root@desk mythcat]# dnf search gnome | grep drawing 
    Last metadata expiration check: 1:53:53 ago on Tue 11 Feb 2020 05:28:15 PM EET.
    drawing.noarch : Drawing application for the GNOME desktop
    [root@desk mythcat]# dnf install drawing.noarch
    Last metadata expiration check: 1:54:28 ago on Tue 11 Feb 2020 05:28:15 PM EET.
    Dependencies resolved.
    Package Architecture Version Repository Size
    drawing noarch 0.4.9-1.fc31 updates 1.0 M

    Transaction Summary
    Install 1 Package

    Total download size: 1.0 M
    Installed size: 1.5 M
    Is this ok [y/N]: y
    Downloading Packages:
    drawing-0.4.9-1.fc31.noarch.rpm 1.4 MB/s | 1.0 MB 00:00
    Total 601 kB/s | 1.0 MB 00:01
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
    Preparing : 1/1
    Installing : drawing-0.4.9-1.fc31.noarch 1/1
    Running scriptlet: drawing-0.4.9-1.fc31.noarch 1/1
    Verifying : drawing-0.4.9-1.fc31.noarch 1/1


    This install uses the flatpak tool:
    [root@desk mythcat]# dnf install flatpak
    Last metadata expiration check: 1:47:49 ago on Tue 11 Feb 2020 05:28:15 PM EET.
    Dependencies resolved.
    Package Arch Version Repository Size
    flatpak x86_64 1.4.3-3.fc31 updates 1.1 M
    Installing dependencies:
    flatpak-selinux noarch 1.4.3-3.fc31 updates 24 k
    flatpak-session-helper x86_64 1.4.3-3.fc31 updates 72 k
    Installing weak dependencies:
    p11-kit-server x86_64 0.23.20-1.fc31 updates 186 k
    xdg-desktop-portal x86_64 1.4.2-3.fc31 fedora 386 k
    xdg-desktop-portal-gtk x86_64 1.4.0-1.fc31 fedora 212 k

    Transaction Summary
    Install 6 Packages

    Total download size: 1.9 M
    Installed size: 7.7 M
    Is this ok [y/N]: y

    Let's install the flatpakrepo:
    [mythcat@desk ~]$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

    Note that the directories


    are not in the search path set by the XDG_DATA_DIRS environment variable, so
    applications installed by Flatpak may not appear on your desktop until the
    session is restarted.
    [mythcat@desk ~]$ flatpak install flathub com.github.maoschanz.drawing

    Note that the directories


    are not in the search path set by the XDG_DATA_DIRS environment variable, so
    applications installed by Flatpak may not appear on your desktop until the
    session is restarted.

    Looking for matches…
    Required runtime for com.github.maoschanz.drawing/x86_64/stable (runtime/org.gnome.Platform/x86_64/3.34)
    found in remote flathub
    Do you want to install it? [Y/n]: Y

    com.github.maoschanz.drawing permissions:
    ipc wayland x11

    ID Arch Branch Remote Download
    1. [✓] org.gnome.Platform x86_64 3.34 flathub 304.2 MB / 318.5 MB
    2. [✓] org.gnome.Platform.Locale x86_64 3.34 flathub 16.8 kB / 322.0 MB
    3. [✓] org.freedesktop.Platform.GL.default x86_64 19.08 flathub 92.6 MB / 92.6 MB
    4. [✓] org.freedesktop.Platform.VAAPI.Intel x86_64 19.08 flathub 8.7 MB / 8.7 MB
    5. [✗] org.freedesktop.Platform.openh264 x86_64 19.08 flathub 594.2 kB / 593.4 kB
    6. [✓] com.github.maoschanz.drawing x86_64 stable flathub 1.0 MB / 1.1 MB
    7. [✓] com.github.maoschanz.drawing.Locale x86_64 stable flathub 1.7 kB / 86.2 kB

    Warning: org.freedesktop.Platform.openh264 not installed
    Installation complete.
    [mythcat@desk ~]$ flatpak install org.freedesktop.Platform/x86_64/19.08

    Note that the directories


    are not in the search path set by the XDG_DATA_DIRS environment variable, so
    applications installed by Flatpak may not appear on your desktop until the
    session is restarted.

    Looking for matches…
    Found similar ref(s) for ‘org.freedesktop.Platform/x86_64/19.08’ in remote ‘flathub’ (system).
    Use this remote? [Y/n]: Y

    ID Arch Branch Remote Download
    1. [✓] org.freedesktop.Platform x86_64 19.08 flathub 11.5 MB / 238.1 MB
    2. [✓] org.freedesktop.Platform.Locale x86_64 19.08 flathub 16.7 kB / 318.2 MB
    3. [✓] org.freedesktop.Platform.openh264 x86_64 19.08 flathub 593.6 kB / 593.4 kB

    Installation complete.
    Restart the session and run it with this command:
    [mythcat@desk ~]$ flatpak run com.github.maoschanz.drawing

    My whole career is built on FOSS

    Posted by Marcin 'hrw' Juszkiewicz on February 11, 2020 11:30 AM

    Some time ago on one of the Red Hat mailing lists, someone asked “how has open source helped your career?”. There were several interesting stories. I had mine as well.


    My first contribution to FOSS: updating the Debian ‘potato’ installation guide for Amiga/m68k. I was writing an article about installing Debian for a new Amiga magazine called ‘eXec’. So why not update the official instructions at the same time?


    Probably my first code contribution: a small change to MPlayer. I had completely forgotten about it, but as the project was changing its license in 2017, I got an email about it.


    I bought my 3rd PDA (a Sharp Zaurus SL-5500) and it was running Linux. I started building apps for it and hacking the system to run better. Then I cooperated with the OpenZaurus distro developers and started contributing to the OpenEmbedded build system. One day they gave me write access to the repo and told me to merge my changes myself.

    When I stopped using OE a few years later, I was 5th on the list of top contributors.

    I also count this year as the first one of my FOSS career.


    Richard Jackson donated a Zaurus c760 to me as a gift for my OpenZaurus work. And then the OPIE 1.2.2 release came, with my changes to make better use of the VGA screen. I still have this device in running condition.


    Became release manager of the OpenZaurus distribution, with a team of users testing pre-release images. Released the 3.5.4 (and later) versions.

    Started my own consulting company. Got some serious customers. End of my work as a PHP programmer.

    2007 - 2010

    I was doing what had been a hobby as a full-time job. Full FOSS work. Different companies, ARM architecture 95% of the time. Mostly consulting around OpenEmbedded.


    Due to my ARM FOSS involvement, Canonical hired me. Started working at Linaro as a software engineer. Cleaned up cross compilers in Ubuntu/Debian, and several other things.


    Became one of the first AArch64 developers. Published OpenEmbedded support for it right after all the toolchain patches became public.


    Left Linaro and Canonical, wrote about it on my blog, and in less than an hour got a “send me your CV” from Jon Masters at Red Hat. Joined the company, made lots of changes in RHEL 7 and Fedora, mostly fixing FTBFS on !x86 architectures.


    My manager asked me whether I wanted to go back to Linaro, this time as a Red Hat assignee. Went, met old chaps, worked mostly around OpenStack. Still on 64-bit Arm.

    2017 - 2020

    Lots of work in OpenStack. Some work on Big Data stuff for another team at Linaro. Countless projects where I worked on getting stuff working on AArch64.


    My whole career is built on FOSS.

    My x86(-64) desktop has run GNU/Linux as its main system since day one (September 2000). There was OpenDOS as a second system during my studies, due to some stuff.

    I had MS Windows XP as a second system on one of my laptops. But that was due to some Arm hardware bringup tool being available only for that OS (later also for Linux). My family and friends learnt that I am unable to help them with MS Windows issues, as I do not know that OS.

    I deleted our newsletter from Mailchimp ! Please re-subscribe

    Posted by Kiwi TCMS on February 11, 2020 10:25 AM

    Hello testers, I have to admit that I made a rookie mistake and deleted the entire email database for the Kiwi TCMS newsletter! And of course we didn't have a backup of this database :-(. Please re-subscribe here, and read below if you are interested in knowing what happened.

    Last week, while exploring how to cancel active subscriptions for our deprecated GitHub Marketplace listing, I found there is no way to cancel those programmatically. So I compiled a list of email addresses and decided to send subscribers an email asking them to cancel their subscriptions.

    For this purpose I decided to import the contacts into Mailchimp because it gives you a better interface to design the actual message, include images in the message body, preview and test the message before it is sent! The import of addresses went fine, new addresses were tagged appropriately to separate them from the rest of the newsletter audience but they were not subscribed to receive emails automatically.

    I selected the "non-subscribed" option when importing as a second barrier against accidentally emailing people who do not want to receive regular news from us! However, it turned out Mailchimp can't send messages to non-subscribed addresses! Maybe that's part of their attempts to be GDPR compliant.

    So I decided to delete the freshly imported addresses, import them again and this time tag + subscribe them during the import! When selecting the addresses for deletion I am 99% confident I did filter them by tag first and then selected DELETE! And the entire contacts list was gone!

    I also contacted Mailchimp immediately to ask whether or not the addresses can be restored. Unfortunately, they are trying to be super GDPR compliant and claim they don't have this information in their system anymore. In this particular case we had been relying on the vendor to keep backups on their end, so we didn't even think about backing up this database ourselves!

    For users who have accounts at https://public.tenant.kiwitcms.org we do have their email addresses, but we're not going to automatically re-subscribe them. We stopped auto-subscribing 2 years ago, and there's also no way of telling which addresses were on the list in the first place.

    Please re-subscribe here and I promise we're going to start backing up the newsletter database as well.

    Thank you!

    30 projects migrated their translation to Weblate, what about yours?

    Posted by Fedora Community Blog on February 11, 2020 06:36 AM
    Fedora Localization Project

    The localization community gave its approval: Weblate fits our expectations. Many projects have already migrated. It’s time for yours to migrate, because the next Fedora release will mark the end of the old translation platform.

    A process that started a year ago

    Following the migration plan that was validated during Flock 2019 in Budapest, we have been testing Weblate since September. In January, we asked the community: “It’s time to say if you feel confident the tool can allow us to do our job as a translation community”. Twenty contributors confirmed, and we are now officially using Weblate.

    You are welcome to help! The process is easy: just connect with a FAS account and start translating!

    Advice and feedback for migrating your project

    With 30 projects already using Weblate daily, we have learned a lot about the strengths and weaknesses of the tool.

Remember: Weblate directly interacts with a git repository which should contain both pot and po files. One of the benefits is that translators get a notification when new strings arrive, preventing last-minute rushes.

    If you want to keep it simple:

    • commit pot and po files in your repository,
    • Weblate will open a pull request with translations,
    • update your pot file each time you have string changes,
    • merge Weblate’s pull request before each release.

    If you are a big team or have plenty of translators:

    • create a dedicated repository for translators,
    • allow Weblate to commit on this repository,
    • update your pot file often,
• merge translations before each release (or include them in your package).
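For a gettext project, the "update your pot file" step above usually boils down to regenerating the template and merging it into each po file with msgmerge. A minimal sketch, where the file names (po/example.pot, po/de.po) are hypothetical stand-ins for your project's real ones:

```shell
# Sketch of refreshing translations against an updated template.
# In a real project the pot file would come from xgettext over your
# sources; here tiny stand-in files are created so the loop has input.
mkdir -p po
printf 'msgid ""\nmsgstr ""\n' > po/example.pot   # stand-in template
printf 'msgid ""\nmsgstr ""\n' > po/de.po         # stand-in translation
for po in po/*.po; do
    # msgmerge refreshes a po file against the new pot; guard in case
    # the gettext tools are not installed on this machine.
    if command -v msgmerge >/dev/null 2>&1; then
        msgmerge --update --backup=off "$po" po/example.pot
    fi
    echo "refreshed $po"
done
```

After this, committing the updated po files (or letting Weblate commit its own) keeps translators and the repository in sync.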

    To keep your life easy, Jean-Baptiste will do the first configuration of Weblate and set you as admin.

If you are a free and open source project, you can join us, even if you have never used Zanata before.

    You are not using gettext files (po and pot)? No worries, many file formats are supported.

    The end of Zanata

The Fedora community may seem big, but don’t expect most translators to connect to multiple places to contribute to projects. The Zanata homepage was updated; only DNF and Anaconda are considered critical projects that have not yet migrated.

    Need help?

    Open a ticket https://pagure.io/fedora-l10n/tickets or start a discussion on the translator mailing list.

    The post 30 projects migrated their translation to Weblate, what about yours? appeared first on Fedora Community Blog.

    Playing Music on your Fedora Terminal with MPD and ncmpcpp

    Posted by Fedora Magazine on February 10, 2020 08:00 AM

    MPD, as the name implies, is a Music Playing Daemon. It can play music but, being a daemon, any piece of software can interface with it and play sounds, including some CLI clients.

    One of them is called ncmpcpp, which is an improvement over the pre-existing ncmpc tool. The name change doesn’t have much to do with the language they’re written in: they’re both C++, but ncmpcpp is called that because it’s the NCurses Music Playing Client Plus Plus.

    Installing MPD and ncmpcpp

The ncmpcpp client can be installed from the official Fedora repositories with DNF directly with

    $ sudo dnf install ncmpcpp

    On the other hand, MPD has to be installed from the RPMFusion free repositories, which you can enable, as per the official installation instructions, by running

    $ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm

    and then you can install MPD by running

    $ sudo dnf install mpd

    Configuring and Starting MPD

    The most painless way to set up MPD is to run it as a regular user. The default is to run it as the dedicated mpd user, but that causes all sorts of issues with permissions.

    Before we can run it, we need to create a local config file that will allow it to run as a regular user.

    To do that, create a subdirectory called mpd in ~/.config:

    $ mkdir ~/.config/mpd

    copy the default config file into this directory:

    $ cp /etc/mpd.conf ~/.config/mpd

    and then edit it with a text editor like vim, nano or gedit:

    $ nano ~/.config/mpd/mpd.conf

    I recommend you read through all of it to check if there’s anything you need to do, but for most setups you can delete everything and just leave the following:

    db_file "~/.config/mpd/mpd.db" 
    log_file "syslog"
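The steps above can be consolidated into a short shell sketch. This version writes the minimal config directly instead of copying /etc/mpd.conf, so it does not assume the packaged default config is present:

```shell
# Consolidated sketch of the per-user MPD setup described above:
# create ~/.config/mpd and write the minimal two-line config.
mkdir -p ~/.config/mpd
cat > ~/.config/mpd/mpd.conf <<'EOF'
db_file "~/.config/mpd/mpd.db"
log_file "syslog"
EOF
echo "wrote ~/.config/mpd/mpd.conf"
```

If you do want to start from the full default config, copy /etc/mpd.conf over this file afterwards and trim it down.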

    At this point you should be able to just run

    $ mpd

    with no errors, which will start the MPD daemon in the background.

    Using ncmpcpp

    Simply run

    $ ncmpcpp

    and you’ll see a ncurses-powered graphical user interface in your terminal.

Press 4 and you should see your local music library. You can change the selection using the arrow keys and press Enter to play a song.

Doing this multiple times will create a playlist, which allows you to move to the next track using the > button (not the right arrow, the > closing angle bracket character) and go back to the previous track with <. The + and - buttons increase and decrease the volume. The Q button quits ncmpcpp but doesn’t stop the music. You can play and pause with P.

    You can see the current playlist by pressing the 1 button (this is the default view). From this view you can press i to look at the information (tags) about the current song. You can change the tags of the currently playing (or paused) song by pressing 6.

    Pressing the \ button will add (or remove) an informative panel at the top of the view. In the top left, you should see something that looks like this:


Pressing the r, z, y, R, x buttons will respectively toggle the repeat, random, single, consume and crossfade playback modes, replacing one of the characters in that little indicator with the initial of the selected mode.

Pressing the F1 button will display some help text, which contains a list of keybindings, so there’s no need to write a complete list here. Now go on, be geeky, and play all your music from your terminal!

    Episode 182 - Does open source owe us anything?

    Posted by Open Source Security Podcast on February 10, 2020 12:01 AM
    Josh and Kurt talk about open source maintainers and building communities. While an open source maintainer doesn't owe anyone anything, there are some difficult conversations around holding back a community rather than letting it flourish.

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/13053860/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes

Contribute to testing kernel 5.5 on Fedora

      Posted by alciregi-it on February 09, 2020 10:40 AM
The #Fedora Kernel Team and QA Team have organized a Fedora Test Week for kernel 5.5, which will take place from February 10 to 17. People who are not involved in developing the distribution are also invited to participate.

On the occasion of a new release of the distribution, of a new release of the #Linux kernel, or of some other component, the Fedora community organizes days, or as in this case weeks, during which people are invited to run tests. Hunting for bugs. There is an IRC channel where you can share experiences and problems you run into, or ask for guidance on running the tests.

Participating is not hard. You can run the tests either in a virtual machine (KVM, VirtualBox, whatever) or on a physical machine. In the latter case, installing the new kernel is not recommended if it is a production machine, that is, one holding important data and services. A live image will be made available, or you can install the RPM of the new kernel. Besides checking that the system and the various peripherals work, you can run a tool that performs tests (stuff best left to those who understand it) and returns results. The results, and any bugs found, should then be entered on a web page dedicated to the event.

Read the article on Fedora Magazine, and see the wiki page dedicated to the event for detailed information.

      Python course inside of NSA via a FOIA request

      Posted by Kushal Das on February 09, 2020 09:08 AM

Woke up on Sunday morning and found Chris Swenson's tweet: he did a FOIA request about the Python course inside the NSA, and then scanned the almost 400 pages of course material. It is 118MB :)

I just went through the document quickly; a few points from there.

• isDivisibleBy7() sounds like it was written by a Java person :)
• Too many extra parentheses in the conditional statements.
• The same goes for while statements: while (i <= 20):
      • while (True)
      • They have an internal Python package index: http://bbtux022.gp.proj.nsa.ip.gov/PYPI (seems only for education purpose)
      • Their gitlab instance is: gitlab.coi.nsa.ic.gov
      • Exception handling came too late in the course.
      • They teach profiling using cProfile
      • They also teach f-strings.
• They have some sort of internal cloud called MACHINESHOP; most probably the instances are CentOS/RHEL, as they were using yum commands two years ago.
• They have internal Safari (ebook) access too, but again over HTTP: http://ncmd-ebooks-1.ncmd.nsa.ic.gov/9781785283758
• They also have an internal Wikipedia dump, or just some sort of proxy to the main instance: https://wikipedia.nsa.ic.gov/en/Colossally_abundant_number
      • An internal jupyter gallery which runs over HTTPS.
• Pickle is mentioned, but with no mention of the security implications.
      • Internal pip mirror: https://pip.proj.nsa.ic.gov/
      • git installation instructions are for CentOS/RHEL/Ubuntu/Windows, no Debian :(

      Contribute at the Fedora Test Week for Kernel 5.5

      Posted by Fedora Magazine on February 08, 2020 08:00 AM

      The kernel team is working on final integration for kernel 5.5. This version was just recently released, and will arrive soon in Fedora. This version has many security fixes included. As a result, the Fedora kernel and QA teams have organized a test week from Monday, February 10, 2020 through Monday, February 17, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

      How does a test week work?

      A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

      To contribute, you only need to be able to do the following things:

      • Download test materials, which include some large files
      • Read and follow directions step by step

      The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

      Happy testing, and we hope to see you in the Test Week.

      Fedora program update: 2020-06

      Posted by Fedora Community Blog on February 07, 2020 07:27 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora this week.

      I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.



<figure class="wp-block-table">
Event | Location | Dates | CfP
Open Source Summit NA | Austin, TX, US | 22–24 Jun 2020 | closes 16 Feb
</figure>

      Help wanted

      Upcoming meetings


      Fedora 32


      • 2020-02-11: Mass rebuild ends
      • 2020-02-11: Branch day
      • 2020-02-11: Code Complete (testable) deadline
      • 2020-02-25: 100% Code complete deadline
      • 2020-02-25: Beta freeze begins


<figure class="wp-block-table">
Change | Type | Status
Enable EarlyOOM | System-Wide | Ready for FESCo
Provide OpenType Bitmap Fonts | Self-Contained | Accepted
Additional buildroot to test x86-64 micro-architecture update | Self-Contained | Ready for FESCo
Reduce installation media size by improving the compression ratio of SquashFS filesystem | System-Wide | Ready for FESCo
Deprecate python-nose | Self-Contained | Accepted
Update Haskell packages to Stackage LTS 14 | Self-Contained | Approved
PostgreSQL 12 | Self-Contained | Ready for FESCo
</figure>
      Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

      CPE update

      The post Fedora program update: 2020-06 appeared first on Fedora Community Blog.

      Contribute at the Fedora Test Week for Kernel 5.5

      Posted by Fedora Community Blog on February 07, 2020 05:01 PM
      F31 Modularity test day

      The kernel team is working on final integration for kernel 5.5. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, February 10, 2020 through Monday, February 17, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

      How does a test week work?

      A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

      To contribute, you only need to be able to do the following things:

      • Download test materials, which include some large files
      • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that walks through all the steps.

      Happy testing, and we hope to see you on test day.

      The post Contribute at the Fedora Test Week for Kernel 5.5 appeared first on Fedora Community Blog.

      PHPUnit 9

      Posted by Remi Collet on February 07, 2020 04:35 PM

RPMs of PHPUnit version 9 are available in the remi repository for Fedora ≥ 29 and for Enterprise Linux (CentOS, RHEL...).

      Documentation :

This new major version requires PHP ≥ 7.3 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 5, 6, 7 and 8.

      Installation, Fedora and Enterprise Linux 8:

      dnf --enablerepo=remi install phpunit9

      Installation, Enterprise Linux 6 and 7:

      yum --enablerepo=remi install phpunit9

Notice: this tool is an essential component of PHP QA in Fedora. This version should soon be available in Fedora ≥ 31 (after the review of 19 new packages).