Fedora People

FPgM report: 2019-07

Posted by Fedora Community Blog on February 15, 2019 10:08 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week.

I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Announcements

Meetings

Events

Fedora 30 Status

Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.

Changes

Submitted to FESCo

Approved by FESCo

Fedora 31 Status

Changes

Announced

Approved by FESCo

The post FPgM report: 2019-07 appeared first on Fedora Community Blog.

SFK, OSCAL and Toastmasters expanding into Kosovo

Posted by Daniel Pocock on February 15, 2019 11:08 AM

Back in August 2017, I had the privilege of being invited to support the hackathon for women in Prizren, Kosovo. One of the things that caught my attention at this event was the enthusiasm with which people from each team demonstrated their projects in five minute presentations at the end of the event.

This encouraged me to think about further steps to support them. One idea that came to mind was introducing them to the Toastmasters organization. Toastmasters is not simply about speaking; it is about developing leadership skills that can be useful for anything from promoting free software to building successful organizations.

I had a look at the Toastmasters club search to see if I could find an existing club for them to visit, but there don't appear to be any in Kosovo or neighbouring Albania.

Starting a Toastmasters club at the Innovation Centre Kosovo

In January, I had a conference call with some of the girls and explained the idea. They secured a venue, Innovation Centre Kosovo, for the evening of 11 February 2019.

Albiona and I met on Saturday, 9 February and called a few people we knew who would be good candidates to give prepared speeches at the first meeting. They had 48 hours to prepare their Ice Breaker talks. The Ice Breaker is a 4-6 minute talk that people give at the beginning of their Toastmasters journey.

Promoting the meeting

At our club in EPFL Lausanne, meetings are promoted on a mailing list. We didn't have that in Kosovo but we were very lucky to be visited by Sara Koci from the morning TV show. Albiona and I were interviewed on the rooftop of the ICK on the day of the meeting.

Video: https://www.youtube.com/embed/xgyt4TbIZA0?start=46

The first meeting

That night, we had approximately 60 people attend the meeting.

Albiona acted as the meeting presider and trophy master and I was the Toastmaster. At the last minute we found volunteers for all the other roles and I gave them each an information sheet and a quick briefing before opening the meeting.

One of the speakers, Dion Deva, has agreed to share the video of his talk publicly:

Video: https://www.youtube.com/embed/_PsRxJzzaEM

The winners were Dhurata for best prepared speech, Arti for best impromptu speech, and Ardora for best evaluation.

After party

Afterwards, some of us continued around the corner for pizzas and drinks and discussion about the next meeting.

Future events in Kosovo and Albania

Software Freedom Kosovo will be back from 4-7 April 2019 and I would encourage people to visit.

OSCAL in Tirana, Albania is back on 18-19 May 2019 and they are still looking for extra speakers and workshops.

Many budget airlines now serve Prishtina from all around Europe - see Prishtina airport connections and Tirana airport connections.

How to watch for releases of upstream projects

Posted by Fedora Magazine on February 15, 2019 08:00 AM

Do you want to know when a new version of your favorite project is released? Do you want to make your job as packager easier? If so, this article is for you. It introduces you to the world of release-monitoring.org. You’ll see how it can help you catch up with upstream releases.

What is release-monitoring.org?

release-monitoring.org is a combination of two applications: Anitya and the-new-hotness.

Anitya is what you can see when visiting release-monitoring.org. You can use it to add and manage your projects. Anitya also checks for new releases periodically.

The-new-hotness is an application that catches the messages emitted by Anitya. It creates a Bugzilla issue if the project is mapped to a Fedora package.

How to use release-monitoring.org

Now that you know how it works, let’s focus on how you can use it.

[Image: Index page of release-monitoring.org]

The first thing you need to do is log in. Anitya provides a few options you can use to log in, including the Fedora Account System (FAS), Yahoo!, or a custom OpenID server.

[Image: Login page]

When you’re logged in, you’ll see new options in the top panel.

[Image: Anitya top panel]

Add a new project

Now you can add a new project. It’s always good to check whether the project is already added.

[Image: Add project form]

Next, fill in the information about the project:

  • Project name – Use the upstream project name
  • Homepage – Homepage of the project
  • Backend – The backend is simply the web hosting where the project is hosted. Anitya offers many backends you can choose from. If you can’t find a backend for your project, you can use the custom backend. Every backend has its own additional fields. For example, BitBucket has you specify owner/project.
  • Version scheme – This is used to sort received versions. Right now, Anitya only supports the RPM version scheme.
  • Version prefix – This is the prefix that is stripped from any received version. For example, if the tag on GitHub is version_1.2.3, you would use version_ as the version prefix. The version will then be presented as 1.2.3. The version prefix v is stripped automatically.
  • Check latest release on submit – If you check this, Anitya will do an initial check on the project when it is submitted.
  • Distro – The distribution in which this project is used. This can also be added later.
  • Package – The project’s packaged name in the distribution. This is required when the Distro field is filled in.

When you’re happy with the project, submit it. Below you can see how your project may look after you submit.

[Image: Project page]

Add a new distribution mapping

If you want to map the project to a package on a specific distribution, open up the project page first and then click on Add new distribution mapping.

[Image: Add distribution mapping form]

Here you can choose any distribution already available in Anitya, fill in the package name, and submit it. The new mapping will show up on the project page.

Automatic filing of Bugzilla issues

Now you’ve created a new project and a distribution mapping for it. This is nice, but how does this help you as a packager? This is where the-new-hotness comes into play.

Every time the-new-hotness sees a new update or new mapping message emitted by Anitya, it checks whether this project is mapped to a package in Fedora. For this to work, the project must have a mapping to Fedora added in Anitya.

If the package is known, the-new-hotness checks the notification setting for this package. That setting can be changed here. The last check the-new-hotness does is whether the version reported by Anitya is newer than the current version of this package in Fedora Rawhide.

If all those checks pass, a new Bugzilla issue is filed and a Koji scratch build is started. After the Koji build finishes, the Bugzilla issue is updated with the output.

Future plans for release-monitoring.org

The release-monitoring.org system is pretty amazing, isn’t it? But this isn’t all. There are plenty of things planned for both Anitya and the-new-hotness. Here’s a short list of future plans:

Anitya

  • Add libraries.io consumer – automatically check for new releases on libraries.io, create projects in Anitya and emit messages about updates
  • Use Fedora package database to automatically guess the package name in Fedora based on the project name and backend
  • Add semantic and calendar version schemes
  • Change current cron job to service: Anitya checks for new versions periodically using a cron job. The plan is to change this to a service that checks projects using queues.
  • Support for more than one version prefix

the-new-hotness

  • File GitHub issues for Flathub projects when a new version comes out
  • Create pull requests in Pagure instead of filing a Bugzilla issue
  • Move to OpenShift – this should make deployment much easier than how it is now
  • Convert to Python 3 (mostly done)

Both

  • Conversion to fedora-messaging – This is already in progress and should make communication between Anitya and the-new-hotness more reliable.

Photo by Alexandre Debiève on Unsplash.

Bodhi 3.13.0 released

Posted by Bodhi on February 15, 2019 12:18 AM

This is a feature release.

Deprecations

  • Authentication with OpenID is deprecated (#1180).
  • bodhi-untag-branched is deprecated (#1197).
  • The Update's title field is deprecated (#1542).
  • Integration with pkgdb is deprecated (#1970).
  • Support for Python 2 is deprecated (#1871 and #2856).
  • The /masher/ view is deprecated (#2024).
  • bodhi-monitor-composes is deprecated (#2171).
  • The stacks feature is deprecated (#2241).
  • bodhi-manage-releases is deprecated (#2420).
  • Anonymous comments are deprecated (#2700).
  • The ci_url attribute on Builds is deprecated (#2782).
  • The active_releases query parameter on the Updates query URL is deprecated (#2815).
  • Support for fedmsg is deprecated (#2838).
  • Support for Bodhi 1's URL scheme is deprecated (#2869 and #2903).
  • The /admin/ API is deprecated (#2899).
  • The fedmsg UI integration is deprecated (#2913).
  • CVE support is deprecated (#2915).

Dependency changes

  • Bodhi no longer requires iniparse (a910b61).
  • Bodhi now requires fedora_messaging (e30c5f2).

Server upgrade instructions

This release contains database migrations. To apply them, run:

$ sudo -u apache /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head

Features

  • A new bodhi-shell CLI is included with the server package that initializes a Python shell
    with some handy aliases that are useful for debugging Bodhi (#1792).
  • Updates that obsolete security updates will now be set as security updates themselves
    (#1798).
  • The CLI no longer crashes when creating multiple buildroot overrides in one command
    (#2031).
  • The CLI can now display information about waivers (#2267).
  • Releases can now be marked as not being composed by Bodhi, which will be useful if we want to use
    Bodhi to tag builds into releases without composing those releases, e.g., Fedora Rawhide
    (#2317).
  • BuildrootOverrides are now prevented for builds that fail the test gating status (#2537).
  • The web interface now displays instructions for installing updates (#2799).
  • The CLI now has flags that allow users to override the OpenID API URL to use. This is useful
    for deployments that aren't for Fedora and also for developers, who can use it to authenticate
    against a different OpenID server than Fedora's production instance (Fedora's staging instance is
    nice to use for this) (#2820).
  • The web UI search box uses a slightly longer delay before issuing the search request
    (51c2fa8).
  • Messages can now be published by fedora_messaging, a replacement for fedmsg
    (e30c5f2).
  • Associating updates with private Bugzilla tickets is now handled gracefully (7ac316a).

Bug fixes

  • The bodhi-approve-testing CLI script is now more resilient against failures (#1016).
  • The update edit page will return an HTTP 403 instead of an HTTP 400 if a user tries to edit an
    update they don't have permissions to edit (#1737).
  • The bodhi CLI now has a --wait flag when creating BuildrootOverrides that will cause
    bodhi to wait on Koji to finish adding the override to the buildroot before exiting
    (#1807).
  • The waive button is now displayed on locked updates (#2271).
  • Editing an update with the CLI no longer sets the type back to the default of bugfix
    (#2528).
  • bodhi-approve-testing now sets up a log handler (#2698).
  • Some missing commands were added to the bodhi man page (1e6c259).
  • A formatting issue was fixed in the command line client (996b4ec).

Development improvements

  • bodhi-ci now has an integration test suite that launches a live Bodhi server, some of its
    network dependencies, and tests it with some web and CLI requests (2430433).
  • bodhi-ci's status report now displays the run time for finished tasks (26af5ef).
  • bodhi-ci now prints a continuous status report, which is helpful in knowing what it is
    currently doing (f3ca62a).
  • We now do type checking enforcement on bodhi-ci (2c07005).

Contributors

The following developers contributed to Bodhi 3.13.0:

  • Aurélien Bompard
  • Jeremy Cline
  • Mattia Verga
  • Ryan Lerch
  • Sebastian Wojciechowski
  • Vismay Golwala
  • Randy Barlow

Extract Method Refactoring in Rust

Posted by Adam Young on February 14, 2019 06:20 PM

I’m writing a simple utility to manage the /etc/hosts file. I want it in a native language so I can make it SUID, or even better, lock it down via capabilities. I also want to refresh my memory of how to code in Rust. Once I got a simple bit working, I wanted to refactor. Here’s what I did.

Here is the functioning code:

use std::env;
use std::fs;

fn words_by_line<'a>(s: &'a str) -> Vec<Vec<&'a str>> {
    s.lines().map(|line| {
        line.split_whitespace().collect()
    }).collect()
}

fn main() {
    let args: Vec<String> = env::args().collect();
    let filename = &args[1];
    let _operation = &args[2];
    let contents = fs::read_to_string(filename)
        .expect("Something went wrong reading the file");
    
    let wbyl = words_by_line(&contents);
    for i in &wbyl {
        for j in i{
            print!("{} ", j);
        }
        println!("");
    }
}



Build and run with

cargo build
./target/debug/hostsman ./hosts list

And it spits out the contents of the local copy of /etc/hosts. We'll treat this as the unit test for now.

The next step is to start working towards a switch based on the _operation variable. To do this, I want to pull the loop that dumps the file out into its own function. And to do that, I need to figure out the type of the Vector. I use the hack of introducing an error to get the compiler to tell me the type. I change the assignment line to get:

let wbyl: u8 = words_by_line(&contents);

And that tells me:

error[E0308]: mismatched types
  --> src/main.rs:18:20
   |
18 |     let wbyl: u8 = words_by_line(&contents);
   |                    ^^^^^^^^^^^^^^^^^^^^^^^^ expected u8, found struct `std::vec::Vec`
   |
   = note: expected type `u8`
              found type `std::vec::Vec<std::vec::Vec<&str>>`

So I convert the code to use that, build and run. The code now looks like this:

let wbyl: std::vec::Vec<std::vec::Vec<&str>> = words_by_line(&contents);

Now we create a function by copying the existing code block and using the variable type in the parameter list. It looks like this:

use std::env;
use std::fs;

fn words_by_line<'a>(s: &'a str) -> Vec<Vec<&'a str>> {
    s.lines().map(|line| {
        line.split_whitespace().collect()
    }).collect()
}

fn list(wbyl: std::vec::Vec<std::vec::Vec<&str>>) {
    for i in &wbyl {
        for j in i{
            print!("{} ", j);
        }
        println!("");
    }
}

fn main() {
    let args: Vec<String> = env::args().collect();
    let filename = &args[1];
    let _operation = &args[2];
    let contents = fs::read_to_string(filename)
        .expect("Something went wrong reading the file");
    
    let wbyl: std::vec::Vec<std::vec::Vec<&str>> = words_by_line(&contents);

    list(wbyl);
}

Now we are prepped to continue development. Next up is to parse the command and execute a different function based on it.
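
As a rough sketch of that next step (not the finished utility; the extra match arms are placeholders I made up), the operation string could be dispatched with a match expression, reusing words_by_line() and list() from above:

fn main() {
    // assumes the use std::env; and use std::fs; imports plus
    // words_by_line() and list() as defined earlier
    let args: Vec<String> = env::args().collect();
    let filename = &args[1];
    let operation = &args[2];
    let contents = fs::read_to_string(filename)
        .expect("Something went wrong reading the file");

    let wbyl: std::vec::Vec<std::vec::Vec<&str>> = words_by_line(&contents);

    // pick a behaviour based on the operation given on the command line
    match operation.as_str() {
        "list" => list(wbyl),
        // "add" | "remove" => ... (still to be written)
        _ => eprintln!("unknown operation: {}", operation),
    }
}

Running ./target/debug/hostsman ./hosts list should then behave exactly like the version above.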

PackageKit is dead, long live, well, something else

Posted by Richard Hughes on February 14, 2019 06:01 PM

It’s probably no surprise to many of you that PackageKit has been in maintenance mode for quite some time. Although started over ten years ago (!) it’s not really had active maintenance since about 2014. Of course, I’ve still been merging PRs and have been slinging tarballs over the wall every few months, but nothing new was happening with the project, and I’ve worked on many other things since.

I think it’s useful to do a little retrospective. PackageKit was conceived as an abstraction layer over about a dozen different package management frameworks. Initially it succeeded, with a lot of front-end UIs being written for the PackageKit API, making the Linux desktop a much nicer place for many years. Over the years, most package managers have withered and died, and for the desktop at least really only two remain, .rpm and .deb. The former is handled by the dnf PackageKit backend, and the latter by aptcc.

Canonical seems to be going all in on Snaps, and I don’t personally think of .deb files as first class citizens on Ubuntu any more – which is no bad thing. Snaps and Flatpaks are better than packages for desktop software in almost every way. Fedora is concentrating on Modularity and is joining most of the other distros in a shared Flatpak and Flathub future, and seems to be thriving because of it. Of course, I’m missing out a lot of other distros, but from a statistics point of view they’re unfortunately not terribly relevant. Arch users are important, but they’re also installing primarily on the command line, not using an abstraction layer or GUI. Fedora is also marching towards an immutable base image using rpm-ostree, containers and flatpaks, and then PackageKit isn’t only not required, but doesn’t actually get installed at all in Fedora Silverblue.

GNOME Software and the various KDE software centers already have an abstraction in the session, which they kind of have to in order to support per-user flatpak applications and per-user pet containers like Fedora Toolbox. I’ve also been talking to people in the Cockpit project and they’re in the same boat, and basically agree that having a shared system API to get the installed package list isn’t actually as useful as it used to be. Of course, we’ll need to support mutable systems for a long time (RHEL!) and so something has to provide a D-Bus interface to provide that. I’m not sure whether that should be dnfdaemon providing a PackageKit-compatible API, or whether it should just implement a super-simple interface that’s not using an API design from the last decade. At least from a gnome-software point of view it would just be one more plugin, like we have a plugin for Flatpak, a plugin for Snap, and a plugin for PackageKit.

Comments welcome.

Using fwupd and updating firmware without using the LVFS

Posted by Richard Hughes on February 14, 2019 12:43 PM

The LVFS is a webservice designed to allow system OEMs and ODMs to upload firmware easily, and for it to be distributed securely to tens of millions of end users. For some people, this simply does not work for good business reasons:

  • They don’t trust me, fwupd.org, GPG, certain OEMs or the CDN we use
  • They don’t want thousands of computers on an internal network downloading all the files over and over again
  • The internal secure network has no internet connectivity

For these cases there are a few different ways to keep your hardware updated, in order of simplicity:

Download just the files you need manually

Download the .cab files you found for your hardware and then install them on the target hardware via Ansible or Puppet using fwupdmgr install foo.cab — you can use fwupdmgr get-devices to get the existing firmware versions of all hardware. If someone wants to write the patch to add JSON/XML export to fwupdmgr that would be a very welcome thing indeed.
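
As a rough sketch of that manual flow (the .cab filename below is just a placeholder), the commands on a target machine would look something like this:

# show the devices fwupd knows about and their current firmware versions
fwupdmgr get-devices

# install a cabinet archive that was downloaded beforehand
fwupdmgr install foo.cab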

Download and deploy firmware as part of an immutable image

If you’re shipping an image, you can just dump the .cab files into a directory in the deployment along with something like /etc/fwupd/remotes.d/immutable.conf (only on fwupd >= 1.2.3):

[fwupd Remote]
Enabled=false
Title=Vendor (Automatic)
Keyring=none
MetadataURI=file:///usr/share/fwupd/remotes.d/vendor/firmware

Then once you disable the LVFS, running fwupdmgr or fwupdtool will use only the cabinet archives you deploy in your immutable image (or even from an .rpm for that matter). Of course, you’re deploying a larger image because you might have several firmware files included, but this is how Google ChromeOS is using fwupd.

Sync all the public firmware from the LVFS to a local directory

You can use Pulp to mirror the entire contents of the LVFS (not private or embargoed firmware, for obvious reasons). Create a repo pointing to PULP_MANIFEST and then sync that on a regular basis to download the metadata and firmware. The Pulp documentation can explain how to set all this up. Make sure the local files are available from a webserver in your private network using SSL.

Then, disable the LVFS by deleting/modifying lvfs.conf and create a myprivateserver.conf file in the clients’ /etc/fwupd/remotes.d:

[fwupd Remote]
Enabled=true
Type=download
Keyring=gpg
MetadataURI=https://my.private.server/mirror/firmware.xml.gz
FirmwareBaseURI=https://my.private.server/mirror

Export a share to all clients

Again, use Pulp to create a big directory holding all the firmware (currently ~10GB), and keep it synced. This time create a NFS or Samba share and export it to clients. Map the folder on clients, and then create a myprivateshare.conf file in /etc/fwupd/remotes.d:

[fwupd Remote]
Enabled=false
Title=Vendor
Keyring=none
MetadataURI=file:///mnt/myprivateshare/fwupd/remotes.d/firmware.xml.gz
FirmwareBaseURI=file:///mnt/myprivateshare/fwupd/remotes.d

Create your own LVFS instance

The LVFS is a free software Python 3 Flask application and can be set up internally, or even externally for that matter. You have to configure much more this way, including things like generating your own GPG keys, uploading your own firmware and setting up users and groups on the server. Doing all this has a few advantages, namely:

  • You can upload your own private firmware and QA it, only pushing it to stable when ready
  • You don’t ship firmware which you didn’t upload
  • You can control the staged deployment, e.g. only allowing the same update to be deployed to 1000 servers per day
  • You can see failure reports from clients, to verify if the deployment is going well
  • You can see nice graphs about how many updates are being deployed across your organisation

I’m hoping to make the satellite deployment LVFS use cases more concrete, and hopefully add some code to the LVFS to make this easier, although it’s not currently required for any Red Hat customer. Certainly a “setup wizard” would make setting up the LVFS much easier than obscure commands on the console.

Comments welcome.

Tag1 Announces Technical Architecture & Leadership (TAL) Support

Posted by Jeff Sheltren on February 14, 2019 11:42 AM
Tag1 works with a wide array of technologies and is the 2nd all-time leading contributor to the Drupal platform, specializing in architecting, optimizing, securing, and delivering large scale systems. Our unparalleled history of major open source contributions, client list, partnerships with global agencies and the leading platform providers, along with our stewardship of the Drupal platform itself, sets Tag1 apart as a leader in the Drupal industry. TAL is built to support the world’s most ambitious development teams through delivery optimization and technical mentorship. For over 10 years, we have provided expert insight into configuration and infrastructure management, security, performance, HA, disaster recovery, and Drupal development services for Fortune 100s, governments, higher-education, not-for-profits, and growing startups. With each project, we’ve continued our commitment to open source, both in elevating the code and the teams who power the community and further digital platforms across the world. The shared benefit of open source is driven by its shared excellence. It is in that spirit of sharing excellence that we are proud to launch Technical Architecture & Leadership (TAL) Support. TAL is inspired by the format of Technical Account Management many are familiar with, but redesigned around our expertise in architecture, delivering technology organization transformation through...

News from Fedora Infrastructure

Posted by Fedora Community Blog on February 14, 2019 11:00 AM

Most of the Community Platform Engineering (CPE) team met in person for a week last month in Brno. The CPE team is the team at Red Hat that works on Fedora and CentOS infrastructure. As a distributed team, we usually use DevConf.cz as an opportunity to meet face to face.

This is an update on what we have been up to during that week.

Continuous Integration

One of the first tasks we achieved was to move as many of the applications we maintain as possible to CentOS CI for our Continuous Integration pipeline. CentOS CI provides us with a Jenkins instance running in an OpenShift cluster; you can have a look at this instance here.

Since a good majority of our applications are developed in Python, we agreed on using tox to execute our CI tests. Adopting tox in our applications gives us a really convenient way to configure the CI pipeline in Jenkins. In fact, we only needed to create a .cico.pipeline file in the application repository with the following content.

fedoraInfraTox { }

At the end of the day, out of the 27 applications we are directly involved in, we have migrated 22 to CentOS CI and tox.
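
For illustration, a minimal tox.ini in this style might look like the following; this is a generic sketch rather than a copy of any particular Fedora application's configuration:

[tox]
envlist = py37,lint

[testenv]
deps = pytest
commands = python -m pytest

[testenv:lint]
deps = flake8
commands = flake8 .

The idea is that the tox environments developers run locally are the same checks the CI pipeline runs.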

OpenShift

A second task we focused on was migrating some of our applications to OpenShift. After identifying which applications were good candidates, we worked on deploying the following:

Currently these are running on OpenShift in the staging environment.

This was a good opportunity to get the team up to speed with deploying applications on OpenShift and using the source-to-image tool. You can have a look at the Ansible playbooks we are using here.

Migration to fedora-messaging

While moving applications to OpenShift, we also worked on the migration from fedmsg to fedora-messaging. First we replaced the fedmsg producers with fedora-messaging, since that part is quite trivial. You can see an example in this pull request.
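
To give a flavour of the producer side, here is a minimal, hypothetical example of publishing a message with fedora-messaging (the topic and body are made up for illustration):

from fedora_messaging import api, message

# build the message; topic and body here are purely illustrative
msg = message.Message(
    topic="org.example.myapp.build.complete",
    body={"build_id": 1234, "status": "success"},
)

# publish it to the configured AMQP broker
api.publish(msg)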

Planning and prioritization

The rest of the week was spent on planning and prioritizing work for this year. Generally we are looking to increase our transparency and make it easier for us and other teams to see which tasks are currently in progress. Some progress on this effort should happen soon once we have a supported Taiga instance available in Fedora.

If you are interested in helping out with any of these efforts, you can join the #fedora-apps channel on the freenode IRC network, or send an email to the infrastructure mailing list.

The post News from Fedora Infrastructure appeared first on Fedora Community Blog.

Adding entries to the udev hwdb

Posted by Peter Hutterer on February 14, 2019 01:40 AM

In this blog post, I'll explain how to update systemd's hwdb for a new device-specific entry. I'll focus on input devices, as usual.

What is the hwdb and why do I need to update it?

The hwdb is a binary database sitting at /etc/udev/hwdb.bin and /usr/lib/udev/hwdb.bin. It is usually used to apply udev properties to specific devices; those properties are then picked up by other processes (udev builtins, libinput, ...) to apply device-specific behaviours. So you'll need to update the hwdb if you need a specific behaviour from the device.

One of the use-cases I commonly deal with is that some touchpad announces wrong axis ranges or resolutions. With the correct hwdb entry (see the example later) udev can correct these at device initialisation time and every process sees the right axis ranges.

The database is compiled from the various .hwdb files you have sitting on your system, usually in /etc/udev/hwdb.d and /usr/lib/udev/hwdb.d. The terser format of the hwdb files makes them easier to update than, say, writing a udev rule to set those properties.

The full process goes something like this:

  • The various .hwdb files are installed or modified
  • The hwdb.bin file is generated from the .hwdb files
  • A udev rule triggers the udev hwdb builtin. If a match occurs, the builtin prints the to-be properties, and udev captures the output and applies it as udev properties to the device
  • Some other process (often a different udev builtin) reads the udev property value and does something.

On its own, the hwdb is merely a lookup tool though, it does not modify devices. Think of it as a virtual filing cabinet: something will need to look at it, otherwise it's just dead weight.

An example for such a udev rule from 60-evdev.rules contains:


IMPORT{builtin}="hwdb --subsystem=input --lookup-prefix=evdev:", \
RUN{builtin}+="keyboard", GOTO="evdev_end"

The IMPORT statement translates as "look up the hwdb, import the properties". The RUN statement runs the "keyboard" builtin which may change the device based on the various udev properties now set. The GOTO statement skips the rest of the file.

So again, on its own the hwdb doesn't do anything, it merely prints to-be udev properties to stdout, udev captures those and applies them to the device. And then those properties need to be processed by some other process to actually apply changes.

hwdb file format

The basic format of each hwdb file contains two types of entries, match lines and property assignments (indented by one space). The match line defines which device it is applied to.

For example, take this entry from 60-evdev.hwdb:


# Lenovo X230 series
evdev:name:SynPS/2 Synaptics TouchPad:dmi:*svnLENOVO*:pn*ThinkPad*X230*
EVDEV_ABS_01=::100
EVDEV_ABS_36=::100

The match line is the one starting with "evdev", the other two lines are property assignments. Property values are strings; any interpretation as numeric values or otherwise is done by the process that requires those properties. Noteworthy here: the hwdb can overwrite previously set properties, but it cannot unset them.

The match line is not specified by the hwdb beyond "it's a glob". The format to use is defined by the udev rule that invokes the hwdb builtin. Usually the format is:


someprefix:search criteria:

For example, the udev rule that applies for the match above is this one in 60-evdev.rules:

KERNELS=="input*", \
IMPORT{builtin}="hwdb 'evdev:name:$attr{name}:$attr{[dmi/id]modalias}'", \
RUN{builtin}+="keyboard", GOTO="evdev_end"

What does this rule do? $attr entries get filled in by udev with the sysfs attributes. So on your local device, the actual lookup key will end up looking roughly like this:

evdev:name:Some Device Name:dmi:bvnWhatever:bvR112355:bd02/01/2018:...

If that string matches the glob from the hwdb, you have a match.

Attentive readers will have noticed that the two entries from 60-evdev.rules I posted here differ. You can have multiple match formats in the same hwdb file. The hwdb doesn't care, it's just a matching system.

We keep the hwdb files matching the udev rules names for ease of maintenance so 60-evdev.rules keeps the hwdb files in 60-evdev.hwdb and so on. But this is just for us puny humans, the hwdb will parse all files it finds into one database. If you have a hwdb entry in my-local-overrides.hwdb it will be matched. The file-specific prefixes are just there to not accidentally match against an unrelated entry.

Applying hwdb updates

The hwdb is a compiled format, so the first thing to do after any changes is to run


$ systemd-hwdb update

This command compiles the files down to the binary hwdb that is actually used by udev. Without that update, none of your changes will take effect.

The second thing is: you need to trigger the udev rules for the device you want to modify. Either you do this by physically unplugging and re-plugging the device or by running


$ udevadm trigger

or, better, trigger only the device you care about to avoid accidental side-effects:

$ udevadm trigger /sys/class/input/eventXYZ

In case you also modified the udev rules you should re-load those too. So the full quartet of commands after a hwdb update is:

$ systemd-hwdb update
$ udevadm control --reload-rules
$ udevadm trigger
$ udevadm info /sys/class/input/eventXYZ

That udevadm info command lists all assigned properties, these should now include the modified entries.

Adding new entries

Now let's get down to what you actually want to do: adding a new entry to the hwdb. And this is where it also gets tricky to give a generic guide, because every hwdb file has its own custom match rules.

The best approach is to open the .hwdb files and the matching .rules file and figure out what the match formats are and which one is best. For USB devices there's usually a match format that uses the vendor and product ID. For built-in devices like touchpads and keyboards there's usually a dmi-based match format (see /sys/class/dmi/id/modalias). In most cases, you can just take an existing entry and copy and modify it.

My recommendation is: add an extra property that makes it easy to verify the new entry is applied. For example do this:


# Lenovo X230 series
evdev:name:SynPS/2 Synaptics TouchPad:dmi:*svnLENOVO*:pn*ThinkPad*X230*
EVDEV_ABS_01=::100
EVDEV_ABS_36=::100
FOO=1

Now run the update commands from above. If FOO=1 doesn't show up, then you know it's the hwdb entry that's not yet correct. If FOO=1 does show up in the udevadm info output, then you know the hwdb matches correctly and any issues will be in the next layer.

Increase the value with every change so you can tell whether the most recent change is applied. And before you submit a pull request, remove the FOO entry.

Oh, and once it applies correctly, I recommend restarting the system to make sure everything is in order on a freshly booted system.

Troubleshooting

The reason for adding hwdb entries is always because we want the system to handle a device in a custom way. But it's hard to figure out what's wrong when something doesn't work (though 90% of the time it's a typo in the hwdb match).

In almost all cases, the debugging sequence is the following:

  • does the FOO property show up?
  • did you run systemd-hwdb update?
  • did you run udevadm trigger?
  • did you restart the process that requires the new udev property?
  • is that process new enough to have support for that property?

If the answer to all these is "yes" and it still doesn't work, you may have found a bug. But 99% of the time, at least one of those is a sound "no. oops.".

Your hwdb match may run into issues with some 'special' characters. If your device has e.g. an ® in its device name (some Microsoft devices have this), a bug in systemd caused the match to fail. That bug is fixed now, but until the fix is available in your distribution, replace the special character with an asterisk ('*') in your match line.

Greybeards who have been around since before 2014 (systemd v219) may remember a different tool to update the hwdb: udevadm hwdb --update. This tool still exists, but it does not have the exact same behaviour as systemd-hwdb update. I won't go into details but the hwdb generated by the udevadm tool can provide unexpected matches if you have multiple matches with globs for the same device. A more generic glob can take precedence over a specific glob and so on. It's a rare and niche case and fixed since systemd v233 but the udevadm behaviour remained the same for backwards-compatibility.

Happy updating and don't forget to add Database Administrator to your CV when your PR gets merged.

Python 3.8 alpha in Fedora

Posted by Miro Hrončok on February 13, 2019 11:42 PM

The Python developers have released the first alpha of Python 3.8.0 and you can already try it out in Fedora! Test your Python code with 3.8 early to avoid surprises once the final 3.8.0 is out in October.

Install Python 3.8 on Fedora

If you have Fedora 29 or newer, you can install Python 3.8 from the official software repository with dnf:

$ sudo dnf install python38

As more alphas, betas and release candidates of Python 3.8 are released, the Fedora package will receive updates. There is no need to compile your own development version of Python; just install it and keep it up to date. New features will be added until the first beta.

Test your projects with Python 3.8

Run the python3.8 command to use Python 3.8 or create virtual environments with the builtin venv module, tox or with pipenv. For example:

$ git clone https://github.com/benjaminp/six.git
Cloning into 'six'...
$ cd six/
$ tox -e py38
py38 runtests: commands[0] | python -m pytest -rfsxX
================== test session starts ===================
platform linux -- Python 3.8.0a1, pytest-4.2.1, py-1.7.0, pluggy-0.8.1
collected 195 items

test_six.py ...................................... [ 19%]
.................................................. [ 45%]
.................................................. [ 70%]
..............................................s... [ 96%]
....... [100%]
========= 194 passed, 1 skipped in 0.25 seconds ==========
________________________ summary _________________________
py38: commands succeeded
congratulations 🙂
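
If you prefer the builtin venv module mentioned above, a quick sketch (the environment path is arbitrary, and the last step assumes the project under test uses pytest):

$ python3.8 -m venv ~/venvs/py38
$ source ~/venvs/py38/bin/activate
(py38) $ python -m pip install pytest
(py38) $ python -m pytest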

What’s new in Python 3.8

So far, only the first alpha has been released, so more features will come. You can however already try out the new walrus operator:

$ python3.8
Python 3.8.0a1 (default, Feb 7 2019, 08:07:33)
[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> while not (answer := input('Say something: ')):
...     print("I don't like empty answers, try again...")
...
Say something:
I don't like empty answers, try again...
Say something: Fedora
>>>

And stay tuned for Python 3.8 as python3 in Fedora 31!

MariaDB multi master replication

Posted by Maciej Lasyk on February 13, 2019 11:00 PM


What this is about

So I have a couple of MariaDB servers and wanted to take backups of some of the databases hosted on them. I could simply schedule a systemd timer job on each of those machines that would mysqldump the required databases on a regular basis - let's say every hour - and then send the dumps somewhere, or make an external service fetch those backups regularly.

But I don't like this idea: first of all because it increases my RPO, and secondly because if one of those DB servers goes down I'd have to redeploy it, load backups, etc. That takes time.

So the idea instead (an old one, mentioned back in 2010 in MySQL High Availability) is to replicate all databases to a slave server and take backups of all the slave DBs.

However, this wasn't easy to do back then in MySQL, because there was no such thing as multi-master replication. Basically, you couldn't start replication from more than one master: in MySQL you had to run a separate slave server for each master you replicated.

In MariaDB 10 you can finally do this. This image shows the idea (I took it from the mariadb.com page):

[Image: multi-source replication diagram from mariadb.com]

Ok, how to do this?

In my scenario I had 2 already running MariaDB 10.x servers (from Fedora upstream) and a fresh one that I designated as the slave for the above masters.

Also, downtime for each master is needed while taking the full backup. I didn't find an easy way to get around this. If you do - please leave a comment - thanks!

Last, but not least - I used mariadb-server version 10.1.26.

1. Take backups of both masters.

Should I really write this point? ;)

2. Prepare your masters' configurations

Now we need to make both master servers actual masters :) In order to do that I edited /etc/my.cnf.d/mariadb-server.cnf (find your own my.cnf file - probably in /etc/my.cnf) and put something like this there (of course leave the other configuration options that are already there):

# first master server my.cnf file

[server]
log-bin
server_id=1
log-basename=master_rss
binlog-ignore-db=mysql

[mysqld]
gtid-domain-id=1
gtid-ignore-duplicates=ON
# second master server my.cnf file

[server]
log-bin
server_id=2
log-basename=master_retromtb
binlog-ignore-db=mysql

[mysqld]
gtid-domain-id=2
gtid-ignore-duplicates=ON

So a bit of explanation:

  • log-bin - thanks to this, the server will become a master and start writing a binary log. This means that a log of changes will be saved to files and later sent to slaves and replayed there. See this article for more information.
  • server_id - basically, keep this value unique for every server in your replication topology. This must be a number.
  • log-basename - will be used as a part of the name of the log-bin files
  • binlog-ignore-db - this tells which databases must not be replicated. Very useful when you want to replicate all DBs except for some set of system DBs.
  • gtid-domain-id - basically, keep this unique for every master. Here you can read details about GTIDs in multi-source replication topologies.
  • gtid-ignore-duplicates - this is actually not really needed in my replication topology. Thanks to this setting, when a slave receives an event with a GTID that was already processed it will ignore it. That can happen when you have a chain of master-slave/master-slave replication.

3. Prepare masters data for sending to slave

Execute this for every master. I did it on both of my servers:

  1. The execution time of this procedure mostly depends on how long it takes to take a full backup of the required databases. You can test it by running time mysqldump --databases db1,db2,db3 -u some_user -p > dbs.sql beforehand.
  2. Restart the MariaDB service in order to apply the above configuration (this might not be needed, as you can probably set the above settings from the MariaDB SQL console at runtime - you can easily find how to do that on the internet; just a note for people who want to minimize downtime).
  3. Log into the MariaDB SQL console and execute: FLUSH TABLES WITH READ LOCK; - this will lock and disable all writes to this server. Prior to doing that I always shut down HTTP services or put them in maintenance mode. Keep in mind that some cron/timer services might also try to connect; locking at the DB layer is therefore safest.
  4. Execute select @@gtid_binlog_pos; and save this position for later use.
  5. Take a backup of the required databases: mysqldump --databases db1,db2,db3 -u some_user -p > dbs.sql
  6. When this is finished, unlock the tables in the SQL console: UNLOCK TABLES;
  7. Copy dbs.sql to the slave server.
  8. Create a replica user: GRANT REPLICATION SLAVE ON *.* TO 'replica_user'@'%' IDENTIFIED BY 'replica_password';

4. Prepare slave server configuration

My /etc/my.cnf.d/mariadb-server.cnf looks like this (I removed everything not related to replication):

[server]
server_id=1000

[mysqld]
gtid-ignore-duplicates=ON

So as you can see - a very simple config.

Now you need to upload both backups:

mysql -u some_user -p < dbs.sql
mysql -u some_user -p < dbs2.sql

Once both backups are uploaded, it's time to start replication:

  1. Execute SHOW ALL SLAVES STATUS\G - this should return an empty set (no replication running for now).
  2. Restart the MariaDB service so the new config is applied.
  3. Run SET GLOBAL gtid_slave_pos = "1-1-X,2-2-Y"; replacing each position with the proper one you saved while taking the backup on the corresponding master.
  4. For each master server set up replication: CHANGE MASTER 'master_name' TO master_host="master_ip_addr", master_port=3306, master_user="replica_user", master_use_gtid=current_pos, master_password='replica_user_pwd';
  5. Now in order to start replication simply run: START ALL SLAVES; and check that no problems were reported: SHOW WARNINGS;
  6. See the replica status: SHOW ALL SLAVES STATUS\G

And that's it.

Debugging and solving problems

You will probably hit some problems when setting this up for the first time. Some helpful commands:

  1. Resetting a master in order to create data backups again for uploading to slaves: RESET MASTER; (run this on the master)
  2. Resetting one slave in order to recreate and reconfigure a particular replication: RESET SLAVE 'master_name' ALL; afterwards you need to STOP ALL SLAVES; and reconfigure the proper gtid_slave_pos via SET GLOBAL... (so check the current GTID and replace the incorrect part with the proper one). Then run CHANGE MASTER... again and START SLAVE 'master_name';
  3. If you encounter a replication error and resolve it somehow, then in order to skip it you will need to: STOP SLAVE 'master_name'; SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1; START SLAVE 'master_name'; This way you can skip one error (increase the counter to skip more than one)

Helpful resources

You can read more about it here:

Using systemd timer instead of cron

Posted by alciregi on February 13, 2019 10:05 PM

Cron is not part of Fedora Silverblue (nor of Atomic Host).
You can install (overlay) it with rpm-ostree, but it is better to avoid overlaying packages on such systems as much as possible.
So, how can you take advantage of systemd instead of cron?
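
As a minimal sketch (the unit names and script path are placeholders), a cron entry translates into a pair of units, a service and a timer:

# /etc/systemd/system/mytask.service
[Unit]
Description=Run my periodic task

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mytask.sh

# /etc/systemd/system/mytask.timer
[Unit]
Description=Run mytask.service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now mytask.timer and check the schedule with systemctl list-timers.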

Convert your Fedora Silverblue to HTPC with Kodi

Posted by Fedora Magazine on February 13, 2019 09:38 AM

Ever wanted to create an HTPC from an old computer lying around? Or do you just have some spare time and want to try something new? This article could be just for you. It will show you the step-by-step process of converting Fedora Silverblue into a fully fledged HTPC.

What is Fedora Silverblue, Kodi and HTPC?

Fedora Silverblue is a system similar to Fedora Workstation. It offers an immutable filesystem (only /var and /etc are writable) and atomic updates using an ostree image, which provides reliable updates with the ability to roll back to a previous version easily. If you want to find out more about Fedora Silverblue visit https://silverblue.fedoraproject.org/ or if you want to try it by yourself you can get it here.

Kodi is one of the best multimedia players available. It provides plenty of features (like automatic download of metadata for movies, support for UPnP, etc.) and it’s open source. It also has many addons, so if you are missing any functionality you can probably find an addon for it.

HTPC is just an acronym for Home Theater PC; in simple words, a PC that is mainly used as an entertainment station. You can connect it to a TV or any monitor and just use it to watch your favorite movies and TV shows or listen to your favorite music.

Why choose Silverblue to create an HTPC?

So why choose Fedora Silverblue for an HTPC? The main reasons are:

  • Reliability – you don’t need to fear that everything will stop working after an update, and if it does, you can roll back easily
  • New technology – it is a good opportunity to play with a new technology.

And why choose Kodi? As stated before, it’s one of the best multimedia players and it’s packaged as a flatpak, which makes it easy to install on Silverblue.

Conversion of Fedora Silverblue to HTPC

Let’s go step by step through this process and see how to create a fully usable HTPC from Fedora Silverblue.

1. Installation of Fedora Silverblue

The first thing you need to do is install Fedora Silverblue. This guide will not cover the installation process, but you can expect a process similar to the standard Fedora Workstation installation. You can get the Fedora Silverblue ISO here.

Don’t create any user during the installation, just set the root password. We will create a user for Kodi later.

2. Creation of user for Kodi

When you are logged in as root in the terminal, you need to create a user that will be used by Kodi. This can be done using the useradd command.
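
A minimal sketch of that step (adding the account to the wheel group is what gives it the temporary sudo rights mentioned below):

# create the kodi user with a home directory and sudo rights via wheel
useradd -m -G wheel kodi
# set a password for the new account
passwd kodi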

Go through GNOME initial setup and create a kodi user. You will need to provide a password. The created kodi user will have sudo permissions, but we will remove them at the end.

It’s also recommended you upgrade Fedora Silverblue. Press the Super key (this is usually the key between Alt and Ctrl) and type terminal. Then start the upgrade.

rpm-ostree upgrade

And reboot the system.

systemctl reboot

3. Installation of Kodi from Flathub

Open a terminal and add a Flathub remote repository.

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

With the Flathub repository added the installation of Kodi is simple.

flatpak install flathub tv.kodi.Kodi

4. Set Kodi as autostart application

First, create the autostart directory.

mkdir -p /home/kodi/.config/autostart

Then create a symlink for the Kodi desktop file.

ln -s /var/lib/flatpak/exports/share/applications/tv.kodi.Kodi.desktop /home/kodi/.config/autostart/tv.kodi.Kodi.desktop

5. Set autologin for kodi user

This step is very useful together with the autostart of Kodi. Every time you restart your HTPC you will end up directly in Kodi and not in GDM or the GNOME shell. To set up the auto login you need to add the following lines to the [daemon] section of /etc/gdm/custom.conf.

AutomaticLoginEnable=True       
AutomaticLogin=kodi

6. Enable automatic updates

For HTPC automatic updates we will use systemd timers. First create a /etc/systemd/system/htpc-update.service file with following content.

[Unit]
Description=Update HTPC

[Service]
Type=oneshot
ExecStart=/usr/bin/sh -c 'rpm-ostree upgrade; flatpak update -y; systemctl reboot'

Then create a /etc/systemd/system/htpc-update.timer file with following content.

[Unit]
Description=Run htpc-update.service once a week

[Timer]
OnCalendar=Wed *-*-* 04:00:00

Start the timer from terminal.

systemctl start htpc-update.timer

You can check if the timer is set with the following command.

systemctl list-timers

This timer will run at 4:00 a.m. each Wednesday. It is recommended to set this to a time when nobody will use the HTPC.

7. Remove root permissions

Now you don’t need root permissions for kodi anymore, so remove it from the wheel group. To do this, type the following command in a terminal.

sudo usermod -G kodi kodi

8. Disable GNOME features

There are a few GNOME features that could be annoying when using Fedora Silverblue as an HTPC. Most of these features can be set up directly in Kodi anyway, so if you want them later it’s easy to configure them there.

To do this, type the following commands.

# Display dim
dconf write "/org/gnome/settings-daemon/plugins/power/idle-dim" false

# Sleep over time
dconf write "/org/gnome/settings-daemon/plugins/power/sleep-inactive-ac-type" 0

# Screensaver
dconf write "/org/gnome/desktop/screensaver/lock-enabled" false

# Automatic updates through gnome-software
dconf write "/org/gnome/software/download-updates" false

And that’s it, you just need to do one last restart to apply the dconf changes. After the restart you will end up directly in Kodi.

[Image: Kodi]

What now?

Now I recommend you play with the Kodi settings a little bit and set everything up to your liking. You can find plenty of guides on the internet.

If you want to automate the process you can use my Ansible script that was written just for this occasion.

EDITOR’S NOTE: This article has been edited since initial publication to reflect various improvements and to simplify the procedure.


Photo by Sven Scheuermeier on Unsplash

Tracking my phone's silent connections

Posted by Kushal Das on February 13, 2019 02:47 AM

My phone has more friends than I do. It talks to more peers (computers) than the number of human beings I talk to on an average day. In this age of smartphones and mobile apps for everything from A to Z, we are dependent on these technologies. However, at the same time, we don’t know much about what is going on inside these computers, equipped with powerful cameras, a GPS device and a microphone, that we carry all the time. All these apps are talking to their respective servers (or should we call them masters?), but there is no easy way to track them.

These questions bothered me for a long time: I wanted to see the servers my phone connects to, and I wanted to be able to block those connections as I wish. However, I never managed to work on it. A few weeks ago, I finally sat down and started building a system, reusing already available open source projects and tools, that will allow me to track what my phone is doing. Maybe not in full detail, but at least it sheds some light on the network traffic from the phone.

Initial trial

I tried to create a wifi hotspot at home using a Raspberry Pi and then started capturing all the packets from the device using standard tools (dumpcap) and later reading through the logs using Wireshark. This procedure meant that I could only capture traffic while connected to the network at home. What about when I am not at home?

Next round

This time I took a slightly different approach. I chose algo to create a VPN server. Using WireGuard, it became straightforward to connect my iPhone to the VPN. This setup also makes it very easy to capture all the traffic from the phone on the VPN server. A few days into the experiment, Kashmir started posting her experiment named Life Without the Tech Giants, where she started blocking all the services from 5 big technology companies. With her help, I contacted Dhruv Mehrotra, the technologist behind the story. After talking to him, I felt that I was going in the right direction. He has already posted details on how they did the blocking, and you can try that at home :)

Looking at the data after 1 week

After capturing the data for the first week, I moved the captured pcap files onto my computer and wrote some Python code to put the data into an SQLite database, enabling me to query the data much faster.
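
As a rough sketch of that step (this is not the author’s actual code; the file names are placeholders), DNS query names can be pulled out of a capture with scapy and stored in SQLite:

import sqlite3

from scapy.all import DNSQR, rdpcap

conn = sqlite3.connect("traffic.db")
conn.execute("CREATE TABLE IF NOT EXISTS dns_queries (qname TEXT)")

for pkt in rdpcap("capture.pcap"):
    # keep only packets that carry a DNS question record
    if pkt.haslayer(DNSQR):
        qname = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        conn.execute("INSERT INTO dns_queries (qname) VALUES (?)", (qname,))

conn.commit()

# example query: domains seen at least 10 times during the week
for name, count in conn.execute(
    "SELECT qname, COUNT(*) FROM dns_queries GROUP BY qname HAVING COUNT(*) >= 10"
):
    print(count, name)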

Domain Name System (DNS) data

The Domain Name System (DNS) is a decentralized system which helps translate human-memorable domain names (like kushaldas.in) into Internet Protocol (IP) addresses (like 192.168.1.1). Computers talk to each other using these IP addresses, so we don’t have to remember so many addresses ourselves. When developers write their applications for the phone, they generally use those domain names to specify where the app should connect.

If I plot all the different domains (including any subdomain) which got queried at least 10 times in a week, we see the following graph.

The first thing to notice is how the phone is trying to find servers from Apple, which makes sense as this is an iPhone. I use the mobile Twitter app a lot, so we also see many queries related to Twitter. Lookout deserves a special mention: it was suggested to me by friends who understand these technologies and security better than I do. Third place is taken by Google; though I sometimes watch YouTube videos, the phone also queried many other Google domains.

There are also many queries to the Akamai CDN service, and I could not find any easy way to identify those hosts; the same goes for the Amazon AWS related hosts. If you know a better way, please drop me a note.

You can see that a lot of data-analytics companies were also queried. dev.appboy.com is a major one, and thankfully algo already blocks that domain at the DNS level. I don’t know which app is trying to connect to which servers; I found out about a few of the apps on my phone by searching the client lists of the above-mentioned analytics companies. In the coming months, I will start blocking those hosts/domains one by one and see which apps stop working.

Looking at data flow

The number of DNS queries is an easy start, but next I wanted to learn more about the actual servers my phone is talking to. The paranoid part of me was pushing to discover these servers.

If we put all of the major companies the phone is talking to, we get the following graph.

Apple leads the chart with 44% of all the connections, 495225 of them. Twitter is in second place, and Edgecastcdn is in third. My phone talked to Google servers 67344 times, which is about 7 times fewer than to Apple itself.

In the next graph, I removed the big players (including Google and Amazon). Then I can see that analytics companies like nflxso.net and mparticle.com account for 31% of the connections, which is a lot. Most probably I will start by blocking these two first. The three other CDN companies, Akamai, Cloudfront, and Cloudflare, have 8%, 7%, and 6% respectively. Do I know what these companies are tracking? Nope, and that is scary enough that one of my friends commented, "It makes me think about throwing my phone in the garbage."

What about encrypted vs unencrypted traffic? Which protocols are being used? I tried to find the answer to the first question, and the answer looks like the following graph. Maybe the numbers will come down if I refine the query and add other parameters; that is a future task.

What next?

As I said earlier, I am working on a set of tools that can be deployed on the VPN server and will provide a user-friendly way to monitor and block/unblock traffic from a phone. The major part of the work is to make sure that the whole thing is easy to deploy and can be used by someone with less technical knowledge.

How can you help?

The biggest thing we need is knowledge of how to analyze the data we are capturing. It is one thing to make reports for personal use, but trying to help others is an entirely different game altogether. We will, of course, need all sorts of contributions to the project. Before anything else, we will have to turn the random code we have into a proper project structure. Keep following this blog for more updates and details about the project.

Note to self

Do not try to read data after midnight, or else I will again mistake a local address for some random dynamic address in Bangkok and freak out (thank you, reverse DNS).

All systems go

Posted by Fedora Infrastructure Status on February 12, 2019 09:31 AM
Service 'Fedora Packages App' now has status: good: Everything seems to be working.

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on February 12, 2019 07:59 AM
Service 'Fedora Packages App' now has status: scheduled: Scheduled maintenance is in progress.

Kiwi TCMS 6.5.3

Posted by Kiwi TCMS on February 11, 2019 10:20 PM

We're happy to announce Kiwi TCMS version 6.5.3! This is a security, improvement and bug-fix update that includes a newer version of Django, several database migrations and fixes for several bugs. You can explore everything at https://demo.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  b9355cf85833    1.039 GB
kiwitcms/kiwi       6.2     7870085ad415    957.6 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955.7 MB
kiwitcms/kiwi       6.1     b559123d25b0    970.2 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970.1 MB
kiwitcms/kiwi       5.3.1   a420465852be    976.8 MB

Changes since Kiwi TCMS 6.5

Security

  • Update Django from 2.1.5 to 2.1.7. Fixes CVE-2019-6975: Memory exhaustion in django.utils.numberformat.format()

Improvements

  • Update mysqlclient from 1.4.1 to 1.4.2
  • Multiple template strings marked as translatable (Christophe CHAUVET)

Database migrations

  • Email notifications for TestPlan and TestCase now default to True
  • Remove TestPlanEmailSettings.is_active field

API

  • New method Bug.report(), References Issue #18
  • Method Bug.create() now accepts parameter auto_report=False

Translations

Bug fixes

  • Show the user who actually tested a TestCase instead of hard-coded value. Fixes Issue #765
  • Properly handle pagination button states and page numbers. Fixes Issue #767
  • Add TestCase to TestPlan if creating from inside a TestPlan. Fixes Issue #777
  • Made TestCase text more readable. Fixes Issue #764
  • Include missing templates and static files from PyPI tarball

Refactoring

  • Use find_packages() when building PyPI tarball
  • Install Kiwi TCMS as tarball package inside Docker image instead of copying from the source directory
  • Pylint fixes
  • Remove testcases.views.ReturnActions() which is now unused
  • Refactor New TestCase to class-based view and add tests

How to upgrade

If you are using Kiwi TCMS as a Docker container then:

cd Kiwi/
git pull
docker-compose down
docker pull kiwitcms/kiwi
docker pull centos/mariadb
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Don't forget to backup before upgrade!

WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

# starting from an older Kiwi TCMS version
docker-compose down
docker pull kiwitcms/kiwi:<next_upgrade_version>
edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate
# repeat until you have reached latest

Happy testing!

Draft Fedora 31 schedule available

Posted by Fedora Community Blog on February 11, 2019 09:31 PM
Image by Wikipedia user used under CC BY-SA 3.0 https://en.wikipedia.org/wiki/File:Pert_example_gantt_chart.gif

It’s almost time for me to submit the Fedora 31 schedule to FESCo for approval. Before I do that, I’m sharing it with the community for comment. After some discussion before the end of the year, we decided not to go with an extended development cycle for Fedora 31. After getting input from teams within Fedora, I have a draft schedule available.

The basic structure of the Fedora 31 schedule is pretty similar to the Fedora 30 schedule. You may notice some minor formatting changes due to a change in the tooling, but the milestones are similar. I did incorporate changes from different teams. Some tasks that are no longer relevant were removed. I added tasks for the Mindshare teams. And I included several upstream milestones.

Take a look at the full schedule. If you have any questions or comments, join me in the FPgM office hours on Wednesday or file an issue in the schedule Pagure. I’ll be submitting the schedule to FESCo next week for approval.

The post Draft Fedora 31 schedule available appeared first on Fedora Community Blog.

Better Bluetooth sound quality on Linux

Posted by Jiri Eischmann on February 11, 2019 08:44 PM

Over a year ago I got my first serious Bluetooth headphones. They worked well with Fedora: they paired, connected, and sound was directed to them. Only the sound quality was underwhelming. I learnt that because of the limited bandwidth of Bluetooth, a codec with audio compression has to be used. There are quite a few of them to pick from: AAC (very widely supported because it’s the only one the iPhone supports, partly freely available), AptX (also very widely supported, but proprietary), AptX-HD (enhanced AptX with a higher bitrate, also proprietary), LDAC (probably the best codec available, highest bitrate, available in Android, supported mostly by Sony devices), and MP3 (also possible, but supported by virtually no devices). And then there is SBC, the native first-generation Bluetooth compression codec, with rather bad sound quality.

My headphones supported SBC, AAC, AptX, AptX-HD and LDAC, so all the advanced codecs. Sadly, on Linux it fell back to the basic SBC because no other codec was available for Bluetooth, and headphones costing €200 produced rather underwhelming sound. I mostly listen to music on Spotify, and listening to it on my headphones meant transcoding 320 kbps OGG to SBC and losing a lot of sound quality.

Then I recalled that Sony had released LDAC as open source in the Android Open Source Project. And they really did: you can find libldac released there under the Apache 2.0 License. So it could possibly be made available on Linux, too. BlueZ was already able to negotiate LDAC with the end device. What was missing was a plugin for PulseAudio that would utilize the codec and encode the stream into LDAC before sending it over Bluetooth to the headphones.

Today I learnt that such a plugin has finally been created. And besides LDAC, it also supports AAC, AptX, and AptX-HD. Those are patent-protected codecs and the plugin relies on ffmpeg to support them, so it’s not likely they will be available in Fedora any time soon. But libldac is already in Fedora package review and is waiting for the final legal approval. The plugin currently depends on ffmpeg, but if that dependency were made optional, we could at least ship LDAC support by default in Fedora Workstation.

I thought we could also support AAC because its decoder/encoder is already available in Fedora, but I learnt that it only supports the original AAC format, while what devices support these days is HE-AAC, which is still protected by patents.

Anyway, someone has already built packages of both the plugin and libldac, and I installed them to test it. And it worked on Fedora 29 Workstation; LDAC was used for Bluetooth stream encoding:

[screenshot: LDAC shown as the codec in use]

I don’t have bat ears, but I could recognize a difference in sound quality immediately.

If I’m not mistaken, this makes Linux the first desktop system to support LDAC. And with support for the other codecs it will be the OS with the best Bluetooth sound quality support, because all other systems support only a subset of the list, hence fewer headphones/speakers at their best sound quality.

Plugin ZiGate pour Jeedom v1.2.0

Posted by Guillaume Kulakowski on February 11, 2019 07:55 PM

After quite a lot of effort and, above all, testing, version 1.2.0 of the ZiGate plugin for Jeedom has just been released. Among the many new features: bug fixes of all kinds, support for new devices, and the unique identifier of a Jeedom device (LogicalId) moving from the address (ADDR) to the IEEE. The idea is to ensure an identifier […]

The post Plugin ZiGate pour Jeedom v1.2.0 appeared first on Guillaume Kulakowski's blog.

Meaningful 2fa on modern linux

Posted by William Brown on February 11, 2019 02:00 PM


Recently I heard of someone asking the question:

“I have an AD environment connected with <product> IDM. I want to have 2fa/mfa to my linux machines for ssh, that works when the central servers are offline. What’s the best way to achieve this?”

Today I’m going to break this down - but the conclusion for the lazy is:

This is not realistically possible today: use ssh keys with ldap distribution, and mfa on the workstations, with full disk encryption.

Background

So there are a few parts here. AD is, for all intents and purposes, an LDAP server. The <product> is also an LDAP server that syncs to AD. We don’t care if that’s 389-ds, FreeIPA or a vendor solution. The results are basically the same.

Now, the Linux auth stack is, and will always be, PAM for authentication and nsswitch for user ID lookups. Today we assume that most people run sssd, but PAM modules for different options are possible.

There are a stack of possible options, and they all have various flaws.

  • FreeIPA + 2fa
  • PAM TOTP modules
  • PAM radius to a TOTP server
  • Smartcards

FreeIPA + 2fa

Now this is the one most IDM people would throw out there. The issue here is that the person already has AD and a vendor product. They don’t need a third solution.

Next is the fact that FreeIPA stores the TOTP secrets in LDAP, which means FreeIPA has to be online for it to work. So this is eliminated by the “central servers offline” requirement.

PAM radius to TOTP server

Same as above: An extra product, and you have a source of truth that can go down.

PAM TOTP module on hosts

Okay, even if you can get this to scale, you need to send the private seed material of every TOTP device that could log in to the machine, to every machine. That means any compromise compromises every TOTP token on your network. A bad place to be in.

Smartcards

Smartcards are notoriously difficult to get functional, let alone with SSH. Don’t bother. (That is, where the smartcard does TLS auth to the SSH server.)

Come on William, why are you so doom and gloom!

Let's back up for a second and think about what we are trying to prevent by having MFA at all. We want to prevent a single-factor compromise from having a large impact, and we want to prevent brute force attacks. (There are probably more reasons, but these are the ones I’ll focus on.)

So the best answer: Use mfa on the workstation (password + totp), then use ssh keys to the hosts.

This means the target of the attack is small, and the workstation can be protected by things like full disk encryption and group policy. To sudo on the host you still need the password. This makes sudo to root MFA, as you need something you know and something you have.

If you are extra conscious you can put your ssh keys on smartcards. This works on Linux and OSX workstations with YubiKeys, as far as I am aware. Apparently you can have ssh keys in a TPM, which would give you tighter hardware binding, but I don’t know how to achieve this (yet).

To make all this better, you can distribute your ssh public keys in LDAP, which means you gain the benefits of LDAP account locking/revocation, you can remove the keys instantly if they are breached, and you have very little admin overhead in configuring this service on the Linux server side. Think about how easy onboarding is if you only need to put your ssh key in one place and it works on every server! Let alone shutting down a compromised account: lock it in one place, and it is denied access to every server.

SSSD as the LDAP client on the server can also cache the passwords (hashed) and the ssh public keys, which means a disconnected client can still authenticate users.
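
As a rough illustration of that setup, the server side needs very little configuration. The fragment below is a minimal sketch assuming the generic LDAP provider and the stock sshd helper shipped with SSSD; the domain name is a placeholder and the exact options depend on your provider:

# /etc/sssd/sssd.conf (fragment)
[sssd]
services = nss, pam, ssh

[domain/example.com]
ldap_user_ssh_public_key = sshPublicKey

# /etc/ssh/sshd_config (fragment)
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody

With something like this in place, sshd asks SSSD for the user's keys at login time, and SSSD's cache keeps that working while the LDAP servers are unreachable.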

At this point, because you have ssh key auth working, you could even deny password auth as an option in ssh altogether, eliminating an entire class of bruteforce vectors.

For bonus marks: You can use AD as the generic LDAP server that stores your SSH keys. No additional vendor products needed, you already have everything required today, for free. Everyone loves free.

Conclusion

If you want strong, offline capable, distributed mfa on linux servers, the only choice today is LDAP with SSH key distribution.

Want to know more? This blog contains how-tos on SSH key distribution for AD, SSH keys on smartcards, and how to configure SSSD to use SSH keys from LDAP.

Deploy a Django REST service on OpenShift

Posted by Fedora Magazine on February 11, 2019 09:00 AM

In a previous article we saw how to build a “To Do” application using the Django REST Framework. In this article we will look at how to use Minishift to deploy this application on a local OpenShift cluster.

Prerequisites

This article is the second part of a series; you should make sure that you have read the first part, linked right below. All the code from the first part is available on GitHub.

Build a Django RESTful API on Fedora: https://fedoramagazine.org/build-a-django-restful-api-on-fedora/

Getting started with Minishift

Minishift allows you to run a local OpenShift cluster in a virtual machine. This is very convenient when developing a cloud native application.

Install Minishift

To install Minishift the first thing to do is to download the latest release from their GitHub repository.

For example on Fedora 29 64 bit, you can download the following release

$ cd ~/Download
$ curl -LO https://github.com/minishift/minishift/releases/download/v1.31.0/minishift-1.31.0-linux-amd64.tgz

The next step is to copy the content of the tarball into your preferred location, for example ~/.local/bin

$ cp ~/Download/minishift-1.31.0-linux-amd64.tgz ~/.local/bin
$ cd ~/.local/bin
$ tar xzvf minishift-1.31.0-linux-amd64.tgz
$ cp minishift-1.31.0-linux-amd64/minishift .
$ rm -rf minishift-1.31.0-linux-amd64
$ source ~/.bashrc

You should now be able to run the minishift command from the terminal

$ minishift version
minishift v1.31.0+cfc599

Set up the virtualization environment

To run, Minishift needs to create a virtual machine, therefore we need to make sure that our system is properly configured. On Fedora we need to run the following commands:

$ sudo dnf install libvirt qemu-kvm
$ sudo usermod -a -G libvirt $(whoami)
$ newgrp libvirt
$ sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.10.0/docker-machine-driver-kvm-centos7 -o /usr/local/bin/docker-machine-driver-kvm
$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

Starting Minishift

Now that everything is in place we can start Minishift by simply running:

$ minishift start
-- Starting profile 'minishift'
....
....

The server is accessible via web console at:
https://192.168.42.140:8443/console

Using the URL provided (make sure to use your cluster IP address) you can access the OpenShift web console and log in using the username developer and password developer.

If you face any problem during the Minishift installation, it is recommended to follow the details of the installation procedure.

Building the Application for OpenShift

Now that we have an OpenShift cluster running locally, we can look at adapting our “To Do” application so that it can be deployed on the cluster.

Working with PostgreSQL

To speed up development and make it easy to have a working development environment, in the first part of this article series we used SQLite as the database backend. Now that we are looking at running our application in a production-like cluster, we add support for PostgreSQL.

In order to keep the SQLite setup working for development we are going to create a different settings file for production.

$ cd django-rest-framework-todo/todo_app
$ mkdir settings
$ touch settings/__init__.py
$ cp settings.py settings/local.py
$ mv settings.py settings/production.py
$ tree settings/
settings/
├── __init__.py
├── local.py
└── production.py

Now that we have 2 settings files — one for local development and one for production — we can edit production.py to use the PostgreSQL database settings.

In todo_app/settings/production.py, replace the DATABASES dictionary with the following:

DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql",
"NAME": "todoapp",
"USER": "todoapp",
"PASSWORD": os.getenv("DB_PASSWORD"),
"HOST": os.getenv("DB_HOST"),
"PORT": "",
}
}

As you can see, we are using Django PostgreSQL backend and we are also making use of environment variables to store secrets or variables that are likely to change.

While we are editing the production settings, let’s configure another secret, the SECRET_KEY. Replace the current value with the following:

SECRET_KEY = os.getenv("DJANGO_SECRET_KEY")
ALLOWED_HOSTS = ["*"]
DEBUG = False

REST_FRAMEWORK = {
'DEFAULT_RENDERER_CLASSES': (
'rest_framework.renderers.JSONRenderer',
)
}

We edited the ALLOWED_HOSTS variable to allow any host or domain to be served by Django, and we also set the DEBUG variable to False. Finally, we configured the Django REST Framework to render only JSON; this means that we will not have an HTML interface to interact with the service.
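
As a quick aside (not a step from the original tutorial), you can sanity-check these production settings locally by exporting the same variables the cluster will later provide; the values below are just placeholders:

$ export DJANGO_SECRET_KEY=a_very_long_and_random_string
$ export DB_PASSWORD=todoapp
$ export DB_HOST=127.0.0.1
$ python manage.py check --settings=todo_app.settings.production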

Building the application

We are now ready to build our application in a container, so that it can run on OpenShift. We are going to use the source-to-image (s2i) tool to build a container directly from the git repository. That way we do not need to worry about maintaining a Dockerfile.

For the s2i tool to be able to build our application, we need to make a few changes to our repository. First, let’s create a requirements.txt file to list the dependencies needed by the application.

Create django-rest-framework-todo/requirements.txt and add the following:

django
djangorestframework
psycopg2-binary
gunicorn

psycopg2-binary is the client used to connect to the PostgreSQL database, and gunicorn is the web server we are using to serve the application.

Next we need to make sure to use the production settings. In django-rest-framework-todo/manage.py and django-rest-framework-todo/wsgi.py edit the following line:

os.environ.setdefault('DJANGO_SETTINGS_MODULE','todo_app.settings.production')

Application Deployment

That’s it, we can now create a new project in OpenShift and deploy the application. First let’s log in to Minishift using the command line tool.

$ oc login
Authentication required for https://192.168.42.140:8443 (openshift)
Username: developer
Password: developer
Login successful.
....
$ oc new-project todo
Now using project "todo" on server "https://192.168.42.140:8443".
....

After logging in to the cluster, we created a new project “todo” to run our application. The next step is to create a PostgreSQL application.

 $ oc new-app postgresql POSTGRESQL_USER=todoapp POSTGRESQL_DATABASE=todoapp POSTGRESQL_PASSWORD=todoapp

Note that we are passing the environment variables needed to configure the database service; these are the same values as in our application settings.

Before we create our application, we need to know the database host address.

$ oc get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
postgresql ClusterIP 172.30.88.94 5432/TCP 3m

We will use the CLUSTER-IP to configure the DB_HOST environment variable of our Django application.

Let’s create the application:

$ oc new-app centos/python-36-centos7~https://github.com/cverna/django-rest-framework-todo.git#production DJANGO_SECRET_KEY=a_very_long_and_random_string DB_PASSWORD=todoapp DB_HOST=172.30.88.94

We are using the centos/python-36-centos7 s2i image with a source repository from GitHub. Then we set the needed environment variables DJANGO_SECRET_KEY, DB_PASSWORD and DB_HOST.

Note that we are using the production branch from that repository and not the default master branch.

The last step is to make the application available outside of the cluster. For this, execute the following commands.

$ oc expose svc/django-rest-framework-todo
$ oc get route
NAME HOST/PORT
django-rest-framework-todo django-rest-framework-todo-todo.192.168.42.140.nip.io

You can now use the HOST/PORT address to access the web service.

Note that the build takes a couple of minutes to complete.

Testing the application

Now that we have our service running, we can use HTTPie to easily test it. First let’s install it.

$ sudo dnf install httpie

We can now use the http command line to send requests to our service.

$ http -v GET http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/
....
[]

$ http -v POST http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/ title="Task 1" description="A new task"
...
{
"description": "A new task",
"id": 1,
"status": "todo",
"title": "Task 1"
}
$ http -v PATCH http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/1 status="wip"
{
"status": "wip"
}
$ http --follow -v GET http://django-rest-framework-todo-todo.192.168.42.140.nip.io/api/todo/1
{
"description": "A new task",
"id": 1,
"status": "todo",
"title": "Task 1"
}

Conclusion

In this article, we have learned how to install Minishift on a local development system and how to build and deploy a Django REST application on OpenShift. The code for this article is available on GitHub.


Photo by chuttersnap on Unsplash

Episode 133 - Smart locks and the government hacking devices

Posted by Open Source Security Podcast on February 11, 2019 12:49 AM
Josh and Kurt talk about the fiasco hacks4pancakes described on Twitter and what the future of smart locks will look like. We then discuss what it means if the Japanese government starts hacking consumer IoT gear, is it ethical? Will it make anything better?



Show Notes


    What is Gentoo Linux?

    Posted by Alvaro Castillo on February 10, 2019 01:20 PM

    Gentoo Linux is a metadistribution that stands out for being highly configurable and customizable among all the existing Linux distributions. A user of Gentoo Linux can not only compile software to their own needs, choosing the features it should include, but can also compile that software for their own CPU, further improving its responsiveness and resource consumption.

    From its installation onwards, Gentoo Linux promises a start that can be customized to the taste of...

    When I was sleepy

    Posted by Kushal Das on February 10, 2019 02:31 AM

    Back in 2005 I joined my first job, at a software company in Bangalore. It was the backend of a big foreign bank. We trained heavily on different parts of software development during the first few months. At the same time, I had an altercation (about some Java code) with the senior manager who was in charge of the new joinees and their placement within the company. The result? Everyone else got a team but me, and I had to roam around the office to find an empty seat and wait there till the actual seat owner came back. I managed to spend a lot of days in the cafeteria on the rooftop. But then they made new rules that one could not sit there either, other than at lunch time.

    So, I went around asking, talking to all the different people in the office (there were 500+ folks, iirc), whether they knew of any team that would take on a fresher. I tried to throw in words like Linux and open source to better my chances. And then one day, I heard that the research and development team was looking for someone with Linux and PHP skills. I went in to have a chat with the team, and they told me the problem (it was actually on DSpace, a Java based documentation/content repository system), and after looking at my resume they decided to give me a desktop for a couple of weeks. I managed to solve the problem in the next few days, and after a week or so, I was told that I would join the team. There were a couple of super senior managers and I was the only kid on that block. Being part of this team allowed me to explore different technologies and programming languages.

    I will write down my experiences in more detail later, but for today, I want to focus on one particular incident. The kind of incident which all system administrators experience at least once in their life (I guess). I got root access to the production server of the DSpace installation within a few weeks. I had a Windows desktop, and used PuTTY to ssh in to the server. As this company was the backend of the big bank, except for a few senior managers, no one else had access to the Internet on their systems. There were 2 desktops in a kiosk on the ground floor, and one had to stand in a long queue to get a chance to access the Internet.

    One day I came back from lunch (a good one), feeling a bit sleepy. I had taken down the Tomcat server, pushed the changes to the application, and then wanted to start the server up again. I typed the whole path to startup.sh (I don’t remember the actual name, I’m just guessing it was startup.sh) and hit Enter. I was waiting for the long screens of messages this startup script spewed as it started up, but instead, I got back the prompt quickly. I was wondering what went wrong. Then, looking at the monitor very closely, I suddenly realised that I had been planning to delete some other file, had written rm at the beginning of the command prompt, forgotten about it, and then typed the path of startup.sh. Suddenly I felt the place get very hot and stuffy; I started sweating and all the blood drained from my face in the next few moments. I was at panic level 9. I was wondering what to do and thought about the next steps to follow. I still had a small window of time to fix the service. Suddenly I realized that I could get a copy of the script from the Internet (yay, Open Source!). So, I picked up a pad and a pen, ran down to the ground floor, and stood in the queue to get access to a computer with Internet. After getting a seat, I started writing down the whole startup.sh on the pad and double checked it. I ran right back up to my cubicle and feverishly typed in the script (somehow, miraculously, without any typo, in one go). As I executed it, I saw the familiar output, messages scrolling up, screen after joyful screen. And finally, as the server started up, I sighed a huge sigh of relief. After the adrenalin levels came down, I wrote an incident report to my management, and later talked about it during a meeting.

    From that day on, before doing any kind of destructive operation, I double check the command prompt for any typo. I make sure that I don’t remove anything randomly, and I also make sure that I have my backups in place.

    Write nbdkit plugins in Rust

    Posted by Richard W.M. Jones on February 08, 2019 08:47 PM

    nbdkit is our flexible, pluggable NBD server. See my FOSDEM video to find out more about some of the marvellous things you can do with it.

    The news is that you can now write nbdkit plugins in Rust. As with the OCaml bindings, Rust plugins compile to native *.so plugin files which are loaded and called directly from nbdkit. However, it really needs a Rust expert to implement a more natural and idiomatic way to use the API than what we have now.

    FPgM report: 2019-06

    Posted by Fedora Community Blog on February 08, 2019 08:22 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week.

    I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

    Announcements

    Meetings

    Fedora 30 Status

    • Mass rebuild is underway.
    • Code complete (testable) deadline is 2019-02-19.
    • Code complete (100%) deadline is 2019-03-05.

    Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

    Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.

    Changes

    Submitted to FESCo

    Approved by FESCo

    The post FPgM report: 2019-06 appeared first on Fedora Community Blog.

    New openQA tests: update live image build/install

    Posted by Adam Williamson on February 08, 2019 06:20 PM

    Hot on the heels of adding installer image build/install tests to openQA, I’ve now added tests which do just the same, but for the Workstation live image.

    That means that, when running the desktop tests for an update, openQA will also run a test that builds a Workstation live image and a test that boots and installs it. The packages from the update will be used – if relevant – in the live image creation environment, and included in the live image itself. This will allow us to catch problems in updates that relate to the build and basic functionality of live images.

    Here’s an update where you can see that both the installer and live image build tests ran successfully and passed – see the updates-everything-boot-iso and updates-workstation-live-iso flavors.

    I’m hoping this will help us catch compose issues much more easily during the upcoming Fedora 30 release process.

    GNOME Photos: an overview of zooming

    Posted by Debarshi Ray on February 08, 2019 04:36 PM

    I was recently asked about how zooming works in GNOME Photos, and given that I spent an inordinate amount of time getting the details right, I thought I should write it down. Feel free to read and comment, or you can also happily ignore it.

    Smooth zooming

    One thing that I really wanted from the beginning was smooth zooming. When the user clicks one of the zoom buttons or presses a keyboard shortcut, the displayed image should smoothly flow in and out instead of jumping to the final zoom level — similar to the way the image smoothly shrinks in to make way for the palette when editing, and expands outwards once done. See this animated mock-up from Jimmac to get an idea.

    For the zooming to be smooth, we need to generate a number of intermediate zoom levels to fill out the frames in the animation. We have to dish out something in the ballpark of sixty different levels every second to be perceived as smooth because that’s the rate at which most displays refresh their screens. This would have been easier with the 5 to 20 megapixel images generated by smart-phones and consumer-grade digital SLRs; but just because we want things to be sleek, it doesn’t mean we want to limit ourselves to the ordinary! There is high-end equipment out there producing images in excess of a hundred megapixels and we want to robustly handle those too.

    Downscaling by large factors is tricky. When we are aiming to generate sixty frames per second, there’s less than 16.67 milliseconds for each intermediate zoom level. All we need is a slightly big zoom factor that stresses the CPU and main memory just enough to exceed our budget and break the animation. It’s a lot more likely to happen than a pathological case that crashes the process or brings the system to a halt.

    Mipmaps to the rescue!

    Video: https://rishi.fedorapeople.org/blog/miscellaneous/gnome-photos-zooming-smooth.webm
    A 112.5 megapixel or 12500×9000 image being smoothly zoomed in and out on an Intel Kaby Lake i7 with a HiDPI display. At the given window size, the best fit zoom level is approximately 10%. On a LoDPI display it would’ve been 5%. Note that simultaneously encoding the screencast consumes enough extra resources to make it stutter a bit. That’s not the case otherwise.

    Photos uses GEGL to deal with images, and image pixels are held in GeglBuffers. Each GeglBuffer implicitly supports 8 mipmap levels. In other words, a GeglBuffer not only has the image pixels at the original resolution, or level zero, at which they were fed into the buffer, but it also caches progressively lower resolution representations of it. For example, at 50% or level one, at 25% or level two, and so on.

    This means that we never downscale by more than a factor of two during an animation. If we want to zoom an image down to 30%, we take the first mipmap level, which is already cached at 50%, and from there on it’s just another 60% to reach the originally intended zoom target of 30%. Knowing that we won’t ever have to downscale by more than a factor of two in a sensitive code path is a relief.
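
    To make the “never more than a factor of two” point concrete, here is a tiny sketch (not GEGL code, just the arithmetic) of how a requested zoom factor splits into a cached mipmap level plus a residual scale that always stays between 50% and 100%:

    import math

    def split_zoom(zoom):
        """Split a zoom factor (e.g. 0.3 for 30%) into (mipmap level, residual scale)."""
        if zoom >= 1.0:
            return 0, zoom                  # no downscaling needed, use level zero
        level = min(int(math.floor(-math.log2(zoom))), 7)   # assuming cached levels 0-7
        residual = zoom * (2 ** level)      # in (0.5, 1.0] below the cap
        return level, residual

    print(split_zoom(0.30))   # (1, 0.6): take the 50% mipmap, then scale it by 60%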

    But that’s still not enough.

    It doesn’t take long to realize that the user barely catches a fleeting glimpse of the intermediate zoom levels. So, we cut corners by using the fast but low quality nearest neighbour sampler for those; and only use a higher quality box or bilinear sampler, depending on the specific zoom level, for the final image that the user will actually see.

    With this set-up in place, on the Intel Kaby Lake i7 machine used in the above video, it consistently takes less than 10 milliseconds for the intermediate frames, and less than 26 milliseconds for the final high quality frame. On an Intel Sandybridge i7 with a LoDPI display it takes less than 5 and 15 milliseconds respectively, because there are fewer pixels to pump. On average it’s a lot faster than these worst case figures. You can measure for yourselves using the GNOME_PHOTOS_DEBUG environment variable.

    A lot of the above was enabled by Øyvind Kolås’ work on GEGL. Donate to his fund-raiser if you want to see more of this.

    There’s some work to do for the HiDPI case, but it’s already fast enough to be perceived as smooth by a human. Look at the PhotosImageView widget if you are further interested.

    An elastic zoom gesture

    While GTK already comes with a gesture for recognizing pinch-to-zoom, it doesn’t exactly match the way we handle keyboard, mouse and touch pad events for zooming. Specifically, I wanted the image to snap back to its best fit size if the user tried to downscale beyond it using a touch screen. You can’t do that with any other input device, so it makes sense that it shouldn’t be possible with a touch screen either. The rationale being that Photos is optimized for photographic content, which are best viewed at their best fit or natural sizes.

    For this elastic behaviour to work, the semantics of how GtkGestureZoom calculates the zoom delta had to be reworked. Every time the direction of the fingers changed, the reference separation between the touch points relative to which the delta is computed must be reset to the current distance between them. Otherwise, if the fingers change direction after having moved past the snapping point, the image will abruptly jump instead of sticking to the fingers.

    Video: https://rishi.fedorapeople.org/blog/miscellaneous/gnome-photos-zooming-elastic.webm
    The image refuses to become smaller than the best fit zoom level and snaps back. Note that simultaneously encoding the screencast consumes enough extra resources to make it stutter a bit. That’s not the case otherwise.

    With some help from Carlos Garnacho, we have a custom gesture that hooks into GtkGestureZoom’s begin and update signals to implement the above. The custom gesture is slightly awkward because GtkGestureZoom is a final class and can’t be derived, but it’s not too bad for a prototype. It’s called PhotosGestureZoom, in case you want to look it up.

    The screencasts feature a 112.5 megapixel or 12500×9000 photo of hot air balloons at ClovisFest taken by Soulmates Photography / Daniel Street available under the Creative Commons Attribution-Share Alike 3.0 Unported license.

    The touch points were recorded in an X session with a tool written by Carlos Garnacho.

    Fosdem 2019

    Posted by Christof Damian on February 08, 2019 09:50 AM

    This year I managed to attend Fosdem in Brussels for the first time. Since I started to be involved in open source software I have always wanted to go, but somehow something else always came up. This time I made an early effort to book my vacation days, hotel and flight.
    I stayed at the Bedford Hotel & Congress Center, which was the worst part of the whole trip. Just avoid it.


    I had never been to Brussels and for some reason thought it would be a bit of a dump with a European government ghetto attached. But it is quite the opposite: a very charming town with lots of things to do. I checked out the Atomium, House of European History, Veloseum and the Natural History Museum.
    There are many statues, parks and graffiti spread around the city to keep you busy.
    Most of the time I spent walking around the city and checking out the old buildings and cobblestone streets. I obviously also had fries and waffles.




    Fosdem is a pretty big conference with many parallel tracks. Because it was my first time, I took the easiest path and just stayed in the main room where all the keynotes were happening. Some were better attended than others, but the room was always pretty full. Here is a list of the talks I followed, with a link to the official website; some already have the videos attached.

    FLOSS, the Internet and the Future
    https://fosdem.org/2019/schedule/event/floss_internet_future/

    Blockchain: The Ethical Considerations
    https://fosdem.org/2019/schedule/event/blockchain_ethics/ 
    Very much a high level talk, but presented very well and entertaining.

    Mattermost’s Approach to Layered Extensibility in Open Source
    https://fosdem.org/2019/schedule/event/mattermost_layered_extensibility/ 
    Mostly a commercial for Mattermost, not much about layering. 

    Matrix in the French State
    What happens when a government adopts open source & open standards for all its internal communication?
    https://fosdem.org/2019/schedule/event/matrix_french_state/ 
    I had never heard of Matrix before; it looks like a very interesting project and it is cool to see it adopted by the French government. I tried it out myself, but it is still pretty buggy - at least the registration process.

    Solid: taking back the Web through decentralization
    App development as we know it will radically change
    https://fosdem.org/2019/schedule/event/solid_web_decentralization/
    I read about this on lwn.net. To make this useful in any way it has to be widely adopted, which seems unlikely. Like the semantic web, it is a developer's dream that always seems to be in the near future.

    The Current and Future Tor Project
    Updates from the Tor Project
    https://fosdem.org/2019/schedule/event/tor_project/ 
    Very cool to see how Tor is moving and adapting to allow more people to enjoy privacy. It certainly got me to install the Tor Browser on my mobile and to think about running a Tor node.

    Algorithmic Sovereignty and the state of community-driven open source development
    Is there a radical interface pedagogy for algorithmic governementality?
    https://fosdem.org/2019/schedule/event/algorithmic_sovereignty/

    Open Source at DuckDuckGo
    Raising the Standard of Trust Online
    https://fosdem.org/2019/schedule/event/duckduckgo_open_source/ 
    For me it still has a long way to go before it can replace Google in my daily life. But it is the default in the Tor Browser, so I'll see how it goes. They also have some additional tools to help with privacy, which looked pretty useful.

    Crostini: A Linux Desktop on ChromeOS
    https://fosdem.org/2019/schedule/event/crostini/ 
    An infomercial from Google. 

    Open Source C#, .NET, and Blazor - everywhere PLUS WebAssembly
    https://fosdem.org/2019/schedule/event/open_source_microsoft/ 
    I planned to use this slot to get some food, but I am glad I didn't. A very entertaining talk about the portability of C# code, all demoed live with use cases on the CLI, the web, a micro computer and a micro controller. I just still have a deep-seated mistrust of Microsoft, so I am not ready to look into C#.

    The Cloud is Just Another Sun
    https://fosdem.org/2019/schedule/event/cloud_is_another_sun/ 
    I am myself worried about using cloud services like AWS where I am locked in to some software, but some of the services are just so convenient and cheap that it makes sense for a business.

    2019 - Fifty years of Unix and Linux advances
    https://fosdem.org/2019/schedule/event/keynote_fifty_years_unix/
    Maddog gave a very long talk about the history of Unix; it made me feel old and young at the same time.


    Thunderbird for Wayland

    Posted by Martin Stransky on February 08, 2019 09:09 AM

    While Firefox is already built with Wayland by Mozilla (thx. Mike Hommey) and Fedora also ships Firefox Wayland, Thunderbird users (me included) are missing it. But why care about it anyway?

    Wayland compositors have one great feature (at least for me): they perform screen scaling independently of the actual hardware. So I can set a 200% desktop scale on a semi-HiDPI monitor (4K & 28″) and all Wayland applications immediately work at that scale without font adjustments/various DPI tweaks/etc.

    So here comes Thunderbird with the Wayland backend. I originally did the package for my personal needs, but you can have it too 🙂 Just get it from the Fedora repos as

    dnf install thunderbird-wayland

    You’ll receive a ‘Thunderbird on Wayland’ application entry which can be registered as the default Mail application at

    Settings -> Details -> Default Applications

    Some background info: recent stable Thunderbird (thunderbird-60.5.0) is based on the Firefox ESR60 line and I backported the related parts from the latest stable Firefox 65. Some code couldn’t be ported and there are still issues with the Wayland backend (misplaced menus on multi-monitor setups, for instance) so use the bird at your own risk.

     


    Ansible and FreeIPA Part 2

    Posted by Adam Young on February 08, 2019 01:09 AM

    After some discussion with Bill Nottingham I got a little further along with what it would take to integrate Ansible Tower and FreeIPA. Here are the notes from that talk.

    FreeIPA works best when you can use SSSD to manage the users and groups of the application. Since Ansible Tower is a Django application running behind Nginx, this means using a REMOTE_USER configuration. However, Ansible Tower already provides integration with SAML and OpenIDC using Python Social Auth. If an administrator wants to enable SAML, they do so in the database layer, and that provides replication to all of the Ansible Tower instances in a cluster.

    The Social integration provides the means to map from the SAML/OpenIDC assertion to the local users and groups. An alternative based on REMOTE_USER would have the same set of mappings, but built from variables exposed by the SSSD layer. The variables available would be any exposed by an Nginx module, such as those documented here.

    Some configuration of the Base OS would be required beyond enrolling the system as an IPA client. Specifically, any variables that the user wishes to expose would be specified in /etc/sssd/sssd.conf.

    This mirrors how I set up SSSD Federation in OpenStack Keystone. The configuration of SSSD is the same.

    Nice launcher for i3wm: xlunch

    Posted by Lukas "lzap" Zapletal on February 08, 2019 12:00 AM


    I have been a happy user of i3 for some time now. About a year ago, I changed my app launcher from dmenu to xlunch and it's been a pretty fine experience. So let me share this with you.

    There is no Fedora package for xlunch yet and I am too lazy to create one. Volunteers! But the installation goes like this:

    git clone https://github.com/Tomas-M/xlunch
    cd xlunch
    sudo dnf install imlib2-devel libX11-devel
    make
    

    I don’t even bother to install it and keep it in the workspace directory; I assume the installation would go something like:

    make install DESTDIR=/usr/local
    

    The migration was pretty easy - in the .i3/autostart file I deleted dmenu_path and added a new line that ensures the entries are generated after each start. It takes a while because the script converts many SVG icons to raster images, but it’s no big deal for me. There is also a faster script available in the git repo that does not generate such nice icons, but I like to have the fancy ones:

    bash /home/lzap/work/xlunch/extra/genentries -p /home/lzap/.local/share/icons > /home/lzap/.config/xlunch/entries.dsv 2>/dev/null &
    

    And that’s all, assuming you have this configuration:

    bindsym $mod+space exec /home/lzap/work/xlunch/xlunch -i /home/lzap/.config/xlunch/entries.dsv -f RobotoCondensed-Regular.ttf/10 -G
    

    In my case it’s the WIN key with the SPACE combination that brings it up. Have fun.

    Ansible and FreeIPA Part-1

    Posted by Adam Young on February 07, 2019 08:25 PM

    Ansible is a workflow engine. I use it to do work on my behalf.

    FreeIPA is an identity management system. It allows me to manage the identities of users in my organization.

    How do I get the two things to work together? The short answer is that it is trivial to do using Ansible Engine. It is harder to do using Ansible Tower.

    Edit: Second part is here. Third part is coming.

    Engine


    Let's start with Engine. Let's say that I want to execute a playbook on a remote system. Both my local and remote systems are FreeIPA clients. Thus, I can use Kerberos to authenticate when I ssh in to the remote system. This same mechanism is reused by Ansible when I connect to the system. The following two commands are roughly comparable:

    scp myfile.txt  ayoung@hostname:  
    
    ansible  --user ayoung hostname -m copy -a \
    "src=myfile.txt dest=/home/ayoung"
    

    Ignoring all the extra work that the copy module does, checking hashes etc.

    Under the covers, the ssh layer checks the various authentication mechanism available to communicate with the remote machine. If I have run kinit (successfully) prior to executing the scp command, it will try the Kerberos credentials (via GSSAPI, don’t get me started on the acronym soup) to authenticate to the remote system.

    This is all well and good if I am running the playbook interactively. But, what if I want to kick off the playbook from an automated system, like cron?

    Keys

    The most common way that people use ssh is with asymmetric keys and no certificates. On a Linux system, these keys are kept in ~/.ssh. If I am using RSA, then the private key is kept in ~/.ssh/id_rsa. I can use a passphrase to protect this file. If I want to script using that key, I need to remove the passphrase, or I need to store the passphrase in a file that automates submitting it. While there are numerous ways to handle this, a very common pattern is to have a second set of credentials, stored in a second file, and a configuration option that says to use them. For example, I have a directory ~/keys that contains an id_rsa file. I can use it with ssh like this:

    ssh cloud-user@128.31.24.146 -i ~/keys/id_rsa

    And with Ansible:

     ansible -i inventory.py ayoung_resources --key-file ~/keys/id_rsa  -u cloud-user   -m ping

    Ansible lacks knowledge of Kerberos. There is no way to say “kinit blah” prior to the playbook. While you can add this to a script, you are then providing a wrapper around Ansible.

    Automating via Kerberos

    Kerberos has a different way to automate credentials: you can use a keytab (a file with symmetric keys stored in it) to get a Ticket Granting Ticket (TGT), and you can place that TGT in a special directory: /var/kerberos/krb5/user/<uid>

    I wrote this up a few years back: https://adam.younglogic.com/2015/05/auto-kerberos-authn/
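
    For example, once a keytab has been issued for a dedicated principal (such as the service principal created below), an automated job only needs something along these lines before it runs a playbook; the keytab path and playbook names here are placeholders, not from the original write-up:

    kinit -k -t /home/automation/ansible.keytab ansible/testing.demo1.freeipa.org@DEMO1.FREEIPA.ORG
    ansible-playbook -i inventory.py site.yml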

    Let's take this a little bit further. Let's say that I don’t want to perform the operation as me. Specifically, I don’t want to create a TGT for my user that has all of my authority in an automated fashion. I want to create some other, limited-scope principal (the Kerberos term for users and things that are like users that can do things) and use that.

    Service Principals

    I’d prefer to create a service principal from my machine. If my machine is testing.demo1.freeipa.org and I create on it a service called ansible, I’ll end up with a principal of:

    ansible/testing.demo1.freeipa.org@DEMO1.FREEIPA.ORG

    A user can allocate to this principal a Keytab, an X509 Certificate, or both. These credentials can be used to authenticate with a remote machine.

    If I want to allow this service credential to get access to a host that I set up as some specified user, I can put an entry in the file ~/.k5login that will specify what principals are allowed to login. So I add the above principal line and now that principal can log in.

    Lets assume, however, that we want to limit what that user can do. Say we want to restrict it only to be able to perform git operations. Instead of ~/.k5login, we would use ~/.k5users. This allows us to put a list of commands on the line. It would look like this:

    ansible/testing.demo1.freeipa.org@DEMO1.FREEIPA.ORG /usr/bin/git

    Ansible Tower

    Now that we can set up delegations for the playbooks to use, we can turn our eyes to Ansible Tower. Today, when a user kicks off a playbook from Tower, they have to reuse a set of credentials stored in Ansible Tower. However, that means that any external identity management must be duplicated inside Tower.

    What if we need to pass through the user that logs in to Tower in order to use that initial user's identity for operations? We have a few tools available.

    Let's start with the case where the user logs in to the Tower instance using Kerberos. We can make use of a mechanism that goes by the unwieldy name of Service-for-User-to-Proxy, usually shortened to S4U2Proxy. This provides a constrained delegation.

    What if a user is capable of logging in via some mechanism that is not Kerberos? There is a second mechanism called Service-for-User-to-Self. This allows a system to convert from, say, a password based mechanism, to a Kerberos ticket.

    Simo Sorce wrote these up a few years back.

    https://ssimo.org/blog/id_011.html

    And the Microsoft RFC that describe the mechanisms in detail

    https://msdn.microsoft.com/en-us/library/cc246071.aspx

    In the case of Ansible Tower, we’d have to specify at the playbook level which user to use when executing the template: the AWX account that runs Tower, or the TGT fetched via the S4U* mechanism.

    What would it take to extend Tower to use S4U? Tower can already use Kerberos from the original user:

    https://docs.ansible.com/ansible-tower/latest/html/administration/kerberos_auth.html.

    The Tower web application would then need to be able to perform the S4U transforms. Fortunately, it is Python code. The FreeIPA server has to perform these transforms itself, and they would be comparable transforms.

    Configuring the S4U mechanisms in FreeIPA is a fairly manual process, as documented by https://vda.li/en/posts/2013/07/29/Setting-up-S4U2Proxy-with-FreeIPA/ I would suggest using Ansible to automate it.

    Wrap Up

    Kerberos provides a distributed authentication scheme with validation that the user is still active. That is a powerful combination. Ansible should be able to take advantage of the Kerberos support in ssh to greatly streamline the authorization decisions in provisioning and orchestration.

    Telegram Redken bot and amazon/discounts/bargains unification

    Posted by Pablo Iranzo Gómez on February 07, 2019 08:04 PM

    Introduction

    Over the last few years, the increased adoption of Telegram among my contacts, the low learning curve for writing bots for it, and the capacity of bigger group chats and ‘channels’ for one-way publishing of information have helped the proliferation of ‘bargain’ groups.

    In those groups there are ‘discounts’ and ‘sales’ on about any range of products you can imagine (clothing, shoes, toys, mobiles, TVs, beauty, etc.).

    From those groups I was able to get good deals; sometimes some items were even ‘for free’ via the usage of coupon codes, etc.

    The bad side is that the number of channels has grown beyond control and has turned the Telegram client into a source of ‘noise’ rather than a source of information.

    Of course, the biggest issue is that many of those channels provide the same ‘information’ (sometimes, but not always, the same), and even if there are some ‘theme’ channels, it’s not uncommon to see offers that might not suit your needs in the biggest categories (for example, you want a mobile but you’re getting all offers about ‘tech’ stuff, or you want ‘lego’ and you get lots of other toys or kids’ things).

    Python to the rescue

    Using Python we can approach this in several steps:

    • First, we can expand the URL to a longer one (those channels tend to use URL shorteners):

    import logging

    import requests


    def expandurl(url):
        """
        Expands short URL to long one
        :param url: url to expand
        :return: expanded url
        """

        logger = logging.getLogger(__name__)

        try:
            session = requests.Session()  # so connections are recycled
            resp = session.head(url, allow_redirects=True, timeout=10)
            newurl = resp.url
            logger.debug(msg="URL Expanded to: %s" % newurl)
        except:
            # Fake as we couldn't expand url
            newurl = url

        logger.debug(msg="Expanding url: %s to %s" % (url, newurl))

        return newurl
    
    • Second, restrict the links we process to the ones pointing at Amazon, as it’s ‘easy’ to catch the product ID and perform a search. This leaves out a lot of offers, but makes it easier to locate them as there’s an API to use:

    import re

    def findamazonproductid(newurl):
        """
        Finds amazon product id and country in URL
        :param newurl: url to parse
        :return: productid, country
        """
        domain = None
    
        # Find Product ID in URL
        prodregex = re.compile(r"\/([A-Z0-9]{10})")
        r = prodregex.search(newurl)
    
        if r:
            productid = r.groups()[0]
        else:
            productid = None
    
        domainregex = re.compile(r"amazon\.([a-z0-9\.]{1,5})/")
        r = domainregex.search(newurl)
        if r:
            domain = r.groups()[0]
    
        return productid, domain
    
• Now that we have the product ID and the domain, we can use a database to store when each product was first received and mark it as ‘seen’ so we don’t repeat it (a minimal sketch follows).
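A minimal sketch of that bookkeeping, assuming a local SQLite file; the table layout and the is_new_product helper name are illustrative, not part of redken:

import sqlite3
import time


def is_new_product(productid, domain, dbpath="offers.db"):
    """Record a product the first time it is seen; return False if already known."""
    conn = sqlite3.connect(dbpath)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS seen ("
        "productid TEXT, domain TEXT, first_seen INTEGER, "
        "PRIMARY KEY (productid, domain))"
    )
    row = conn.execute(
        "SELECT first_seen FROM seen WHERE productid=? AND domain=?",
        (productid, domain),
    ).fetchone()
    if row:
        conn.close()
        return False

    conn.execute(
        "INSERT INTO seen VALUES (?, ?, ?)",
        (productid, domain, int(time.time())),
    )
    conn.commit()
    conn.close()
    return True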

The Telegram bot API can help with sending messages or receiving them from a chat, so now it’s our chance to code it the way we want.
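For instance, with the python-telegram-bot package (just a sketch, not necessarily how redken does it; the token and chat ID are placeholders):

from telegram import Bot

# Placeholders: use your own bot token and a real chat or user ID
bot = Bot(token="123456789:REPLACE_ME")
bot.send_message(chat_id=123456789,
                 text="New offer: https://www.amazon.es/dp/B00EXAMPLE")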

Additionally, Python packages like python-amazon-simple-product-api can, with a few simple steps, help enhance the database by querying additional details.

from amazon.api import AmazonAPI

# amazon_access, amazon_key and tag are your Amazon Product Advertising credentials
amazon = AmazonAPI(amazon_access, amazon_key, tag, region=regdomain)
product = amazon.lookup(ItemId=productid)
    

The above code, with the productid obtained in the previous snippet, allows us to query several attributes via the Amazon API, such as product.title, product.price_and_currency, etc.

    Building the solution

    Ok, we’ve seen what is needed to expand url, get product id from it and then how to use amazon to get product price, title, etc.

    In my case, the next step was to get rid of unwanted offers, but once I had all the data above, it was not difficult to reuse the ‘highlight’ module in https://t.me/redken_bot to forward the products matching the words I wanted to track.

A global replace from ‘hilight’ to ‘ofertas’, plus a few extra comparisons for ‘gofertas’, and now redken supports the following syntax:

    # Add tracking for a word in discounts
    /gofertas add <topic>
    /ofertas add <topic>
    
    # Remove tracking for a word in discounts
    /[g]ofertas delete topic
    
# Get list of configured discounts
    /[g]ofertas list
    

The difference between ‘gofertas’ and ‘ofertas’ is that, by default, ‘ofertas’ adds the tracking to your ‘userid’, forwarding matches privately to you, while ‘gofertas’, when used in a group, forwards to the ‘groupid’, so that, for example, users interested in a topic such as ‘lego’ or ‘quadricopters’ will receive any new listing as a group message.

    Enjoy and happy filtering!

    PHP version 7.2.15 and 7.3.2

    Posted by Remi Collet on February 07, 2019 12:23 PM

RPMs of PHP version 7.3.2 are available in the remi-php73 repository for Fedora 27-29 and Enterprise Linux 6 (RHEL, CentOS).

RPMs of PHP version 7.2.15 are available in the remi repository for Fedora 28-29 and in the remi-php72 repository for Fedora 26-27 and Enterprise Linux 6 (RHEL, CentOS).

No security fix this month, so no update for version 7.1.26.

PHP version 5.6 and version 7.0 have reached their end of life and are no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as a module for Fedora 29 and EL-8-Beta.

    Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

    Replacement of default PHP by version 7.3 installation (simplest):

    yum-config-manager --enable remi-php73
    yum update php\*

    Parallel installation of version 7.3 as Software Collection (x86_64 only):

    yum install php73

    Replacement of default PHP by version 7.2 installation (simplest):

    yum-config-manager --enable remi-php72
    yum update

    Parallel installation of version 7.2 as Software Collection (x86_64 only):

    yum install php72

    And soon in the official updates:

To be noticed:

• EL7 RPMs are built using RHEL-7.6
• EL6 RPMs are built using RHEL-6.10
• a lot of new extensions are also available; see the PECL extension RPM status page

For more information, read:

    Base packages (php)

    Software Collections (php72 / php73)

    Fedora logo redesign

    Posted by Fedora Magazine on February 07, 2019 10:45 AM

    The current Fedora Logo has been used by Fedora and the Fedora Community since 2005. However, over the past few months, Máirín Duffy and the Fedora Design team, along with the wider Fedora community have been working on redesigning the Fedora logo.

Far from being just an arbitrary logo change, this process is being undertaken to solve a number of issues encountered with the current logo. Some of the issues with the current logo include the lack of a single-colour variant and, consequently, the logo not working well on dark backgrounds. Other challenges with the current logo are confusion with other well-known brands and the use of a proprietary font.

    The new Fedora Logo design process

    Last month, Máirín posted an amazing article about the history of the Fedora logo, a detailed analysis of the challenges with the current logo, and a proposal of two candidates. A wide ranging discussion with the Fedora community followed, including input from Matt Muñoz, the designer of the current Fedora logo. After the discussions, the following candidate was chosen for further iteration:

[Image: the logo candidate chosen for further iteration]

    In a follow-up post this week, Máirín summarizes the discussions and critiques that took place around the initial proposal, and details the iterations that took place as a result.

    After all the discussions and iterations, the following 3 candidates are where the team is currently at:

[Image: the three current logo candidates]

    Join the discussion on the redesign over at Máirín’s blog, and be sure to read the initial post to get the full story on the process undertaken to get to this point.

    PHP 5.6 is dead

    Posted by Remi Collet on February 07, 2019 09:11 AM

After PHP 7.0, and as announced, PHP version 5.6.40 was the last official release of PHP 5.6.

This means that after the 7.2.15 and 7.3.2 releases, some security vulnerabilities are not, and won't be, fixed by the PHP project.

    To keep a secure installation, the upgrade to a maintained version is strongly recommended:

    • PHP 7.2 is in active support mode, and will be maintained until December 2019 (2020 for security).
    • PHP 7.3 is in active support mode, and will be maintained until December 2020 (2021 for security).

    Read :

However, given the very large number of downloads by users of my repository (>30%), the version is still available in the remi repository for Enterprise Linux (RHEL, CentOS...) and Fedora (Software Collections) and will include the latest security fixes.

Warning: this is a best-effort action, depending on my spare time, without any warranty, only to give users more time to migrate. This can only be temporary, and upgrading must be the priority.

    Devconf.cz 2019 trip report

    Posted by Adam Williamson on February 06, 2019 07:01 PM

    I’ve just got back from my Devconf.cz 2019 trip, after spending a few days after the conference in Red Hat’s Brno office with other Fedora QA team members, then a few days visiting family.

    I gave both my talks – Don’t Move That Fence ‘Til You Know Why It’s There and Things Fedora QA Robots Do – and both were well-attended and, I think, well-received. The slide decks are up on the talk pages, and recordings should I believe go up on the Devconf Youtube channel soon.

    I attended many other talks, my personal favourite being Stef Walter’s Using machine learning to find Linux bugs. Stef noticed something I also have noticed in our openQA work – that “test flakes” are very often not just some kind of “random blip” but genuine bugs that can be investigated and fixed with a little care – and ran with it, using the extensive amount of results generated by the automated test suite for Cockpit as input data for a machine learning-based system which clusters “test flakes” based on an analysis of key data from the logs for each test. In this way they can identify when a large number of apparent “flakes” seem to have significant common features and are likely to be occurrences of the same bug, allowing them then to go back and analyze the commonalities between those cases and identify the underlying bug. We likely aren’t currently running enough tests in openQA to utilize the approach Stef outlined in full, but the concept is very interesting and may be useful in future with more data, and perhaps for Fedora CI results.

    Other useful and valuable talks I saw included Dan Walsh on podman, Lennart Poettering on portable services, Daniel Mach and Jaroslav Mracek on the future of DNF, Kevin Fenzi and Stephen Smoogen on the future of EPEL, Jiri Benc and Marian Šámal on a summer coding camp for kids, Ben Cotton on community project management, the latest edition of Will Woods’ and Stephen Gallagher’s crusade to kill all scriptlets, and the Fedora Council BoF.

    There were also of course lots of useful “hallway track” sessions with Miroslav Vadkerti, Kevin Fenzi, Mohan Boddu, Patrick Uiterwijk, Alexander Bokovoy, Dominik Perpeet, Matthew Miller, Brian Exelbierd and many more – it is invaluable to be able to catch up with people in person and discuss things that are harder to do in tickets and IRC.

    As usual it was an enjoyable and productive event, and the rum list at the Bar That Doesn’t Exist remains as impressive as ever…;)

    Fedora Strategy FAQ Part 2: What does this mean for me?

    Posted by Fedora Community Blog on February 06, 2019 06:33 PM

    This post is part of an FAQ series about the updated strategic direction published by the Fedora Council.

    The Council’s updated strategic direction guidance is intentionally high-level, which means you probably have a lot of questions about what it actually means for you. If you’re a Fedora packager or a software developer, here’s some more concrete answers.

    What does this mean for Packagers and other groups like them?

Packagers remain free to provide software at the versions and cadence they wish. However, that doesn’t mean they can block others providing the same software in different versions and cadences. For example, if you only want to work on the very latest version of a particular piece of software as it comes from upstream, that’s cool — but you have to leave room for someone else who wants to maintain older releases of that same software.

    This isn’t necessarily new — we’ve seen this with, for example, co-maintainers where one person works on the main Fedora OS package and someone else maintains an EPEL branch. But this goes further with features and even bugs. Perhaps a Fedora solution like CoreOS or the Fedora KDE Plasma desktop needs a slightly different version than the “main” one to enable (or strip down) some particular feature. That’s okay! We have multiple stream branches to allow this.

    The same is true for changes and other ideas, including greater automation in packaging. Perhaps someone wants to provide a stream where updates are automatically triggered when upstream makes a release. Or maybe someone wants to try out a whole new way of generating specfiles from templates. We don’t block people out, instead we provide options.  There is no obligation for packagers or others to provide all possible options, but we also don’t want to restrict those options from being provided by someone who is interested.

    What does this mean for software developers?

Software developers will be able to use Fedora to develop software by selecting the building blocks that make up their development environment.  These development environments are essentially highly-tailored solutions with extremely specific applicability. Additionally, the output of software development is a building block that other solutions can use.

    Ideally, Fedora becomes the reference architecture for the next generation of software development. More flexibility in package branches will make that possible — if what we have isn’t a perfect match, there can be options within the project rather than requiring people to instead look elsewhere.

    The post Fedora Strategy FAQ Part 2: What does this mean for me? appeared first on Fedora Community Blog.

    Fedora logo redesign update

    Posted by Máirín Duffy on February 06, 2019 05:43 PM
[Image: Fedora Design Team logo]

    As we’ve talked about here in a couple of posts now, the Fedora design team has been working on a refresh of the Fedora logo. I wanted to give an update on the progress of the project.

    We have received a lot of feedback on the design from blog comments, comments on the ticket, and through social media and chat. The direction of the design has been determined by that feedback, while also keeping in mind our goal of making this project a refresh / update and not a complete redesign.

    Where we left off

    Here are the candidates we left off with in the last blog post on this project:

    Candidate #1

    Candidate #2

    How we’ve iterated

    Here’s what we’ve worked on since presenting those two logo candidates, in detail.

    Candidate #2 Dropped

    Based on feedback, one of the first things we decided to do was to drop candidate #2 out of the running and focus on candidate #1. The reason for this is that there are other logos that are really similar to it, and it would be difficult to change the design to provide enough space between the designs.

It also seemed as if, according to the feedback, candidate #1 is closer to the current logo anyway. Again, a major goal was to iterate on what we had – keeping closer to our current logo seemed in keeping with that.

    Redesign of ‘a’

    One of our redesign goals was to minimize confusion between the letter ‘a’ in the logotype and the letter ‘o.’ While the initial candidate #1 proposal included an extra mark to make the ‘a’ more clearly not an ‘o’, there was still some feedback that at small sizes it could still look ‘o’ like. The new proposed typeface for the logotype, Comfortaa, does not include an alternate ‘a’ design, so I created a new “double deckah” version of the ‘a’. Initial feedback on this ‘a’ design has been very positive.

    Redesign of ‘f’

    We received feedback that the stock ‘f’ included in Comfortaa is too narrow compared to other letters in the logotype, and other feedback wondering if the top curve of the ‘f’ could better mirror the top curve of the ‘f’ in the logo mark. We did a number of experiments along these lines, even pursuing a suggested idea to create ligatures for the f:

    The ligatures were a bit much, and didn’t give the right feel. Plus we really wanted to maintain the current model of having a separable logomark and logotype. Experimenting like this is good brain food though, so it wasn’t wasted effort.

Anyhow, we tried a few different ways of widening the f, also playing around with the cross mark on the character. Here are some of the things we tried:

    • The upper left ‘f’ is the original from the proposal – it is essentially the stock ‘f’ that the Comfortaa typeface offers.
    • The upper right ‘f’ is an exact copy of the top curve of the ‘f’ in the Fedora mark. This causes a weird interference with the logomark itself when adjacent – they look close but not quite the same (even though they are exactly the same). There’s a bit of an optical illusion effect that they seem to trigger. While this could be pursued further and adjusted to account for the illusion, honestly, I think having a distinction between the mark and the type isn’t a bad thing, so we tried other approaches.
• The lower left ‘f’ has some of the character of the loop from the mark, including the short cross mark, but it is a little more open and wider. This was not a preferred option based on feedback – why I’m not sure. It’s a bit overbearing maybe, and doesn’t quite fit with the other letters (e.g., the r’s top loop, which is more understated.)
    • The lower right ‘f’ is the direction I believe the ‘f’ in this redesign should go, and initial feedback on this version has been positive. It is wider than the stock ‘f’ in Comfortaa, but avoids too much curviness in the top that is uncharacteristic of the font – for example, look at how the top curve compares to the top curve of the ‘r’ – a much better match. The length of the cross is pulled even a bit wider than the original from the typeface, to help give the width we were looking for so the letters feel a bit more as if they have a consistent width.

    Redesign of ‘e’

    This change didn’t come about as a result of feedback, but because of a technical issue – trying to kern different versions of the ‘f’ a bit more tightly with the rest of the logo as we played with giving it more width. Spinning the ‘e’ – at an angle that mimics the diagonal and angle of the infinity logo itself – provides a bit more horizontal negative space to work with within the logo type such that the different experiments with the ‘f’ didn’t require isolating the ‘f’ from the rest of the letters in the logotype (you can see the width created via the vertical rule in the diagram below.)

    Once I tried spinning it, I really rather liked the look because of its correspondence with the infinity logo diagonal. Nate Willis suggested opening it, and playing with the width of the tail at the bottom – a step shown on the bottom here. I think this helps the ‘e’ and as a result the entire logotype relate more clearly to the logomark, as the break in the e’s cross mimics the break in the mark where the bottom loop comes up to the f’s cross.

    (As in all of these diagrams, the first on the top is the original logotype from the initial candidate #1 proposal.)

    Putting the logotype changes together

    We’ve looked at each tweak of the logotype in isolation. Here is how it looks all together – starting from the original logotype from the initial candidate #1 proposal to where we’ve arrived today:

    Iterating the mark

    There has been a lot of work on the mark, although it may not seem like it based on the visuals! There were a few issues with the mark, some that came up in the feedback:

    • Some felt the infinity was more important than the ‘f’, some felt the ‘f’ was more important than the infinity. Depending on which way an individual respondent felt, they suggested dropping one or the other in response to trying to avoid other technical issues that were brought up.
    • There was feedback that perhaps the gaps in the mark weren’t wide enough to read well.
• For a nice, clean mark, we wanted to minimize the number of cuts to avoid it looking like a stencil.
    • There was some confusion about the mark looking like – depending on the version – a ‘cf’ or a ‘df.’
    • There was some feedback that the ‘f’ didn’t look like an ‘f’, but it looked like a ‘p’.
    • There was mixed feedback over whether or not the loops should be even sizes or slightly skewed for balance.

    Here’s just a few snapshots of some of the variants we tried for the mark to try to play with addressing some of this feedback:

    • #1 is from the original candidate #1 proposal.
    • From #1, you can see – in part to address the concern of the ‘f’ looking like a ‘p’, as well as removing a stencil-like ‘cut’ – the upper right half of the loop is open as it would be in a normal ‘f’ character.
    • #2 has a much thinner version of the inner mark. #1 is really the thickest; subsequent iterations #3-#4-#5 emulate the thickness of the logotype characters to achieve some balance / relationship between the mark and type.
    • #3 has a straight cut in the cross loop. There are some positives to this – this can have a nice shaded effect in some treatments, giving a bit of depth / dimension to the loop to distinguish it from the main ‘f’ mark. However, especially with the curved cut ‘e’, it doesn’t relate as closely to the type.
    • #4 has a rounded cut in the loop, and also has shifted the bottom loop and cross point to make the two ‘halves’ of the mark more even based on feedback requesting what that would look like. The rounded loop relates very closely to the new ‘e’ in the logotype.
    • #5 is very similar to #4, with the difference in size between the loops preserved for some balance.

    I am actually not sure which version of the mark to move forward with, but I suspect it will be from the #3-#4-#5 set.

    Where we are now

    So here’s a new set of candidates to consider, based on all of that work outlined above. All constructive, respectful feedback is encouraged and we are very much grateful for it. Let us know your thoughts in the blog comments below. And if you’d like to do a little bit of mix and matching to see how another combination would work, I’m happy to oblige as time allows (as you probably saw in the comments on the last blog post as well as on social media.)

    Some feedback tips from the last post that still apply:

    The most useful feedback is stated as a problem, not a solution. E.g., if you suggest changing an element, to understand your perspective it’s helpful to know why you seek to change that element. Also note that while “I don’t like X” or “I like Y” is a perfectly valid reaction, it’s not particularly helpful unless you can dig in a little deeper and share with us why you feel that way, what specific technical details of the logo (shape, contrast, color, clarity, connotation, meaning, similarity to something else, etc.) you think triggered the feeling.

    Please also note this is not a vote. We would love your feedback in order to iterate and push the designs forward. If this was a vote or poll, we’d set one up using the proper software. We want feedback on why you like, don’t like, or otherwise react to what you see here. We are not going to tally “votes” here and make a decision based on that. Here is an example of a very productive and helpful set of feedback that resulted in a healthy back and forth with a new direction for the designs. Providing feedback on specific components of the logo is great brain food for making it better!

    Update: I have disabled comments. I’ve just about reached my limit of incoming thoughtlessness and cruelty. If you have productive and respectful feedback to share, I am very interested in hearing it still. I don’t think I’m too hard to get in touch with, so please do!



Vote for the Fedora 30 supplemental wallpapers!

    Posted by Charles-Antoine Couret on February 06, 2019 10:50 AM


Since Fedora 20, the default system delivery has included a few additional wallpapers. As usual, contributors could submit their own artwork or photographs to decorate this new release.

Now that the submission period has ended, we move on to the voting phase. Anyone with a FAS account can select 16 of the dozens that are available. The most popular ones will of course be chosen and shipped in Fedora 30 when it is released.

Voting takes place in the Nuancier application until February 25!

For those interested, the badge associated with this action requires a manual step: simply click the link offered on the page after voting.

    4 cool new projects to try in COPR for February 2019

    Posted by Fedora Magazine on February 06, 2019 08:00 AM

    COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

    Here’s a set of new and interesting projects in COPR.

    CryFS

    CryFS is a cryptographic filesystem. It is designed for use with cloud storage, mainly Dropbox, although it works with other storage providers as well. CryFS encrypts not only the files in the filesystem, but also metadata, file sizes and directory structure.

    Installation instructions

    The repo currently provides CryFS for Fedora 28 and 29, and for EPEL 7. To install CryFS, use these commands:

    sudo dnf copr enable fcsm/cryfs
    sudo dnf install cryfs

    Cheat

Cheat is a utility for viewing various cheatsheets on the command line, aiming to help you remember how to use programs that you only use occasionally. For many Linux utilities, cheat provides cheatsheets containing condensed information from man pages, focusing mainly on the most commonly used examples. In addition to the built-in cheatsheets, cheat allows you to edit the existing ones or create new ones from scratch.


    Installation instructions

    The repo currently provides cheat for Fedora 28, 29 and Rawhide, and for EPEL 7. To install cheat, use these commands:

    sudo dnf copr enable tkorbar/cheat
    sudo dnf install cheat

    Setconf

Setconf is a simple program for making changes in configuration files, serving as an alternative to sed. The only thing setconf does is find a key in the specified file and change its value. Setconf provides only a few options to change its behavior — for example, uncommenting the line that is being changed. See the example after the installation instructions below.

    Installation instructions

    The repo currently provides setconf for Fedora 27, 28 and 29. To install setconf, use these commands:

    sudo dnf copr enable jamacku/setconf
    sudo dnf install setconf
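Once installed, basic usage is a single command that finds the key in a file and sets its value in place. An illustrative example, assuming the usual setconf FILE KEY VALUE form (the file name, key and value here are made up):

setconf app.conf MAX_CONNECTIONS 10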

    Reddit Terminal Viewer

Reddit Terminal Viewer, or rtv, is an interface for browsing Reddit from the terminal. It provides the basic functionality of Reddit, so you can log in to your account, view subreddits, comment, upvote and discover new topics. Rtv currently doesn’t, however, support Reddit tags.


    Installation instructions

    The repo currently provides Reddit Terminal Viewer for Fedora 29 and Rawhide. To install Reddit Terminal Viewer, use these commands:

    sudo dnf copr enable tc01/rtv
    sudo dnf install rtv

    Cockpit 187

    Posted by Cockpit Project on February 06, 2019 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 187.

    Machines: More operations for Storage Pools

    You can now activate, deactivate, and delete the storage pools for your virtual machines.

    Storage pool operation

    Domains: More information about the joined domain

    Clicking on the name of the joined domain will now open a dialog with more information about that domain.

    Domain information

    You can also leave the domain from this dialog.

    Storage: The options for VDO volumes are explained

    When creating a VDO volume, the options are now explained with tooltips.

    VDO options explained

    Machines: Support for oVirt will be dropped in the future

    The oVirt project is mostly stalled and thus we don’t plan to invest much in Cockpit’s oVirt support either. Cockpit will keep the current support for oVirt in stable distributions, but will no longer provide it from Fedora 30 and RHEL 8 on.

    Try it out

    Cockpit 187 is available now:

OpenClass: Why IBM bought Red Hat when it could have downloaded it for free under the name CentOS

    Posted by HULK Rijeka on February 05, 2019 09:26 PM

The Rijeka branch of the Croatian Linux Users' Association (HULK) and the Department of Informatics at the University of Rijeka invite you to an OpenClass that will take place on Thursday, February 7, 2019 at 5 PM in the University Departments building, room O-028. The OpenClass, titled

Why IBM bought Red Hat when it could have downloaded it for free under the name CentOS

will be given by Dr. Vedran Miletić from the Department of Informatics at the University of Rijeka.

Abstract

Three months have already passed since IBM's purchase of Red Hat for 34 billion dollars, certainly the most important IT acquisition of last year. Red Hat is the leading company in the domain of free and open source software and makes a very good living from it: in 2012 it broke the magic threshold of one billion dollars in annual revenue, and by 2015 that revenue had already reached 2 billion. The hybrid cloud market in which Red Hat participates is estimated at 1 trillion dollars, so there is enormous room for growth.

Some will say that such an acquisition was inevitable once Red Hat matured as a company. Even if that is so, several questions arise: first of all, is IBM a better or worse steward than the remaining three IT giants that could have been buyers (Google, Microsoft and Oracle)? How exactly does Red Hat make money from free software, and why does IBM see its own interest in that? How has IBM treated the free and open source software community over the past two or three decades, and what can we expect now? What is the impact of this acquisition on the Fedora Project? Finally, what exactly did IBM pay 34 billion dollars for, when all of Red Hat Enterprise Linux is available (admittedly, without the branding) under the name CentOS?

We hope to see you there.

    How to install the latest stable Kdenlive on Fedora or Ubuntu

    Posted by Luca Ciavatta on February 05, 2019 03:54 PM

    Kdenlive is an acronym for KDE Non-Linear Video Editor and it is primarily aimed at the GNU/Linux platform but also works on BSD. It is based on the MLT framework and accepts many audio and video formats, allows you to add effects, transitions and render into the format of your choice. You can easily install it on Fedora or Ubuntu[...]

    The post How to install the latest stable Kdenlive on Fedora or Ubuntu appeared first on CIALU.NET.

    IoT edge development and deployment with containers through OpenShift: Part 2

    Posted by RHEL Developer on February 05, 2019 01:00 PM

    In the first part of this series, we saw how effective a platform as a service (PaaS) such as Red Hat OpenShift is for developing IoT edge applications and distributing them to remote sites, thanks to containers and Red Hat Ansible Automation technologies.

Usually, we think about IoT applications as something specially designed for low-power devices with limited capabilities. IoT devices might use different CPU architectures or platforms. For this reason, we tend to use completely different technologies for IoT application development than for services that run in a data center.

In part two, we explore some techniques that allow you to build and test containers for alternate architectures, such as ARM64, on an x86_64 host.  The goal we are working towards is to enable you to use the same language, framework, and development tools for code that runs in your datacenter or all the way out to IoT edge devices. In this article, I’ll show building and running an AArch64 container image on an x86_64 host and then building an RPI3 image to run it on physical hardware using Fedora and Podman.

In the previous article, I assumed as a prerequisite that an IoT gateway may be capable of running x86_64 containers but, unfortunately, this is not a common use case due to the various types of IoT gateways available today on the market.

    I discovered a very interesting project named “multiarch” with multiple repositories available on GitHub.

The aim of the project is to create a handy way for using a built-in Linux kernel feature named binfmt_misc, which is explained in Wikipedia as follows: “binfmt_misc is a capability of the Linux kernel which allows arbitrary executable file formats to be recognized and passed to certain user space applications, such as emulators and virtual machines. It is one of a number of binary format handlers in the kernel that are involved in preparing a user-space program to run.”

The concept behind the multiarch project is really simple: imagine launching a privileged container that is able to interact with the container host and uses the binfmt_misc feature to inform the kernel that handlers for other binary formats are available somewhere in the PATH.

    Are you guessing about the handlers? They are just QEMU x86_64 executables capable of running the specific architecture binaries: ARM, MIPS, Power, etc.

    Keep in mind that QEMU is a generic and open source machine emulator and virtualizer, so it’s the right candidate for this job. It is also used for running KVM virtual machines with near-native performance.

    At this point, the only thing you have to do is to place the right QEMU binaries in the containers with a different architecture. Why must they be placed in the containers?

The QEMU executables must be placed in the containers because, when the container engine tries to execute a binary for another architecture, it will trigger the binfmt_misc feature in the kernel, which will then redirect the execution to the binary at the path we specified. Because we’re in the container’s virtual root filesystem, the QEMU executable must reside in the same environment as the binary that was just run.

Activating the feature is really simple. As the multiarch project page states, we only need to run the multiarch/qemu-user-static:register image with the --reset option.

The project page suggests using Docker to apply this action and, of course, the registration will be lost on the next machine restart, but we can set it up as a one-shot systemd service, as sketched below.
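For example, something along the lines of the following unit (an illustrative sketch; the unit name, path, and Docker dependency are assumptions that would need adjusting for your host):

# /etc/systemd/system/qemu-binfmt-register.service (illustrative name and path)
[Unit]
Description=Register QEMU user-static binfmt_misc handlers
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/docker run --rm --privileged multiarch/qemu-user-static:register --reset

[Install]
WantedBy=multi-user.target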

    For this article, we’re just using a Minishift installation. For this reason, due to the lack of persistence of Minishift’s base operating system, we’ll just run the Docker command once we have logged in:

    [alex@lenny ~]$ cdk-minishift ssh
    [docker@minishift ~]$ docker run --rm --privileged multiarch/qemu-user-static:register --reset
    Setting /usr/bin/qemu-alpha-static as binfmt interpreter for alpha
    Setting /usr/bin/qemu-arm-static as binfmt interpreter for arm
    Setting /usr/bin/qemu-armeb-static as binfmt interpreter for armeb
    Setting /usr/bin/qemu-sparc32plus-static as binfmt interpreter for sparc32plus
    Setting /usr/bin/qemu-ppc-static as binfmt interpreter for ppc
    Setting /usr/bin/qemu-ppc64-static as binfmt interpreter for ppc64
    Setting /usr/bin/qemu-ppc64le-static as binfmt interpreter for ppc64le
    Setting /usr/bin/qemu-m68k-static as binfmt interpreter for m68k
    Setting /usr/bin/qemu-mips-static as binfmt interpreter for mips
    Setting /usr/bin/qemu-mipsel-static as binfmt interpreter for mipsel
    Setting /usr/bin/qemu-mipsn32-static as binfmt interpreter for mipsn32
    Setting /usr/bin/qemu-mipsn32el-static as binfmt interpreter for mipsn32el
    Setting /usr/bin/qemu-mips64-static as binfmt interpreter for mips64
    Setting /usr/bin/qemu-mips64el-static as binfmt interpreter for mips64el
    Setting /usr/bin/qemu-sh4-static as binfmt interpreter for sh4
    Setting /usr/bin/qemu-sh4eb-static as binfmt interpreter for sh4eb
    Setting /usr/bin/qemu-s390x-static as binfmt interpreter for s390x
    Setting /usr/bin/qemu-aarch64-static as binfmt interpreter for aarch64
    Setting /usr/bin/qemu-aarch64_be-static as binfmt interpreter for aarch64_be
    Setting /usr/bin/qemu-hppa-static as binfmt interpreter for hppa
    Setting /usr/bin/qemu-riscv32-static as binfmt interpreter for riscv32
    Setting /usr/bin/qemu-riscv64-static as binfmt interpreter for riscv64
    Setting /usr/bin/qemu-xtensa-static as binfmt interpreter for xtensa
    Setting /usr/bin/qemu-xtensaeb-static as binfmt interpreter for xtensaeb
    Setting /usr/bin/qemu-microblaze-static as binfmt interpreter for microblaze
    Setting /usr/bin/qemu-microblazeel-static as binfmt interpreter for microblazeel

    As you saw in the previous command, we just registered for the current x86_64 host a bunch of handlers that will receive specific requests for different architectures’ instructions from the host’s kernel.

    We’re now ready to test a container build project with an architecture other than x86_64.

    For this reason, I prepared a simple test project for building an ARM 64-bit container image and using it with a Raspberry Pi 3: it’s just a web server.

    As you’ll see in the project page, the git repo contains just a Dockerfile:

    FROM multiarch/debian-debootstrap:arm64-stretch-slim
    
    RUN apt-get update
    RUN apt-get install -y apache2
    RUN sed -i 's/80/8080/g' /etc/apache2/ports.conf
    
    EXPOSE 8080
    
    CMD ["/usr/sbin/apache2ctl", "-DFOREGROUND"]

    It starts from a Debian base image with an ARM64 architecture. It updates the APT repos and installs a web server. After that, it replaces the default listening port and then it sets the right init command.

    I bet you’re asking, “Where is the magic?”

    Well, the magic happens thanks to /usr/bin/qemu-aarch64-static, which is available in the container image itself. This binary is an x86_64 binary, different from all the others that are AArch64. The Linux Kernel will forward AArch64 binaries executions to this handler!

    We’re now ready to create the project and the OpenShift resources for handling the container image’s build:

    [alex@lenny ~]$ oc new-project test-arm-project
    Now using project "test-arm-project" on server "https://192.168.42.213:8443".
    
    [alex@lenny ~]$ oc new-app https://github.com/alezzandro/test-arm-container
    --> Found Docker image 4734ae4 (3 days old) from Docker Hub for "multiarch/debian-debootstrap:arm64-stretch-slim"
    
        * An image stream tag will be created as "debian-debootstrap:arm64-stretch-slim" that will track the source image
        * A Docker build using source code from https://github.com/alezzandro/test-arm-container will be created
          * The resulting image will be pushed to image stream tag "test-arm-container:latest"
          * Every time "debian-debootstrap:arm64-stretch-slim" changes a new build will be triggered
        * This image will be deployed in deployment config "test-arm-container"
        * Port 8080/tcp will be load balanced by service "test-arm-container"
          * Other containers can access this service through the hostname "test-arm-container"
        * WARNING: Image "multiarch/debian-debootstrap:arm64-stretch-slim" runs as the 'root' user which may not be permitted by your cluster administrator
    
    --> Creating resources ...
        imagestream.image.openshift.io "debian-debootstrap" created
        imagestream.image.openshift.io "test-arm-container" created
        buildconfig.build.openshift.io "test-arm-container" created
        deploymentconfig.apps.openshift.io "test-arm-container" created
        service "test-arm-container" created
    --> Success
        Build scheduled, use 'oc logs -f bc/test-arm-container' to track its progress.
        Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
         'oc expose svc/test-arm-container' 
        Run 'oc status' to view your app.

    Looking over the OpenShift web interface, we can see that the container image was successfully built and it’s also running:

    The running container image

    We can even access the running container and try to execute some commands:

    Executing some commands

    As you can see from the previous command, we attached to the running container and we verified that every command is proxied through /usr/bin/qemu-aarch64-static.

    We also checked that the binaries are truly the AArch64 architecture.

We can now try the just-built container image on a Raspberry Pi 3. I chose Fedora ARM as the base operating system.

    First, I set up one SD card with an easy-to-use tool after I downloaded the image from Fedora’s website:

    [alex@lenny Downloads]$ sudo arm-image-installer --addkey=/home/alex/.ssh/id_rsa.pub --image=/home/alex/Downloads/Fedora-Minimal-29-1.2.aarch64.raw.xz --relabel --resizefs --norootpass --target=rpi3 --media=/dev/sda
    [sudo] password for alex: 
     ***********************************************************
     ** WARNING: You have requested the image be written to sda.
     ** /dev/sda is usually the root filesystem of the host. 
     ***********************************************************
     ** Do you wish to continue? (type 'yes' to continue)
     ***********************************************************
     = Continue? yes
    
    =====================================================
    = Selected Image:                                 
    = /home/alex/Downloads/Fedora-Minimal-29-1.2.aarch64.raw.xz
    = Selected Media : /dev/sda
    = U-Boot Target : rpi3
    = SELinux relabel will be completed on first boot.
    = Root Password will be removed.
    = Root partition will be resized
    = SSH Public Key /home/alex/.ssh/id_rsa.pub will be added.
    =====================================================
    ...
    = Raspberry Pi 3 Uboot is already in place, no changes needed.
    = Removing the root password.
    = Adding SSH key to authorized keys.
    = Touch /.autorelabel on rootfs.
    
    = Installation Complete! Insert into the rpi3 and boot.

While our brand new Raspberry Pi 3 operating system boots, we can export the Docker image from the running Minishift virtual machine.

    We’ll connect directly to the VM and run a docker save command. We can use this trick because we’re in a demo environment; in a real use-case scenario, we may export the internal OpenShift registry to let external devices connect.

    [alex@lenny ~]$ cdk-minishift ssh
    Last login: Thu Jan 24 19:15:36 2019 from 192.168.42.1
    
    [docker@minishift ~]$ docker save -o test-arm-container.tar 172.30.1.1:5000/test-arm-project/test-arm-container:latest
    
[alex@lenny Downloads]$ scp -i ~/.minishift/machines/minishift/id_rsa docker@`cdk-minishift ip`:/home/docker/test-arm-container.tar .

    We can then push the image to the Raspberry Pi and install Podman for running it!

    Don’t know what Podman is? Read more about Podman, which is in Red Hat Enterprise Linux.

    [alex@lenny Downloads]$ scp test-arm-container.tar root@192.168.1.52:/root/test-arm-container.tar                              100% 214MB 450.1KB/s 08:07 
    
    [alex@lenny Downloads]$ ssh root@192.168.1.52
    
    [root@localhost ~]# cat /etc/fedora-release 
    Fedora release 29 (Twenty Nine)
    
    [root@localhost ~]# uname -a
    Linux localhost.localdomain 4.19.15-300.fc29.aarch64 #1 SMP Mon Jan 14 16:22:13 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux
    
    [root@localhost ~]# dnf install -y podman
    

    Then we load the container image and finally run it:

    [root@localhost ~]# podman load -i test-arm-container.tar 
    Getting image source signatures
    Copying blob 36a049148cc6: 104.31 MiB / 104.31 MiB [=====================] 1m34s
    Copying blob d56ce20a3f9c: 15.20 MiB / 15.20 MiB [=======================] 1m34s
    Copying blob cf01d69beeaf: 94.53 MiB / 94.53 MiB [=======================] 1m34s
    Copying blob 115c696bd46d: 3.00 KiB / 3.00 KiB [=========================] 1m34s
    Copying config 478a2361357e: 5.46 KiB / 5.46 KiB [==========================] 0s
    Writing manifest to image destination
    Storing signatures
    Loaded image(s): 172.30.1.1:5000/arm-project/test-arm-container:latest
    
    [root@localhost ~]# podman images
    REPOSITORY                                       TAG IMAGE ID CREATED SIZE
    172.30.1.1:5000/arm-project/test-arm-container   latest 478a2361357e 4 hours ago 224 MB
    
    [root@localhost ~]# podman run -d 172.30.1.1:5000/arm-project/test-arm-container:latest
    cdbf0ac43a2dd01afd73220d5756060665df0b72a43bd66bf865d1c6149f325f
    
    [root@localhost ~]# podman ps
    CONTAINER ID  IMAGE                                         COMMAND CREATED STATUS PORTS  NAMES
    cdbf0ac43a2d  172.30.1.1:5000/arm-project/test-arm-container:latest  /usr/sbin/apache2... 6 seconds ago Up 5 seconds ago       pedantic_agnesi
    
    

    We can then test the web server:

    [root@localhost ~]# podman inspect cdbf0ac43a2d | grep -i address
                "LinkLocalIPv6Address": "",
                "SecondaryIPAddresses": null,
                "SecondaryIPv6Addresses": null,
                "GlobalIPv6Address": "",
                "IPAddress": "10.88.0.7",
                "MacAddress": "1a:c8:30:a4:be:2f"
    
    [root@localhost ~]# curl 10.88.0.7:8080 2>/dev/null | head
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <title>Apache2 Debian Default Page: It works</title>
        <style type="text/css" media="screen">
      * {
        margin: 0px 0px 0px 0px;
        padding: 0px 0px 0px 0px;

    Finally, let’s also check the content and compare the binaries in the container with ones in the Raspberry Pi:

    [root@localhost ~]# podman run -ti 172.30.1.1:5000/arm-project/test-arm-container:latest /bin/bash
    
    root@8e39d1c28259:/# 
    root@8e39d1c28259:/# id
    uid=0(root) gid=0(root) groups=0(root)
    
    root@8e39d1c28259:/# uname -a
    Linux 8e39d1c28259 4.19.15-300.fc29.aarch64 #1 SMP Mon Jan 14 16:22:13 UTC 2019 aarch64 GNU/Linux
    
    root@8e39d1c28259:/# cat /etc/debian_version 
    9.6
    
    root@8e39d1c28259:/# file /bin/bash
    
    /bin/bash: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=29b2624b1e147904a979d91daebc60c27ac08dc6, stripped
    
    root@8e39d1c28259:/# exit
    
    [root@localhost ~]# file /bin/bash
    
    /bin/bash: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=25dee020ab8f6525bd244fb3f9082a47e940b1e6, stripped, too many notes (256)

    We just saw that we can run this container with Podman. So why not apply some techniques we saw in past articles about Podman and its integration with systemd? 🙂

    Read more about Podman and managing containerized system services.

    Additional resources

    That’s all. I hope you enjoyed this IoT solution!

    About Alessandro

    Alessandro Arrichiello

    Alessandro Arrichiello is a Solution Architect for Red Hat Inc. He has a passion for GNU/Linux systems, which began at age 14 and continues today. He has worked with tools for automating enterprise IT: configuration management and continuous integration through virtual platforms. He’s now working on distributed cloud environments involving PaaS (OpenShift), IaaS (OpenStack) and Processes Management (CloudForms), Containers building, instances creation, HA services management, and workflow builds.


    The post IoT edge development and deployment with containers through OpenShift: Part 2 appeared first on RHD Blog.

    Fedora 29 : The Piskel application.

    Posted by mythcat on February 05, 2019 11:55 AM
This application is a tool for drawing and creating sprites.
You can test it online or use it locally by downloading it to your operating system.
The development team introduces it like this:
Create animations in your browser. Try an example, use Google sign in to access your gallery or simply create a new sprite.
I downloaded it on the Fedora 29 distro and it works well.
This is the result:

    Fedora 30 – Supplemental Wallpaper

    Posted by Sirko Kemter on February 05, 2019 11:47 AM

The submission phase for the Fedora 30 Supplemental Wallpapers ended a few days ago; the voting is now open and runs until the 25th. You can make 16 choices, and you are allowed to vote if you have CLA + 1 group membership. This time we do not have too many choices, just a bit over 50. Here are my 3 favorites:

     

The voting process happens inside Nuancier, so you can go and vote now. Don't forget to claim the badge; it is not awarded automatically.