Fedora People

Untitled Post

Posted by Carlos Jara Alva on July 18, 2019 07:44 PM
FLISOL 2019
In April, FLISOL was held in various parts of Latin America. In Peru, it took place at only a few venues:
In Lima, the only participating venue was the Municipality of Pueblo Libre, where the Fedora Peru community was invited: https://www.facebook.com/proyectofedoraperu/


Thank you very much for the support.

Contribute at the Fedora Test Week for kernel 5.2

Posted by Fedora Community Blog on July 18, 2019 05:22 PM
Fedora 30 Kernel 5.2 Test Day

The kernel team is working on final integration for kernel 5.2. This version was just recently released and will arrive soon in Fedora. It includes many security fixes. As a result, the Fedora kernel and QA teams have organized a test week from Monday, July 22, 2019 through Monday, July 29, 2019. Refer to the wiki page for links to the test images you'll need to participate. Read below for details.

How does a test week work?

A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

The post Contribute at the Fedora Test Week for kernel 5.2 appeared first on Fedora Community Blog.

Text install's Revenge!

Posted by Luigi Votta on July 18, 2019 03:16 PM
To install Fedora 30 in text mode,
selecting what you need and prefer, try this!

Download a net-install iso
When GRUB loads, append inst.text to the end of the kernel line.
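For example, the kernel line in the net-install GRUB menu looks roughly like the sketch below; the inst.stage2 label and paths vary by image (and the command may be linuxefi on UEFI systems), inst.text is the only part you add:

linux /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=<image-label> quiet inst.text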

Building blocks of syslog-ng

Posted by Peter Czanik on July 18, 2019 09:20 AM

Recently I gave a syslog-ng introductory workshop at the Pass the SALT conference in Lille, France. I got a lot of positive feedback, so I decided to turn all that feedback into a blog post. Naturally, I shortened and simplified it, but still managed to get enough material for multiple blog posts.

This one gives you an overview of syslog-ng, its major features and an introduction to its configuration.

What is logging & syslog-ng?

Let’s start from the very beginning. Logging is the recording of events on a computer. And what is syslog-ng? It’s an enhanced logging daemon with a focus on portability and high-performance central log collection. It was originally developed in C.

Why is central logging so important? There are three major reasons:

  • Ease of use: you have only one location to check for your log messages instead of many.

  • Availability: logs are available even when the sender machine is unreachable.

  • Security: logs are often deleted or modified once a computer is breached. Logs collected on the central syslog-ng server, on the other hand, can be used to reconstruct how the machine was compromised.

There are four major roles of syslog-ng: collecting, processing, filtering, and storing (or forwarding) log messages.

The first role is collecting, where syslog-ng can collect system and application logs together. These two can provide useful contextual information for either side. Many platform-specific log sources are supported (for example, collecting system logs from /dev/log, the Systemd Journal or Sun Streams). As a central log collector, syslog-ng supports both the legacy/BSD (RFC 3164) and the new (RFC 5424) syslog protocols over UDP, TCP and encrypted connections. It can also collect logs or any kind of text data through files, sockets, pipes and even application output. The Python source serves as a Jolly Joker: you can implement an HTTP server (similar to Splunk HEC), fetch logs from Amazon Cloudwatch, and implement a Kafka source, to mention only a few possibilities.
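For illustration, here is a minimal sketch of a few typical source definitions (the names are arbitrary):

source s_local { system(); internal(); };
source s_legacy { network(transport("udp") port(514)); };   # legacy/BSD (RFC 3164) syslog
source s_new { syslog(transport("tcp") port(601)); };       # new (RFC 5424) syslog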

The second role is processing, which covers many different possibilities. For example, syslog-ng can classify, normalize, and structure logs with built-in parsers. It can rewrite log messages (we aren't talking about falsifying log messages here, but anonymization as required by compliance regulations, for example). It can also enrich log messages using GeoIP, or create additional name-value pairs based on message content. You can use templates to reformat log messages, as required by a specific destination (for example, you can use the JSON template function with Elasticsearch). Using the Python parser, you can do any of the above, and even filtering.
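As a small sketch of the rewrite capability, the following masks a password embedded in the message text (the pattern is only an example):

rewrite r_mask_pw {
    subst("password=[^ ]*", "password=***", value("MESSAGE"), flags("global"));
};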

The third role is filtering, which has two main uses. The first one is discarding surplus log messages, like debug level messages, for example. The second one is message routing: making sure that a given set of logs reaches the right destination (for example, authentication-related messages reach the SIEM). There are many possibilities, as message routing can be based on message parameters or content, using many different filtering functions. Best of all: any of these can be combined using Boolean operators.
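A sketch of the message routing use case (the source and destination names are placeholders assumed to be defined elsewhere in the configuration):

filter f_auth { facility(auth, authpriv); };
log { source(s_sys); filter(f_auth); destination(d_siem); };   # authentication logs go to the SIEM
log { source(s_sys); destination(d_all); };                    # everything goes to the general store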

The fourth role is storage. Traditionally, syslog-ng stored log messages to flat files, or forwarded them to a central syslog-ng server using one of the syslog protocols and stored them there to flat files. Over the years, an SQL destination, then different big-data destinations (Hadoop, Kafka, Elasticsearch), message queuing (like AMQP or STOMP), different logging-as-a-service providers, and many other features were added. Nowadays you can also write your own destinations in Python or Java.
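For example, a classic file destination next to one of the newer drivers (the Elasticsearch URL, index and type are placeholder values; elasticsearch-http() comes from the SCL discussed later):

destination d_flat { file("/var/log/messages"); };
destination d_elastic {
    elasticsearch-http(url("http://localhost:9200/_bulk") index("syslog-ng") type("_doc"));
};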

Log messages

If you take a look at your /var/log directory, where log messages are normally stored on a Linux/UNIX system, you will see that most log messages have the following format: date + hostname + text. For example, observe this ssh login message:

Mar 11 13:37:56 linux-6965 sshd[4547]: Accepted keyboard-interactive/pam for root from 127.0.0.1 port 46048 ssh2

As you can see, the text part is an almost complete English sentence with some variable parts in it. It is pretty easy to read for a human. However, as each application produces different messages, it is quite difficult to create reports and alerts based on these messages.

There is a solution for this problem: structured logging. Instead of free-form text messages, in this case events are described using name-value pairs. For example, an ssh login can be described with the following name-value pairs:

app=sshd user=root source_ip=192.168.123.45

The good news is that syslog-ng was built around name-value pairs right from the beginning, as both advanced filtering and templates required syslog header data to be parsed and available as name-value pairs. Parsers in syslog-ng can turn unstructured, and even some structured data (CSV, JSON, etc.) into name-value pairs as well.

Configuration

Configuring syslog-ng is simple and logical, even if it does not look so at first sight. My initial advice: Don’t panic! The syslog-ng configuration has a pipeline model. There are many different building blocks (like sources, destinations, filters and others), and all of these can be connected in pipelines using log statements.

By default, syslog-ng usually looks for its configuration in /etc/syslog-ng/syslog-ng.conf (configurable at compile time). Here you can find a very simple syslog-ng configuration showing you all the mandatory (and even some optional) building blocks:

@version:3.21
@include "scl.conf"

# this is a comment :)

options {flush_lines (0); keep_hostname (yes);};

source s_sys { system(); internal();};
destination d_mesg { file("/var/log/messages"); };
filter f_default { level(info..emerg) and not (facility(mail)); };

log { source(s_sys); filter(f_default); destination(d_mesg); };

The configuration always starts with a version number declaration. It helps syslog-ng to figure out what your original intention with the configuration was and also warns you if there was an important change in syslog-ng internals.

You can include other configuration files from the main syslog-ng configuration. The one included here is an important one: it includes the syslog-ng configuration library. It will be discussed later in depth. For now, it is enough to know that many syslog-ng features are actually defined there, including the Elasticsearch destination.

You can place comments in your syslog-ng configuration, which helps structure the configuration and remind you about your decisions and workarounds when you need to modify the configuration later.

The use of global options helps you make your configuration shorter and easier to maintain. Most settings here can be overridden later in the configuration. For example, flush_lines() defines how many messages are sent to a destination at the same time. A larger value adds latency, but also improves performance and lowers resource usage. Zero is a safe value for most logs on a low-traffic server, as it writes all logs to disk as soon as they arrive. On the other hand, if you have a busy mail server on that host, you might want to override this value for the mail logs only. Then later, when your server becomes busy, you can easily raise the value for all of your logs.
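For example, a per-destination override of the global setting might look like this (a sketch):

destination d_mail { file("/var/log/maillog" flush_lines(100)); };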

The next three lines are the actual building blocks. Two of these are mandatory: the source and the destination (as you need to collect logs and store them somewhere). The filter is optional but useful and highly recommended.

  • A source is a named collection of source drivers. In this case, its name is s_sys, and it is using the system() and internal() sources. The first one collects from local, platform-specific log sources, while the second one collects messages generated by syslog-ng.

  • A destination is a named collection of destination drivers. In this case, its name is d_mesg, and it stores messages in a flat file called /var/log/messages.

  • A filter is a named collection of filter functions. You can have a single filter function or a collection of filter functions connected using Boolean operators. Here we have a function for discarding debug level messages and another one for finding facility mail.

There are a few more building blocks (parsers, rewrites and others) not shown here. They will be introduced later.

Finally, there is a log statement connecting all these building blocks. Here you refer to the different building blocks by their names. Naturally, in a real configuration you will have several of these building blocks to refer to, not only one of each. Unless you are machine generating a complex configuration, you do not have to count the number of items in your configuration carefully.

SCL: syslog-ng configuration library

The syslog-ng configuration library (SCL) contains a number of ready-to-use configuration snippets. From the user’s point of view, they are no different from any other syslog-ng drivers. For example, the new elasticsearch-http() destination driver also originates from here.

Application Adapters are a set of parsers included in SCL that automatically try to parse any log messages arriving through the system() source. These parsers turn incoming log messages into a set of name-value pairs. The names for these name-value pairs, containing extra information, start with a dot to differentiate them from name-value pairs created by the user. For example, names for values parsed from sudo logs start with the .sudo. prefix.

This also means that unless you really know what you are doing, you should include the syslog-ng configuration library from your syslog-ng.conf. If you do not do that, many of the documented features of syslog-ng will stop working for you.

As you have already seen in the sample configuration, you can enable SCL with the following line:

@include "scl.conf"

Networking

One of the most important features of syslog-ng is central log collection. You can use either the legacy or the new syslog protocol to collect logs centrally over the network. The machines sending the logs are called clients, while those on the receiving end are called servers. There is a lesser-known but at least equally, if not even more, important variant as well: relays. On larger networks (or even smaller networks with multiple locations), relays are placed between clients and servers. This makes your logging infrastructure hierarchical, with one or more levels of relays.

Why use relays? There are three major reasons (a minimal relay configuration sketch follows the list):

  • you can collect UDP logs as close to the source as possible

  • you can distribute processing of log messages

  • you can secure your infrastructure: have a relay for each department or physical location, so logs can be sent from clients in real-time even if the central server is inaccessible
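A minimal relay sketch (the hostname and ports are placeholders): collect legacy UDP syslog locally and forward everything to the central server using the new syslog protocol over TCP.

source s_udp_local { network(transport("udp") port(514)); };
destination d_central { syslog("logserver.example.com" transport("tcp") port(601)); };
log { source(s_udp_local); destination(d_central); };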

Macros & templates

As a syslog message arrives, syslog-ng automatically parses it. Most macros or name-value pairs are variables defined by syslog-ng based on the results of parsing. There are some macros that do not come from the parsing directly, for example the date and time a message was received (as opposed to the value stored in the message), or from enrichment, like GeoIP.

By default, messages are parsed as legacy syslog, but by using flags you can change this to new syslog (flags(syslog-protocol)) or you can even disable parsing completely (flags(no-parse)). In the latter case the whole incoming message is stored into the MESSAGE macro.
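For example (a sketch; the port is arbitrary):

source s_raw { network(transport("tcp") port(514) flags(no-parse)); };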

Name-value pairs or macros have many uses. One of these uses is in templates. By using templates you can change the format in which messages are stored (for example, use ISODATE instead of the traditional date format):

template t_syslog {
    template("$ISODATE $HOST $MSG\n");
};
destination d_syslog {
    file("/var/log/syslog" template(t_syslog));
};

Another use is making file names variable. This way you can store logs coming from different hosts in different files, or implement log rotation by storing files in directories and files based on the current year, month and day. An external script can then delete files that are older than compliance or other requirements oblige you to keep.

destination d_messages {
    file("/var/log/$R_YEAR/$R_MONTH/$HOST_$R_DAY.log" create_dirs(yes));
};

Filters & if/else statements

By using filters you can fine-tune which messages can reach a given destination. You can combine multiple filter functions using Boolean operators in a single filter, and you can use multiple filters in a log path. Filters are declared similarly to any other building blocks: you have to name them and then use one or more filter functions combined with Boolean operators inside the filter. Here is the relevant part of the example configuration from above:

filter f_default { level(info..emerg) and not (facility(mail)); };

The level() filter function lets all messages through, except for those from debug level. The second one selects all messages with facility mail. The two filter functions are connected with a not operator, so in the end all debug level and all facility mail messages are discarded by this filter.

There are many more filters. The match() filter operates on the message content and there are many more that operate on different values parsed from the message headers. From the security point of view, the inlist() filter might be interesting. This filter can compare a field with a list of values (for example, it can compare IP addresses extracted from firewall logs with a list of malware command & control IP addresses).
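A sketch of such a filter, assuming the list file exists and that the source IP address has already been parsed into a name-value pair (recent versions spell the keyword in-list() in the configuration):

filter f_c2 { in-list("/etc/syslog-ng/c2-servers.list", value("kv.SRC")); };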

Conditional expressions in the log path make using the results of filtering easier. What now takes a simple if/else statement used to require a complex configuration. You can use conditional expressions with similar blocks within the log path:

if (filter()) { do this }; else { do that };

It can be used, for example, to apply different parsers to different log messages or to save a subset of log messages to a separate destination.

Below you can find a simplified example, showing the log statement only:

log {
    source(s_sys);
    filter(f_sudo);
    if (match("czanik" value(".sudo.SUBJECT"))) {
        destination { file("/var/log/sudo_filtered"); };
    };
    destination(d_sudoall);
};

The log statement in the example above collects logs from a source called s_sys. The next filter, referred to from the log path, keeps sudo logs only. Recent versions of syslog-ng automatically parse sudo messages. The if statement here uses the results of parsing, and writes any log messages where the user name (stored in the .sudo.SUBJECT name-value pair) equals my user name to a separate file. Finally, all sudo logs are stored in a log file.

Parsing

Parsers of syslog-ng can structure, classify and normalize log messages. There are multiple advantages of parsing:

  • instead of the whole message, only the relevant parts are stored

  • more precise filtering (alerting)

  • more precise searches in (no)SQL databases

By default, syslog-ng treats the message part of logs as a string, even if the message part contains structured data. You have to parse the message parts in order to turn them into name-value pairs. The advantages listed above can only be realized once you have turned the message into name-value pairs by using the parsers of syslog-ng.

One of the earliest parsers of syslog-ng is the PatternDB parser. This parser can extract useful information from unstructured log messages into name-value pairs. It can also add status fields based on the message text and classify messages (like LogCheck). The downside of PatternDB is that you need to know your log messages in advance and describe them in an XML database. It takes time and effort, and while some example log messages do exist, for your most important log messages you most likely need to create the XML yourself.

For example, in case of an ssh login failure the name-value pairs created by PatternDB could be:

  • parsed directly from the message: app=sshd, user=root, source_ip=192.168.123.45

  • added, based on the message content: action=login, status=failure

  • classified as “violation” in the end.

JSON has become very popular recently, even for log messages. The JSON parser of syslog-ng can turn JSON logs into name-value pairs.
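For example (a sketch; the prefix is optional and arbitrary):

parser p_json { json-parser(prefix("json.")); };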

The CSV parser can turn any kind of columnar log message into name-value pairs. A popular example is the Apache web server access log.
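A sketch of a CSV parser (the column names and delimiter are examples only):

parser p_csv {
    csv-parser(columns("VHOST", "CLIENT_IP", "STATUS") delimiters(" "));
};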

If you are into IT security, you will most likely use the key=value parser a lot, as iptables and most firewalls store their log messages in this format.

There are many more lesser known parsers in syslog-ng as well. You can parse XML logs, logs from the Linux Audit subsystem, and even custom date formats, by using templates.

SCL contains many parsers that combine multiple parsers into a single one to handle more complex log messages. For example, there are parsers for Apache access logs that also parse the date from the logs, and parsers that can interpret most Cisco logs, which only resemble syslog messages.

Enriching messages

You can create additional name-value pairs based on the message content. PatternDB, already discussed among the parsers, can not only parse messages, but can also create name-value pairs based on the message content.

The GeoIP parser can help you find the geo-location of IP addresses. The new geoip2() parser can show you more than just the country or longitude/latitude information: it can display the continent, the county, and even the city as well, in multiple languages. It can help you spot anomalies or display locations on a map.

By using add-contextual-data(), you can enrich log messages from a CSV file. You can add, for example, host role or contact person information, based on the host name. This way you have to spend less time on finding extra information, and it can also help you create more accurate dashboards and alerts.
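A sketch of how this might look, assuming a hypothetical CSV file keyed on the host name:

parser p_metadata {
    add-contextual-data(selector("$HOST")
                        database("/etc/syslog-ng/host-metadata.csv")
                        default-selector("unknown"));
};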

parser p_kv {kv-parser(prefix("kv.")); };

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

source s_tcp { tcp(port(514)); };

destination d_file {
    file("/var/log/fromnet"
         template("$(format-json --scope rfc5424 --scope dot-nv-pairs --rekey .* --shift 1 --scope nv-pairs --exclude DATE --key ISODATE @timestamp=${ISODATE})\n\n"));
};

log {
  source(s_tcp);
  parser(p_kv);
  parser(p_geoip2);
  destination(d_file);
};

The configuration above collects log messages from a firewall using the legacy syslog protocol on a TCP port. The incoming logs are first parsed with a key=value parser (using a prefix to avoid colliding macro names). The geoip2() parser takes the source IP address as input (stored in kv.SRC) and stores location data under a different prefix. By default, logs written to disk do not include the extracted name-value pairs. This is why logs are written here to a file using the JSON template function, which writes all syslog-related macros and any extracted name-value pairs into the file. Leading dots are removed from the names, and the date is formatted as expected by Elasticsearch. The only difference is that there are two line feeds at the end, to make the file easier to read.


If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter I am available as @Pczanik.


libinput's new thumb detection code

Posted by Peter Hutterer on July 18, 2019 08:40 AM

The average user has approximately one thumb per hand. That thumb comes in handy for a number of touchpad interactions. For example, moving the cursor with the index finger and clicking a button with the thumb. On so-called Clickpads we don't have separate buttons though. The touchpad itself acts as a button and software decides whether it's a left, right, or middle click by counting fingers and/or finger locations. Hence the need for thumb detection, because you may have two fingers on the touchpad (usually right click) but if those are the index and thumb, then really, it's just a single finger click.

libinput has had some thumb detection since the early days when we were still hand-carving bits with stone tools. But it was quite simplistic, as the old documentation illustrates: two zones on the touchpad, a touch started in the lower zone was always a thumb. Where a touch started in the upper thumb area, a timeout and movement thresholds would decide whether it was a thumb. Internally, the thumb states were, Schrödinger-esque, "NO", "YES", and "MAYBE". On top of that, we also had speed-based thumb detection - where a finger was moving fast enough, a new touch would always default to being a thumb. On the grounds that you have no business dropping fingers in the middle of a fast interaction. Such a simplistic approach worked well enough for a bunch of use-cases but failed gloriously in other cases.

Thanks to Matt Mayfield's work, we now have a much more sophisticated thumb detection algorithm. The speed detection is still there but it better accounts for pinch gestures and two-finger scrolling. The exclusion zones are still there but less final about the state of the touch: a thumb can escape that "jail" and contribute to pointer motion where necessary. The new documentation has a bit of a general overview. A requirement for well-working thumb detection however is that your device has the required (device-specific) thresholds set up. So go over to the debugging thumb thresholds documentation and start figuring out your device's thresholds.

As usual, if you notice any issues with the new code please let us know, ideally before the 1.14 release.

PHP version 7.2.21RC1 and 7.3.8RC1

Posted by Remi Collet on July 18, 2019 08:25 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.3.8RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30 or remi-php73-test repository for Fedora 28-29 and Enterprise Linux.

RPM of PHP version 7.2.21RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or remi-php72-test repository for Enterprise Linux.


PHP version 7.1 is now in security mode only, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module enable php:remi-7.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module enable php:remi-7.2
dnf --enablerepo=remi-modular-test update php\*

Notice: version 7.3.8RC1 is also in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.6.

Packages of 7.4.0alpha3 are also available as Software Collections.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php72, php73)

Base packages (php)

QElectroTech version 0.70

Posted by Remi Collet on July 18, 2019 06:43 AM

RPM of QElectroTech version 0.70, an application to design electric diagrams, are available in the remi repository for Fedora and Enterprise Linux 7.

A bit more than a year after the version 0.60 release, the project has just released a new major version of its electric diagram editor.

Official web site: http://qelectrotech.org/

Installation with YUM:

yum --enablerepo=remi install qelectrotech

RPM (version 0.70-1) are available for Fedora ≥ 28 and Enterprise Linux 7 (RHEL, CentOS, ...)

Updates are also on their way to the official repositories.

Notice: a Copr / Qelectrotech repository also exists, which provides "development" versions (0.7-dev for now).

Compiling ARM stuff without an ARM board / Build PyTorch for the Raspberry Pi

Posted by nmilosev on July 17, 2019 08:49 PM

compiling.png

I am in the process of building a self-driving RC car. It’s a fun process full of discovery (I hate it already). Once it is finished I hope to write a longer article here about what I learned so stay tuned!

While the electronics stuff was difficult for me (fingers still burnt from soldering) I hoped that the computer vision stuff would be easier. Right? Right? Well no.

Neural network inference on small devices #

To be clear, I didn't expect to train my CNN on the Raspberry Pi that I have (it's a revision 2, with an added USB WiFi dongle and a USB webcam), but I wanted to do some inference on a model that I can train on my other computers.

I love using PyTorch and I use it for all my projects/work/research. Simply put it’s fantastic software.

Problem #1 - PyTorch doesn’t have official ARMv7 or ARMv8 builds. #

While you can get PyTorch if you have NVIDIA Jetson hardware, there are no builds for other generic boards. Insert sad emoji.

Problem #2 - ONNX, no real options #

I had the idea to export my trained model to ONNX (Open Neural Network eXchange format), but then what?

There are two projects:

  1. Microsoft’s ONNX Runtime - doesn’t support RPi2
  2. Snips Tract - Seems super-cool but Rust underneath (nothing against Rust, just not familiar)

So the only solution was: Build PyTorch from source.

“When your build takes two days you have time to think about life” - Anonymous programmer 2019. #

The PyTorch build process is fantastically simple. You get the code and run a single command. It’s robust and I used it many times before. So I jumped right in, it can’t take that long, yeah? NOP.

On my Raspberry Pi 2, with a decent SD card (Kingston UHS1 16GB), the build took 36 and a bit hours. Yes, you read that correctly. Not 3.6 hours. Thirty-six hours. During these 36 hours I had a lot of down time, so I wondered how to do it quicker.

Option 1 - Cross compilation #

Cross compilation (or witchcraft in software development circles) is a process where you build software for one architecture on another architecture. So here I wanted to build for ARM on a standard x86_64 machine. It's complicated and difficult, always. Even though it was my first thought, I then discovered in the PyTorch GitHub issues that it is not supported.

Option 2 - What about emulation #

This seems reasonable. You emulate a generic ARM or ARMv8 board and build on it. QEMU/libvirt can emulate ARM just fine and there are clear instructions on how to achieve it. For example, the Fedora Wiki (I am using Fedora 30 both on the RPi and my build machine) has a short guide on how to do it. Here is the link.

I tried this, and to be fair it worked fine. But it was slow. Almost unusably slow.

Option 3 - Witchcraft, sort of #

Remember cross compilation? I ran into an article which explains this weird setup for building ARM software. It is amazing. Basically, there is a qemu-user package that allows you to chroot into a rootfs of a different architecture with very little performance loss (!!!). Pair this with DNF's ability to create a rootfs of any architecture, and you get something immensely powerful. Not just for building Python packages, but for building anything for ARM or ARMv8 (aarch64 as it is called by DNF).

But then I read the last line. This was just a proposal.

So I went down the rabbit hole and followed the bug reports. All of them seemed closed. Could this feature work already? The answer was: YES!

Building PyTorch for the Raspberry Pi boards #

Once I discovered qemu-user chroot thingy, everything clicked.

So here we go, this is how to do it.

We need qemu and qemu-user packages. Virt manager is optional but nice to have.

sudo dnf install qemu-system-arm qemu-user-static virt-manager

We now need the rootfs, which is a one-liner:

sudo dnf install --releasever=30 --installroot=/tmp/F30ARM --forcearch=armv7hl --repo=fedora --repo=updates systemd passwd dnf fedora-release vim-minimal openblas-devel blas-devel m4 cmake python3-Cython python3-devel python3-yaml python3-setuptools python3-numpy python3-cffi python3-wheel gcc-c++ tar gcc git make tmux -y

This will install an ARM rootfs to your /tmp directory along with everything you need to build PyTorch. Yes, it is that easy.

Let’s chroot

sudo chroot /tmp/F30ARM

Welcome to your “ARM board”, verify your kernel arch:

bash-5.0# uname -a
Linux toshiba-x70-a 5.1.12-300.fc30.x86_64 #1 SMP Wed Jun 19 15:19:49 UTC 2019 armv7l armv7l armv7l GNU/Linux

So cool, isn't it? Some things are broken, but they are easy to fix: mainly the network, and DNF wrongly detecting your arch.

# Fix for 1691430
sed -i "s/'armv7hnl', 'armv8hl'/'armv7hnl', 'armv7hcnl', 'armv8hl'/" /usr/lib/python3.7/site-packages/dnf/rpm/__init__.py
alias dnf='dnf --releasever=30 --forcearch=armv7hl --repo=fedora --repo=updates'

# Fixes for default python and network
alias python=python3
echo 'nameserver 8.8.8.8' > /etc/resolv.conf

Your configuration is now complete and you have a working emulated ARM board.

Get PyTorch source:

git clone https://github.com/pytorch/pytorch --recursive
cd pytorch
git checkout v1.1.0 # optional, you can build master if you are brave
git submodule update --init --recursive

Since we are building for a Raspberry Pi we want to disable CUDA, MKL etc.

export NO_CUDA=1
export NO_DISTRIBUTED=1
export NO_MKLDNN=1 
export NO_NNPACK=1
export NO_QNNPACK=1

All ready, build!

python setup.py bdist_wheel
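Once the build finishes, the wheel lands in the dist/ directory. A sketch of getting it onto the Pi (the hostname, user and exact file name are placeholders):

scp dist/torch-*.whl user@raspberrypi:
pip3 install --user torch-*.whl   # run this on the Pi itself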

Performance #

The RPi2 took 36+ hours. This? Under two. My laptop isn’t that new and I guess you can do it even faster with a faster CPU.

Conclusion #

Building for ARM shouldn’t be done on a board. There are probably some exceptions to the rule, but you really should consider the way explained here. It’s faster, reproducible and easy. Fedora works remarkably well for this (as for all other things, hehe) both on the device and on the build system.

Let me know how it goes for you.

Oh, and if you just stumbled on this page on Google wanting a wheel/.whl of PyTorch for your RPi2, here you go. To build for RPi3 and ARMv8 just replace every armv7hl in this post with aarch64 and you should be fine. :)

Image credit: https://xkcd.com/303/

Move a Linux running process to a screen shell session

Posted by Hernan Vivani on July 17, 2019 06:39 PM

Use case:

  • You just started a process (e.g. a compile, a copy, etc.).
  • You noticed it will take much longer than expected to finish.
  • You cannot abort it, or risk the process getting aborted when the current shell session ends.
  • It would be ideal to have this process inside 'screen' so it keeps running in the background.

We can move it to a screen session with the following steps (see the consolidated example after the list):

  1. Suspend the process
    1. press Ctrl+Z
  2. Resume the process in the background
    1. bg
  3. Disown the process
    1. disown %1
  4. Launch a screen session
    1. screen
  5. Find the PID of the process
    1. pgrep myappname
  6. Use reptyr to take over the process
    1. reptyr 1234
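Consolidated, the whole sequence looks like this (1234 and myappname are the placeholders used in the steps above):

Ctrl+Z            # 1. suspend the running process
bg                # 2. resume it in the background
disown %1         # 3. detach it from the current shell
screen            # 4. start a screen session
pgrep myappname   # 5. find the PID (say it prints 1234)
reptyr 1234       # 6. take over the process inside screen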


Note: at the moment of writing this, reptyr is not available in any Fedora/Red Hat repo. We'll need to compile it:

$ git clone https://github.com/nelhage/reptyr.git
$ cd reptyr/
$ make
$ sudo make install


Bond WiFi and Ethernet for easier networking mobility

Posted by Fedora Magazine on July 17, 2019 08:00 AM

Sometimes one network interface isn’t enough. Network bonding allows multiple network connections to act together with a single logical interface. You might do this because you want more bandwidth than a single connection can handle. Or maybe you want to switch back and forth between your wired and wireless networks without losing your network connection.

The latter applies to me. One of the benefits of working from home is that when the weather is nice, it’s enjoyable to work from a sunny deck instead of inside. But every time I did that, I lost my network connections. IRC, SSH, VPN — everything goes away, at least for a moment while some clients reconnect. This article describes how I set up network bonding on my Fedora 30 laptop to seamlessly move from the wired connection on my laptop dock to a WiFi connection.

In Linux, interface bonding is handled by the bonding kernel module. Fedora does not ship with this enabled by default, but it is included in the kernel-core package. This means that enabling interface bonding is only a command away:

sudo modprobe bonding

Note that this will only have effect until you reboot. To permanently enable interface bonding, create a file called bonding.conf in the /etc/modules-load.d directory that contains only the word “bonding”.
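For example, one way to create that file:

echo "bonding" | sudo tee /etc/modules-load.d/bonding.conf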

Now that you have bonding enabled, it’s time to create the bonded interface. First, you must get the names of the interfaces you want to bond. To list the available interfaces, run:

sudo nmcli device status

You will see output that looks like this:

DEVICE          TYPE      STATE         CONNECTION         
enp12s0u1       ethernet  connected     Wired connection 1
tun0            tun       connected     tun0               
virbr0          bridge    connected     virbr0             
wlp2s0          wifi      disconnected  --      
p2p-dev-wlp2s0  wifi-p2p  disconnected  --      
enp0s31f6       ethernet  unavailable   --      
lo              loopback  unmanaged     --                 
virbr0-nic      tun       unmanaged     --       

In this case, there are two (wired) Ethernet interfaces available. enp12s0u1 is on a laptop docking station, and you can tell that it’s connected from the STATE column. The other, enp0s31f6, is the built-in port in the laptop. There is also a WiFi connection called wlp2s0. enp12s0u1 and wlp2s0 are the two interfaces we’re interested in here. (Note that it’s not necessary for this exercise to understand how network devices are named, but if you’re interested you can see the systemd.net-naming-scheme man page.)

The first step is to create the bonded interface:

sudo nmcli connection add type bond ifname bond0 con-name bond0

In this example, the bonded interface is named bond0. The “con-name bond0” sets the connection name to bond0; leaving this off would result in a connection named bond-bond0. You can also set the connection name to something more human-friendly, like “Docking station bond” or “Ben”.

The next step is to add the interfaces to the bonded interface:

sudo nmcli connection add type ethernet ifname enp12s0u1 master bond0 con-name bond-ethernet
sudo nmcli connection add type wifi ifname wlp2s0 master bond0 ssid Cotton con-name bond-wifi

As above, the connection name is specified to be more descriptive. Be sure to replace enp12s0u1 and wlp2s0 with the appropriate interface names on your system. For the WiFi interface, use your own network name (SSID) where I use “Cotton”. If your WiFi connection has a password (and of course it does!), you’ll need to add that to the configuration, too. The following assumes you’re using WPA2-PSK authentication:

sudo nmcli connection modify bond-wifi wifi-sec.key-mgmt wpa-psk
sudo nmcli connection edit bond-wifi

The second command will bring you into the interactive editor where you can enter your password without it being logged in your shell history. Enter the following, replacing password with your actual password:

set wifi-sec.psk password
save
quit

Now you’re ready to start your bonded interface and the secondary interfaces you created:

sudo nmcli connection up bond0
sudo nmcli connection up bond-ethernet
sudo nmcli connection up bond-wifi

You should now be able to disconnect your wired or wireless connections without losing your network connections.

A caveat: using other WiFi networks

This configuration works well when moving around on the specified WiFi network, but when away from this network, the SSID used in the bond is not available. Theoretically, one could add an interface to the bond for every WiFi connection used, but that doesn’t seem reasonable. Instead, you can disable the bonded interface:

sudo nmcli connection down bond0

When back on the defined WiFi network, simply start the bonded interface as above.

Fine-tuning your bond

By default, the bonded interface uses the “load balancing (round-robin)” mode. This spreads the load equally across the interfaces. But if you have a wired and a wireless connection, you may want to prefer the wired connection. The “active-backup” mode enables this. You can specify the mode and primary interface when you are creating the interface, or afterward using this command (the bonded interface should be down):

sudo nmcli connection modify bond0 +bond.options "mode=active-backup,primary=enp12s0u1"

The kernel documentation has much more information about bonding options.

NEURON in NeuroFedora needs testing

Posted by The NeuroFedora Blog on July 17, 2019 07:52 AM

We have been working on including the NEURON simulator in NeuroFedora for a while now. The build process that NEURON uses has certain peculiarities that make it a little harder to build.

For those that are interested in the technical details, while the main NEURON core is built using the standard ./configure; make; make install process that cleanly differentiates the "build" and "install" phases, the Python bits are built as a "post-install hook". That is to say, they are built after the other bits in the "install" step instead of the "build" step. This implies that the build is not quite straightforward and must be slightly tweaked to ensure that the Fedora packaging guidelines are met.

After discussing things on this Github issue, the developers, @nrnhines (Michael Hines) and @ramcdougal (Robert A McDougal) helped me understand the complexities of the build process and get it done. They have also mentioned that NEURON is now moving to a CMake based build system and should be simpler to work with in the future. CMake is generally nicer for projects that include different languages and build systems.

After a few hours of work, NEURON is now ready to use in NeuroFedora. It is built with Python 3, and does not currently provide IV and MPI bits. These will be worked upon later. Since MUSIC is not yet in NeuroFedora, NEURON does not support MUSIC either currently. This is also a work in progress.

I have tested the NEURON build on my machine with a few example simulations and it works well, but this cannot be considered exhaustive testing of the package. If you have a Fedora system, please test NEURON and let us know if you notice any issues. Here's how.

Step 1: Set up a Fedora installation

NeuroFedora is based on Fedora, so the simplest way to use it is to install a Fedora Workstation using the live images available on https://getfedora.org. You can either install it on a system or use a virtual machine if you wish. NeuroFedora includes lots of other software for neuroscience also. You can learn more in our documentation. Fedora, in general, provides lots of other software too. You can search them using the Fedora Packages web application.

Step 2: Install NEURON

I would recommend updating your system before proceeding using dnf in a terminal:

sudo dnf update --refresh

Then, you can install NEURON. It is currently in the testing repositories, so they will need to be enabled for the command:

sudo dnf --enablerepo=updates-testing install neuron python3-neuron

Step 3: Test it out

Test it out with your models. Hopefully, everything will work fine. The NEURON documentation is here for those that would like to tinker with it too: https://www.neuron.yale.edu/neuron/docs
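As a quick smoke test (not a substitute for running your own models), something like the following should run without errors and print the section:

python3 -c "from neuron import h; s = h.Section(name='soma'); print(s)"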

Step 4: Give feedback

Bodhi logo

Bodhi, the Fedora Quality Assurance web application.

This step is optional, especially if everything works fine. If you experience any issues, please do get in touch with us. You can either contact us directly using one of our communication channels, or you can give karma to the update on Bodhi. The latter is preferred.

Bodhi is the system Fedora uses for pushing updates to users. In a nutshell:

  • a new version of software is released
  • the Fedora maintainer updates the Fedora package.
  • the maintainer submits the new Fedora package to Bodhi
  • the package remains in the updates-testing repositories while users test it out and provide feedback.
  • if the update receives positive feedback (positive karma), it is automatically pushed to the updates repository for all users to receive the new version.
  • if the update receives negative feedback, the new version is not sent out to users and the maintainer must fix the reported issues and submit a new version of the package for testing again.

This workflow applies to all Fedora packages, thus ensuring that there's plenty of time for issues to be flagged before the software reaches users. So, if you have a few minutes to spare, please help us by testing these packages out. The updates for Fedora 29 and Fedora 30 are both here: https://bodhi.fedoraproject.org/updates/?packages=neuron

Please note that this requires a Fedora account, since that's the account system that links all Fedora community infrastructure. This Fedora Magazine post provides an excellent resource on setting up a Fedora Account: https://fedoramagazine.org/getting-set-up-with-fedora-project-services/

Detailed information on testing updates in Fedora can be found here on the Quality Assurance (QA) team's documentation: https://fedoraproject.org/wiki/QA:Updates_Testing


NeuroFedora is a volunteer-driven initiative and contributions in any form are always welcome. You can get in touch with us here. We are happy to help you learn the skills needed to contribute to the project. In fact, that is one of the major goals of the initiative---to spread technical knowledge that is necessary to develop software for Neuroscience.

Google, Money and Censorship in Free Software communities

Posted by Daniel Pocock on July 16, 2019 10:05 PM

On 30 June 2019, I sent the email below to the debian-project mailing list.

It never appeared.

Alexander Wirt (formorer) has tried to justify censoring the mailing list in various ways. Wirt has multiple roles, as both Debian mailing list admin and also one of Debian's GSoC administrators and mentors. Google money pays for interns to do work for him. It appears he has a massive conflict of interest when using the former role to censor posts about Google, which relates to the latter role and its benefits.

Wirt has also made public threats to censor other discussions, for example, the DebConf Israel debate. In that case he has wrongly accused people of antisemitism, leaving people afraid to speak up again. The challenges of holding a successful event in that particular region require a far more mature approach, not a monoculture.

Why are these donations and conflicts of interest hidden from the free software community who rely on, interact with and contribute to Debian in so many ways? Why doesn't Debian provide a level playing field, why does money from Google get this veil of secrecy?

Is it just coincidence that a number of Google employees who spoke up about harassment are forced to resign and simultaneously, Debian Developers who spoke up about abusive leadership are obstructed from competing in elections? Are these symptoms of corporate influence?

Is it coincidence that the three free software communities censoring my recent blog about human rights from their Planet sites (FSFE, Debian and Mozilla, evidence of censorship) are also the communities where Google money is a disproportionate part of the budget?

Could the reason for secrecy about certain types of donation be motivated by the knowledge that unpleasant parts of the donor's culture also come along for the ride?

The email the cabal didn't want you to see

Subject: Re: Realizing Good Ideas with Debian Money
Date: Sun, 30 Jun 2019 23:24:06 +0200
From: Daniel Pocock <daniel@pocock.pro>
To: debian-project@lists.debian.org, debian-devel@lists.debian.org



On 29/05/2019 13:49, Sam Hartman wrote:
> 
> [moving a discussion from -devel to -project where it belongs]
> 
>>>>>> "Mo" == Mo Zhou <lumin@debian.org> writes:
> 
>     Mo> Hi,
>     Mo> On 2019-05-29 08:38, Raphael Hertzog wrote:
>     >> Use the $300,000 on our bank accounts?
> 
> So, there were two $300k donations in the last year.
> One of these was earmarked for a DSA equipment upgrade.


When you write that it was earmarked for a DSA equipment upgrade, do you
mean that was a condition imposed by the donor or it was the intention
of those on the Debian side of the transaction?  I don't see an issue
either way but the comment is ambiguous as it stands.

Debian announced[1] a $300k donation from Handshake foundation.

I couldn't find any public disclosure about other large donations and
the source of the other $300k.

In Bits from the DPL (December 2018), former Debian Project Leader (DPL)
Chris Lamb opaquely refers[2] to a discussion with Cat Allman about a
"significant donation".  Although there is a link to Google later in
Lamb's email, Lamb fails to disclose the following facts:

- Cat Allman is a Google employee (some people would already know that,
others wouldn't)

- the size of the donation

- any conditions attached to the donation

- private emails from Chris Lamb indicated he felt some pressure,
influence or threat from Google shortly before accepting their money

The Debian Social Contract[3] states that Debian does not hide our
problems.  Corporate influence is one of the most serious problems most
people can imagine, why has nothing been disclosed?

Therefore, please tell us,

1. who did the other $300k come from?
2. if it was not Google, then what is the significant donation from Cat
Allman / Google referred[2] to in Bits from the DPL (December 2018)?
3. if it was from Google, why was that hidden?
4. please disclose all conditions, pressure and influence relating to
any of these donations and any other payments received

Regards,

Daniel


1. https://www.debian.org/News/2019/20190329
2. https://lists.debian.org/debian-devel-announce/2018/12/msg00006.html
3. https://www.debian.org/social_contract

Censorship on the Google Summer of Code Mentor's mailing list

Google also operates a mailing list for mentors in Google Summer of Code. It looks a lot like any other free software community mailing list except for one thing: censorship.

Look through the "Received" headers of messages on the mailing list and you can find examples of messages that were delayed for some hours waiting for approval. It is not clear how many messages were silently censored, never appearing at all.

Recent attempts to discuss the issue on Google's own mailing list produced an unsurprising result: more censorship.

However, a number of people have since contacted me personally about their negative experiences with Google Summer of Code. I'm publishing below the message that Google didn't want you to see.

Subject: [GSoC Mentors] discussions about GSoC interns/students medical status
Date: Sat, 6 Jul 2019 10:56:31 +0200
From: Daniel Pocock <daniel@pocock.pro>
To: Google Summer of Code Mentors List <google-summer-of-code-mentors-list@googlegroups.com>


Hi all,

Just a few months ago, I wrote a blog lamenting the way some mentors
have disclosed details of their interns' medical situations on mailing
lists like this one.  I asked[1] the question: "Regardless of what
support the student received, would Google allow their own employees'
medical histories to be debated by 1,000 random strangers like this?"

Yet it has happened again.  If only my blog hadn't been censored.

If our interns have trusted us with this sensitive information,
especially when it concerns something that may lead to discrimination or
embarrassment, like mental health, then it highlights the enormous trust
and respect they have for us.

Many of us are great at what we do as engineers, in many cases we are
the experts on our subject area in the free software community.  But we
are not doctors.

If an intern goes to work at Google's nearby office in Zurich, then they
are automatically protected by income protection insurance (UVG, KTG and
BVG, available from all major Swiss insurers).  If the intern sends a
doctor's note to the line manager, the manager doesn't have to spend one
second contemplating its legitimacy.  They certainly don't put details
on a public email list.  They simply forward it to HR and the insurance
company steps in to cover the intern's salary.

The cost?  Approximately 1.5% of the payroll.

Listening to what is said in these discussions, many mentors are
obviously uncomfortable with the fact that "failing" an intern means
they will not even be paid for hours worked prior to a genuine accident
or illness.  For 1.5% of the program budget, why doesn't Google simply
take that burden off the mentors and give the interns peace of mind?

On numerous occasions Stephanie Taylor has tried to gloss over this
injustice with her rhetoric about how we have to punish people to make
them try harder next year.  Many of our interns are from developing
countries where they already suffer injustice and discrimination.  You
would have to be pretty heartless to leave these people without pay.
Could that be why Googlespeak clings to words like "fail" and "student"
instead of "not pay" and "employee"?

Many students from disadvantaged backgrounds, including women, have told
me they don't apply at all because of the uncertainty about doing work
that might never be paid.  This is an even bigger tragedy than the time
mentors lose on these situations.

Regards,

Daniel


1.
https://danielpocock.com/google-influence-free-open-source-software-community-threats-sanctions-bullying/

--
Former Debian GSoC administrator
https://danielpocock.com

Fedora job opening: Fedora Community Action and Impact Coordinator (FCAIC)

Posted by Brian "bex" Exelbierd on July 16, 2019 05:35 PM

I’ve decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC).  This was not an easy decision to make. I am proud of the work I have done in Fedora over the last three years and I think I have helped the community move past many challenges.  I could NEVER have done all of this without the support and assistance of the community!

As some of you know, I have been covering for some other roles in Red Hat for almost the last year.  Some of these tasks have led to some opportunities to take my career in a different direction. I am going to remain at Red Hat and on the same team with the same manager, but with a slightly expanded scope of duties.  I will no longer be day-to-day on Fedora and will instead be in a consultative role as a Community Architect at Large. This is a fancy way of saying that I will be tackling helping lots of projects with various issues while also working on some specific strategic objectives.

Read more over at the Fedora Magazine where this was originally posted.

Application service categories and community handoff

Posted by Fedora Community Blog on July 16, 2019 06:34 AM

The Community Platform Engineering (CPE) team recently wrote about our face-to-face meeting where we developed a team mission statement and a framework for making our workload more manageable. Having more focus will allow us to progress higher-priority work for multiple stakeholders and move the needle on more initiatives more efficiently than we are working right now.

During the F2F we walked through the process of how to gracefully remove ourselves from applications that do not fit our mission statement. The next couple of months will be a transition phase, as we want to ensure continuity and cause minimum disruption to the community. To assist in that strategy, we analysed our applications and came up with four classifications to which they could belong.

Application service categories

1. We maintain it, we run it

This refers to apps that fit our mission and that we need to both actively maintain and host. CPE will be responsible for all development and lifecycle work on those apps, but we do welcome contributors. This is business as usual for CPE and it has a predictable cost associated with it from a planning and maintenance perspective.

2. We don’t maintain it, we run it

This represents apps that are in our infrastructure but whose overall maintenance we are not responsible for. We provide power and ping at a server level and will attempt to restart apps that have encountered an issue. We are happy to host them, but their maintenance, which includes development of new features and bug fixes, is no longer in our day-to-day remit. This represents light work for us, as the actual application ownership resides outside of CPE, with our responsibility exclusively on lifecycle management of the app.

3. We don’t maintain it, we don’t run it

This represents an application that we need to move into a mode whereby somebody other than CPE needs to own it. This represents some work on CPE’s side to ensure continuity of service and to ensure that an owner is found for it. Our Community OpenShift instance will be offered to host services here. Apps that fall into this category have mostly been in maintenance mode on the CPE side, but they keep “head space”. So we want for them to live and evolve exclusively outside of CPE on a hosting environment that we can provide as a service. Here, we will provide the means to host an application and will fully support the Community PaaS but any app maintenance or lifecycle events will be in the hands of the people running the app, not the CPE team.

These are apps for which we are a main contributor and which drain time and effort. In turn, this is causing us difficulty in planning wider initiatives because of the unpredictable nature of the requests. Category 3 apps are where our ongoing work in this area is more historical than strategic.

Winding down apps

Category 3 apps ultimately do not fit within CPE’s mission statement and our intention here is to have a maintenance wind-down period. That period will be decided on an app-by-app basis, with a typical wind-down period being of the order of 1-6 months. For exceptional circumstances we may extend this out to a maximum of 12 months. That time frame will be decided in consultation with the next maintainer and the community at large to allow for continuity to occur. For apps that find themselves here, we are providing a community service in the form of Community OpenShift (“Communishift”) that could become a home for those apps. However, the CPE team won’t maintain a Service Level Expectation (SLE) for availability or fixes. Our SLE is a best effort to keep services and hardware on the air during working hours while being creative with staff schedules and time zones. We have this documented here for further information. Ideally they would have owners outside the team to whom requests could be referred, but they would not be a CPE responsibility.

We are working on formalising the process of winding down an application by creating a Standard Operating Procedure (SOP). At a high level, that should include a project plan derived from consultation with the community. That may see work on the CPE team to get the application to a state of maintenance. That work could be on documentation, training, development of critical fixes / features or help with porting it to another location. Ultimately, the time spent on that kind of work is a fraction of the longer term maintenance cost. Our intention is to run all of the apps through the Fedora Council first, in case the Council prefers any specific alternatives to a particular service or app. 

4. We turn it off

This represents applications that are no longer used or have been superseded by another application. This may also represent applications that were not picked up by the other members of the community. Turning off does not equate to a hard removal of the app and if an owner can be found or a case made as to why CPE should own it, we can revisit it.

Initial app analysis

To help us identify that path, at our F2F we evaluated a first round of apps.

Category 1

For completeness we are highlighting one example of a Category 1 application that we will always aim to maintain and keep on the air. Bodhi is one such example, as it is one of the core services used to build and deliver Fedora and was developed specifically around the needs of the Fedora project. This makes it one of a kind: there are no applications out there that could be leveraged to replace it, and any attempt to replace it with something else would have repercussions across the entire build process and likely the entire community.

Category 2

Wiki x 2 (This may be a Category 3 after further analysis) — CPE maintains two wiki instances, one for Fedora and one for CentOS. Both of them are used by the communities in ways that make them currently impossible to remove. In Fedora's case the wiki is also used by QA (Quality Assurance), making it an integral part of the Fedora release process and thus not something that can be handed to the community to maintain.

Category 3

Overall, the trend for these tools will be to move them to a steady state of no more fixes/enhancements. The community will be welcome to maintain them, and/or to locate a replacement service that satisfies their requirements. Replacements can be considered by the Council for funding as appropriate.

Mailman/HyperKitty/Postorius — Maintaining this stack has cost the equivalent of an entire developer's time long-term. However, we recognize the imperative that projects have mailing lists for discussion and collaboration. No further features will be added here, and based on community needs an outside mailing list service can be contracted.

Elections — This application has been in maintenance mode for some time now. We recently invested some time in it to port it to python3, use a newer authentication protocol (OpenID Connect) and move it to OpenShift, while integrating Ben Cotton's work to add support for badges to elections. We believe Elections is in a technical state that is compatible with a low-maintenance model for a community member who would like to take it over. As a matter of fact, we have already found said community member in the person of Ben Cotton (thank you Ben!).

Fedocal — This application has been in maintenance mode for some time. It has been ported to python3 (but hasn't had a release with python3 support yet). There is work in progress to port it to OpenID Connect and have it run in OpenShift. It still needs to be ported to fedora-messaging.

Nuancier — This application has been in maintenance mode as well. It has been ported to python3, but it still needs to be ported to OpenID Connect and fedora-messaging, and moved to OpenShift.

Badges — This application has been in maintenance mode for a while now. The work to port it to python3 has been started, but it still needs to be ported to OpenID Connect and fedora-messaging, and moved to OpenShift. We invested some time recently to identify the highest pain points of the application, log them in the issue tracker as user stories, and start prioritizing them. We cannot, however, commit to fixing them.

For Fedocal and Nuancier, we are thinking of holding virtual hackfests on Fridays for as long as there is work to do on them, and advertising this to the community to try to spark interest in these applications, in the hope that we find someone interested enough (and, after these hackfests, knowledgeable enough) to take over their maintenance.

Category 4

Pastebin — fpaste.org is a well-known and well-used service in Fedora; however, it has been a pain point for the CPE team for a few years. The pastebin software options that exist are most often not maintained, and finding and replacing one is full-time work for a few weeks. Finally, this type of service also comes with high legal costs, as we are often asked to remove content from it despite the limited time this content remains available. CentOS is also running a pastebin service, but it has the same long-term costs and a similar conversation will need to occur there.

Apps.fp.o — This is the landing page available at https://apps.fedoraproject.org/. Its content has not been kept up to date and it needs an overall redesign. We may be open to handing it over to a community member, but we do not believe the gain is worth the time investment in finding that person.

Ipsilon — Ipsilon is our identity provider. It supports multiple authentication protocols (OpenID 2.0, OpenID Connect, SAML 2.0, …) and multiple backends (FAS, LDAP/FreeIPA, htpasswd, system accounts…). While it was originally shipped as a tech preview in RHEL, it no longer is, and the team working on this application has been refocused on other projects. We would like to move all our applications to use OpenID Connect or SAML 2.0 (instead of OpenID 2.0 with (custom) extensions) and replace FAS with an IPA-based solution, which in turn allows us to replace Ipsilon with a more actively maintained solution, likely Red Hat Single Sign-On. These dependencies make this a long-term effort. We will need to announce to the community that this means we will shut down the public OpenID 2.0 endpoints, which means that any community services using this protocol need to be moved to OpenID Connect as well.

Over the coming weeks we will set up our process to begin the formal wind-down window for the items listed above that are in Category 3 or 4, and we will share that process and plan with the Fedora Council.

The post Application service categories and community handoff appeared first on Fedora Community Blog.

Episode 154 - Chat with the authors of the book "The Fifth Domain"

Posted by Open Source Security Podcast on July 16, 2019 12:10 AM
Josh and Kurt talk to the authors of a new book The Fifth Domain. Dick Clarke and Rob Knake join us to discuss the book, cybersecurity, US policy, how we got where we are today and what the future holds for cybersecurity.



Show Notes


    Results of the Fedora elections 06/19

    Posted by Charles-Antoine Couret on July 15, 2019 06:13 PM

    As I reported a little while ago, Fedora organised elections to partially renew the membership of its FESCo, Mindshare and Council bodies.

    As always, the ballot is a range vote. We can give each candidate a certain number of points, the maximum value being the number of candidates and the minimum 0. This makes it possible to show approval of one candidate and disapproval of another without ambiguity, and nothing prevents voting for two candidates with the same value.

    The results for the Council are (with a single candidate running):

      # votes |  name
     176          Till Maas (till)
    

    For reference, the maximum possible score was 184 * 1 votes, i.e. 184.

    The results for FESCo are (only the first four are elected):

      # votes |  name
    695     Stephen Gallagher (sgallagh)
    687         Igor Gnatenko (ignatenkobrain)
    615     Aleksandra Fedorova (bookwar)
    569     Petr Šabata (psabata)
    --------------------------------------------
    525      Jeremy Cline
    444     Fabio Valentini (decathorpe)
    

    For reference, the maximum possible score was 205 * 6, i.e. 1230.

    The results for Mindshare are (only the first is elected):

      # votes |  name
    221     Sumantro Mukherjee (sumantrom)
    --------------------------------------------
    172     Luis Bazan (lbazan)
    

    For reference, the maximum possible score was 178 * 2, i.e. 356.

    We can note that, overall, the number of voters for each ballot was similar, around 175-200 voters, which is a bit fewer than the previous time (200-250 on average). The scores are also rather spread out.

    Congratulations to the participants and to those elected, and all the best to the Fedora Project.

    Duplicity 0.8.01

    Posted by Gwyn Ciesla on July 15, 2019 03:10 PM

    Duplicity 0.8.01 is now in rawhide. The big change here is that it now uses Python 3. I've tested it in my own environment, both on its own and with deja-dup, and both work.

    Please test and file bugs. I expect there will be more, but with Python 2 reaching EOL soon, it’s important to move everything we can to Python 3.

    Thanks!

    -Gwyn/limb

    CPU atomics and orderings explained

    Posted by William Brown on July 15, 2019 02:00 PM

    CPU atomics and orderings explained

    Sometimes the question comes up about how CPU memory orderings work, and what they do. I hope this post explains it in a really accessible way.

    Short Version - I wanna code!

    Summary - The memory model you commonly see is from C++ and it defines:

    • Relaxed
    • Acquire
    • Release
    • Acquire/Release (sometimes AcqRel)
    • SeqCst

    These are memory orderings - every operation is “atomic”, so it will work correctly, but these rules define how the memory and code around the atomic are influenced.

    If in doubt - use SeqCst - it’s the strongest guarantee and prevents all re-ordering of operations and will do the right thing.

    The summary is:

    • Relaxed - no ordering guarantees, just execute the atomic as is.
    • Acquire - all code after this atomic, will be executed after the atomic.
    • Release - all code before this atomic, will be executed before the atomic.
    • Acquire/Release - both Acquire and Release - ie code stays before and after.
    • SeqCst - a stronger form of Acquire/Release consistency (see the sketch after this list).
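
    To make the summary concrete, here is a minimal illustrative sketch (my own example using Rust's std::sync::atomic, not code from the original post) of the classic publish pattern: a Release store pairs with an Acquire load, so the data written before the flag is guaranteed to be visible once the flag is observed.

    use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let ready = Arc::new(AtomicBool::new(false));
        let data = Arc::new(AtomicUsize::new(0));

        let (ready_w, data_w) = (ready.clone(), data.clone());
        let writer = thread::spawn(move || {
            data_w.store(42, Ordering::Relaxed);     // prepare the data ...
            ready_w.store(true, Ordering::Release);  // ... then publish it
        });

        let (ready_r, data_r) = (ready.clone(), data.clone());
        let reader = thread::spawn(move || {
            // Acquire pairs with the Release above: once we observe `true`,
            // the store of 42 is guaranteed to be visible as well.
            while !ready_r.load(Ordering::Acquire) {}
            assert_eq!(data_r.load(Ordering::Relaxed), 42);
        });

        writer.join().unwrap();
        reader.join().unwrap();
    }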

    Long Version … let’s begin …

    So why do we have memory and operation orderings at all? Let’s look at some code to explain:

    let mut x = 0;
    let mut y = 0;
    x = x + 3;
    y = y + 7;
    x = x + 4;
    x = y + x;
    

    Really trivial example - now to us as a human, we read this and see a set of operations that are linear by time. That means, they execute from top to bottom, in order.

    However, this is not how computers work. First, compilers will optimise your code, and optimisation means re-ordering of the operations to achieve better results. A compiler may optimise this to:

    let mut x = 0;
    let mut y = 0;
    // Note removal of the x + 3 and x + 4, folded to a single operation.
    x = x + 7;
    y = y + 7;
    x = y + x;
    

    Now there is a second element. Your CPU presents the illusion of running as a linear system, but it’s actually an asynchronous, out-of-order task execution engine. That means a CPU will reorder your instructions, and may even run them concurrently and asynchronously.

    For example, your CPU will have both x + 7 and y + 7 in the pipeline, even though neither operation has completed - they are effectively running at the “same time” (concurrently).

    When you write a single-threaded program, you generally won't notice this behaviour. This is because a lot of smart people write compilers and CPUs to give the illusion of linear ordering, even though both of them are operating very differently.

    Now we want to write a multithreaded application. Suddenly this is the challenge:

    We write a concurrent program, in a linear language, executed on a concurrent asynchronous machine.

    This means there is a challenge in the translation between our mind (thinking about the concurrent problem), the program (which we have to express as a linear set of operations), which then runs on our CPU (an async concurrent device).

    Phew. How do computers even work in this scenario?!

    Why are CPUs async?

    CPUs have to be async to be fast - remember Spectre and Meltdown? These are attacks based on measuring the side effects of a CPU's asynchronous behaviour. While computers are “fast” these attacks will always be possible, because making a CPU synchronous is slow - and asynchronous behaviour will always have measurable side effects. Every modern CPU's performance is an illusion of async black magic.

    A large portion of the async behaviour comes from the interaction of the CPU, cache, and memory.

    In order to provide the “illusion” of a coherent synchronous memory interface, there is no separation of your program's cache and memory. When the CPU wants to access “memory”, the CPU cache is utilised transparently and will handle the request, and only on a cache miss will we retrieve the values from RAM.

    (Aside: in almost all cases more CPU cache, not higher frequency, will make your system perform better, because a cache miss will mean your task stalls waiting on RAM. Ohh no!)

    CPU -> Cache -> RAM
    

    When you have multiple CPUs, each CPU has its own L1 cache:

    CPU1 -> L1 Cache -> |              |
    CPU2 -> L1 Cache -> | Shared L2/L3 | -> RAM
    CPU3 -> L1 Cache -> |              |
    CPU4 -> L1 Cache -> |              |
    

    Ahhh! Suddenly we can see where problems can occur - each CPU has an L1 cache, which is transparent to memory but unique to the CPU. This means that each CPU can make a change to the same piece of memory in their L1 cache without the other CPU knowing. To help explain, let’s show a demo.

    CPU just trash my variables fam

    We’ll assume we now have two threads - my code is in rust again, and there is a good reason for the unsafes - this code really is unsafe!

    // assume global x: usize = 0; y: usize = 0;
    
    THREAD 1                        THREAD 2
    
    if unsafe { *x == 1 } {          unsafe {
        unsafe { *y += 1 }              *y = 10;
    }                                   *x = 1;
                                    }
    

    At the end of execution, what state will X and Y be in? The answer is “it depends”:

    • What order did the threads run?
    • The state of the L1 cache of each CPU
    • The possible interleavings of the operations.
    • Compiler re-ordering

    In the end the result of x will always be 1 - because x is only mutated in one thread, the caches will “eventually” (explained soon) become consistent.

    The real question is y. y could be:

    • 10
    • 11
    • 1

    10 - This can occur because in thread 2, x = 1 is re-ordered above y = 10, causing the thread 1 “y += 1” to execute, followed by thread 2 assign 10 directly to y. It can also occur because the check for x == 1 occurs first, so y += 1 is skipped, then thread 2 is run, causing y = 10. Two ways to achieve the same result!

    11 - This occurs in the “normal” execution path - all things considered it’s a miracle :)

    1 - This is the most complex one - The y = 10 in thread 2 is applied, but the result is never sent to THREAD 1’s cache, so x = 1 occurs and is made available to THREAD 1 (yes, this is possible to have different values made available to each cpu …). Then thread 1 executes y (0) += 1, which is then sent back trampling the value of y = 10 from thread 2.

    If you want to know more about this and many other horrors of CPU execution, Paul McKenney is an expert in this field and has many talks at LCA and elsewhere on the topic. He can be found on Twitter and is super helpful if you have questions.

    So how does a CPU work at all?

    Obviously your system (likely a multicore system) works today - so it must be possible to write correct concurrent software. Caches are kept in sync via a protocol called MESI. This is a state machine describing the states of memory and cache, and how they can be synchronised. The states are:

    • Modified
    • Exclusive
    • Shared
    • Invalid

    What's interesting about MESI is that each cache line maintains its own state machine for the memory addresses - it's not a global state machine. To coordinate, CPUs asynchronously message each other.

    A CPU can be messaged via IPC (Inter-Processor-Communication) to say that another CPU wants to “claim” exclusive ownership of a memory address, or to indicate that it has changed the content of a memory address and you should discard your version. It's important to understand that these messages are asynchronous. When a CPU modifies an address it does not immediately send the invalidation message to all other CPUs - and when a CPU receives the invalidation request it does not immediately act upon that message.

    If CPUs did “synchronously” act on all these messages, they would be spending so much time handling IPC traffic, they would never get anything done!

    As a result, it must be possible to indicate to a CPU that it’s time to send or acknowledge these invalidations in the cache line. This is where barriers, or the memory orderings come in.

    • Relaxed - No messages are sent or acknowledged (see the small counter example after this list).
    • Release - flush all pending invalidations to be sent to other CPUs
    • Acquire - Acknowledge and process all invalidation requests in my queue
    • Acquire/Release - flush all outgoing invalidations, and process my incoming queue
    • SeqCst - as AcqRel, but with some other guarantees around ordering that are beyond this discussion.
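
    As a small illustrative sketch (again my own example in Rust, not from the post), Relaxed is typically enough for a plain event counter where nothing else depends on the counter's value: the increments themselves are atomic, and we do not need to order any surrounding memory.

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let hits = Arc::new(AtomicUsize::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let hits = hits.clone();
                thread::spawn(move || {
                    for _ in 0..1000 {
                        // Atomic increment with no ordering constraints on surrounding code.
                        hits.fetch_add(1, Ordering::Relaxed);
                    }
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(hits.load(Ordering::Relaxed), 4000);
    }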

    Understand a Mutex

    With this knowledge in place, we are finally in a position to understand the operations of a Mutex.

    // Assume mutex: Mutex<usize> = Mutex::new(0);
    
    THREAD 1                            THREAD 2
    
    {                                   {
        let guard = mutex.lock()            let guard = mutex.lock()
        *guard += 1;                        println!(*guard)
    }                                   }
    

    We know very clearly that this will print 1 or 0 - it’s safe, no weird behaviours. Let’s explain this case though:

    THREAD 1
    
    {
        let guard = mutex.lock()
        // Acquire here!
        // All invalidation handled, guard is 0.
        // Compiler is told "all following code must stay after .lock()".
        *guard += 1;
        // content of usize is changed, invalidation req is queued
    }
    // Release here!
    // Guard goes out of scope, invalidation reqs sent to all CPU's
    // Compiler told all preceding code must stay above this point.
    
                THREAD 2
    
                {
                    let guard = mutex.lock()
                    // Acquire here!
                    // All invalidations handled - previous cache of usize discarded
                    // and read from THREAD 1 cache into S state.
                    // Compiler is told "all following code must stay after .lock()".
                    println(*guard);
                }
                // Release here!
                // Guard goes out of scope, no invalidations sent due to
                // no modifications.
                // Compiler told all preceding code must stay above this point.
    

    And there we have it! How barriers allow us to define an ordering in code and a CPU, to ensure our caches and compiler outputs are correct and consistent.

    Benefits of Rust

    A nice benefit of Rust is that, knowing these MESI states, we can see that the best way to run a system is to minimise the number of invalidations being sent and acknowledged, as these always cause a delay in CPU time. Rust variables are always mutable or immutable. These map almost directly to the E and S states of MESI. A mutable value is always exclusive to a single cache line, with no contention - and immutable values can be placed into the Shared state, allowing each CPU to maintain a cache copy for higher performance.

    This is one of the reasons for Rust's amazing concurrency story: the memory in your program maps to cache states very clearly.

    It's also why it's unsafe to mutate a pointer between two threads (a global) - because the caches of the two CPUs won't be coherent; you may not cause a crash, but one thread's work will absolutely be lost!

    Finally, it’s important to see that this is why using the correct concurrency primitives matter - it can highly influence your cache behaviour in your program and how that affects cache line contention and performance.

    For comments and more, please feel free to email me!

    Shameless Plug

    I’m the author and maintainer of Conc Read - a concurrently readable datastructure library for Rust. Check it out on crates.io!

    ASG! 2019 CfP Re-Opened!

    Posted by Lennart Poettering on July 14, 2019 10:00 PM

    The All Systems Go! 2019 Call for Participation Re-Opened for ONE DAY!

    Due to popular request we have re-opened the Call for Participation (CFP) for All Systems Go! 2019 for one day. It will close again TODAY, on 15 July 2019, at midnight Central European Summer Time! If you have missed the deadline so far, we'd like to invite you to submit your proposals for consideration to the CFP submission site quickly! (And yes, this is the last extension; there's not going to be any more extensions.)

    ASG image

    All Systems Go! is everybody's favourite low-level userspace Linux conference, taking place in Berlin, Germany on September 20-22, 2019.

    For more information please visit our conference website!

    HP, Linux and ACPI

    Posted by Luya Tshimbalanga on July 14, 2019 05:35 PM
    The majority of HP hardware running Linux, and even Microsoft, has reported an issue related to non-standard-compliant ACPI. The notable message below repeats at least three times during boot:


    4.876549] ACPI BIOS Error (bug): AE_AML_BUFFER_LIMIT, Field [D128] at bit offset/length 128/1024 exceeds size of target Buffer (160 bits) (20190215/dsopcode-198) 
    [ 4.876555] ACPI Error: Aborting method \HWMC due to previous error (AE_AML_BUFFER_LIMIT) (20190215/psparse-529) 
    [ 4.876562] ACPI Error: Aborting method \_SB.WMID.WMAA due to previous error (AE_AML_BUFFER_LIMIT) (20190215/psparse-529)


    The bug has been known for years, and the Linux kernel team is unable to fix it without help from the vendor, i.e. HP. Here is a compilation of reports:
    The good news is that some of these errors seem harmless. Unfortunately, such errors expose the quirks-based approach used by vendors to support the Microsoft Windows system, which is bad practice. In one case, such an approach led to an issue even on the officially supported operating system on HP hardware.

    The ideal would be for HP to provide a BIOS fix for their affected hardware and to officially support the Linux ecosystem, much like their printing department does. The Linux Vendor Firmware Service would be a good start, and so far Dell is the leader in that department. American Megatrends Inc, the company developing the BIOS/UEFI for HP, has made the process easier, so it is a matter of fully enabling the support.

    bzip2 1.0.8

    Posted by Mark J. Wielaard on July 13, 2019 07:38 PM

    We are happy to announce the release of bzip2 1.0.8.

    This is a fixup release because the CVE-2019-12900 fix in bzip2 1.0.7 was too strict and might have prevented decompression of some files that earlier bzip2 versions could decompress. And it contains a few more patches from various distros and forks.

    bzip2 1.0.8 contains the following fixes:

    • Accept as many selectors as the file format allows. This relaxes the fix for CVE-2019-12900 from 1.0.7 so that bzip2 allows decompression of bz2 files that use (too) many selectors again.
    • Fix handling of large (> 4GB) files on Windows.
    • Cleanup of bzdiff and bzgrep scripts so they don’t use any bash extensions and handle multiple archives correctly.
    • There is now a bz2-files testsuite at https://sourceware.org/git/bzip2-tests.git

    Patches by Joshua Watt, Mark Wielaard, Phil Ross, Vincent Lefevre, Led and Kristýna Streitová.

    This release also finalizes the move of bzip2 to a community maintained project at https://sourceware.org/bzip2/

    Thanks to Bhargava Shastry bzip2 is now also part of oss-fuzz to catch fuzzing issues early and (hopefully not) often.

    All systems go

    Posted by Fedora Infrastructure Status on July 12, 2019 10:20 PM
    Service 'Pagure' now has status: good: Everything seems to be working.

    There are scheduled downtimes in progress

    Posted by Fedora Infrastructure Status on July 12, 2019 08:59 PM
    Service 'Pagure' now has status: scheduled: scheduled outage: https://pagure.io/fedora-infrastructure/issue/7980

    FPgM report: 2019-28

    Posted by Fedora Community Blog on July 12, 2019 08:56 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week. I am on PTO the week of 15 July, so there will be no FPgM report or FPgM office hours next week.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

    Announcements

    Upcoming meetings

    Fedora 31

    Schedule

    • 2019-07-23 — Self-contained changes due
    • 2019-07-24–08-13 — Mass rebuild
    • 2019-08-13 — Code complete (testable) deadline
    • 2019-08-13 — Fedora 31 branch point

    Changes

    Announced

    Submitted to FESCo

    Approved by FESCo

    The post FPgM report: 2019-28 appeared first on Fedora Community Blog.

    Settings, in a sandbox world

    Posted by Matthias Clasen on July 12, 2019 06:19 PM

    GNOME applications (and others) are commonly using the GSettings API for storing their application settings.

    GSettings has many nice aspects:

    • flexible data types, with GVariant
    • schemas, so others can understand your settings (e.g. dconf-editor)
    • overrides, so distros can tweak defaults they don’t like

    And it has different backends, so it can be adapted to work transparently in many situations. One example for where this comes in handy is when we use a memory backend to avoid persisting any settings while running tests.

    The GSettings backend that is typically used for normal operation is the DConf one.

    DConf

    DConf features include profiles,  a stack of databases, a facility for locking down keys so they are not writable, and a single-writer design with a central service.

    The DConf design is flexible and enterprisey – we have taken advantage of this when we created fleet commander to centrally manage application and desktop settings for large deployments.

    But it is not a great fit for sandboxing, where we want to isolate applications from each other and from the host system.  In DConf, all settings are stored in a single database, and apps are free to read and write any keys, not just their own – plenty of potential for mischief and accidents.

    Most of the apps that are available as flatpaks today are poking a ‘DConf hole’ into their sandbox to allow the GSettings code to keep talking to the dconf daemon on the session bus, and mmap the dconf database.

    Here is how the DConf hole looks in the flatpak metadata file:

    [Context]
    filesystems=xdg-run/dconf;~/.config/dconf:ro;
    
    [Session Bus Policy]
    ca.desrt.dconf=talk

    Sandboxes

    Ideally, we want sandboxed apps to only have access to their own settings, and maybe readonly access to a limited set of shared settings (for things like the current font, or accessibility settings). It would also be nice if uninstalling a sandboxed app did not leave traces behind, like leftover settings  in some central database.

    It might be possible to retrofit some of this into DConf. But when we looked, it did not seem easy, and would require reconsidering some of the central aspects of the DConf design. Instead of going down that road, we decided to take advantage of another GSettings backend that already exists, and stores settings in a keyfile.

    Unsurprisingly, it is called the keyfile backend.

    Keyfiles

    The keyfile backend was originally created to facilitate the migration from GConf to GSettings, and has been a bit neglected, but we’ve given it some love and attention, and it can now function as the default GSettings backend inside sandboxes.

    It provides many of the isolation aspects we want: Apps can only read and write their own settings, and the settings are in a single file, in the same place as all the application data:

    ~/.var/app/$APP/config/glib-2.0/settings/keyfile
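
    To give a rough idea of what that looks like, here is a hypothetical sketch of the keyfile's contents (the key names are invented for illustration): as far as I understand the keyfile backend, it is an ini-style file where each settings path becomes a group and values are stored in GVariant text format.

    [org/gnome/builder/editor]
    show-line-numbers=true
    font-name='Monospace 11'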

    One of the things we added to the keyfile backend is support for locks and overrides, so that fleet commander can keep working for apps that are in flatpaks.

    For shared desktop-wide settings, there is a companion Settings portal, which provides readonly access to some global settings. It is used transparently by GTK and Qt for toolkit-level settings.

    What does all this mean for flatpak apps?

    If your application is not yet available as a flatpak, and you want to provide one, you don’t have to do anything in particular. Things will just work. Don’t poke a hole in your sandbox for DConf, and GSettings will use the keyfile backend without any extra work on your part.

    If your flatpak is currently shipping with a DConf hole, you can keep doing that for now. When you are ready for it, you should

    • Remove the DConf hole from your flatpak metadata
    • Instruct flatpak to migrate existing DConf settings, by adding a migrate-path setting to the X-DConf section in your flatpak metadata. The value of the migrate-path key is the DConf path prefix where your application's settings are stored.

    Note that this is a one-time migration; it will only happen if the keyfile does not exist. The existing settings will be left in the DConf database, so if you need to do the migration again for whatever reason, you can simply remove the keyfile.

    This is how the migrate-path key looks in the metadata file:

    [X-DConf]
    migrate-path=/org/gnome/builder/

    Closing the DConf hole is what makes GSettings use the keyfile backend, and the migrate-path key tells flatpak to migrate settings from DConf – you need both parts for a seamless transition.

    There were some recent fixes to the keyfile backend code, so you want to make sure that the runtime has GLib 2.60.6, for best results.

    Happy flatpaking!

    Update: One of the most recent fixes in the keyfile backend was to correct under what circumstances GSettings will choose it as the default backend. If you have problems where the wrong backend is chosen, as a short-term workaround, you can override the choice with the GSETTINGS_BACKEND environment variable.
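
    For example (an illustrative command of my own, where your-application stands in for whatever binary you are testing), the override can be applied for a single launch:

    GSETTINGS_BACKEND=keyfile your-application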

    Update 2: To add the migrate-path setting with flatpak-builder, use the following option:

    --metadata=X-DConf=migrate-path=/your/path/


    GNOME Software in Fedora will no longer support snapd

    Posted by Richard Hughes on July 12, 2019 12:51 PM

    In my slightly infamous email to fedora-devel I stated that I would turn off the snapd support in the gnome-software package for Fedora 31. A lot of people agreed with the technical reasons, but failed to understand the bigger picture and asked me to explain myself.

    I wanted to tell a little, fictional, story:

    In 2012 the ISO institute started working on a cross-vendor petrol reference vehicle to reduce the amount of R&D different companies had to do to build and sell a modern, and safe, saloon car.

    Almost immediately, Mercedes joins ISO, and starts selling the ISO car. Fiat joins in 2013, Peugeot in 2014 and General Motors finally joins in 2015 and adds support for Diesel engines. BMW, who had been trying to maintain the previous chassis they designed on their own (sold as “BMW Kar Koncept”), finally adopts the ISO car also in 2015. BMW versions of the ISO car use BMW-specific transmission oil as it doesn’t trust oil from the ISO consortium.

    Mercedes looks to the future, and adds high-voltage battery support to the ISO reference car also in 2015, adding the required additional wiring and regenerative braking support. All the other members of the consortium can use their own high voltage batteries, or use the reference battery. The battery can be charged with electricity from any provider.

    In 2016 BMW stops marketing the “ISO Car” like all the other vendors, and starts calling it “BMW Car” instead. At about the same time BMW adds support for hydrogen engines to the reference vehicle. All the other vendors can ship the ISO car with a hydrogen engine, but all the hydrogen must be purchased from a BMW-certified dealer. If any vendor other than BMW uses the hydrogen engines, they can't use the BMW-specific heat shield which protects the fuel tank from exploding in the event of a collision.

    In 2017 Mercedes adds traction control and power steering to the ISO reference car. It is enabled almost immediately and used by nearly all the vendors with no royalties and many customer lives are saved.

    In 2018 BMW decides that actually producing vendor-specific oil for its cars is quite a lot of extra work, and tells all customers that existing transmission oil has to be thrown away, but that now all customers can get free oil from the ISO consortium. The ISO consortium distributes a lot more oil, but also has to deal with a lot more customer queries about transmission failures.

    In 2019 BMW builds a special cut-down ISO car, but physically removes all the petrol and electric functionality from the frame. It is rebranded as “Kar by BMW”. It then sends a private note to the chair of the ISO consortium that it’s not going to be using ISO car in 2020, and that it’s designing a completely new “Kar” that only supports hydrogen engines and does not have traction control or seatbelts. The explanation given was that BMW wanted a vehicle that was tailored specifically for hydrogen engines. Any BMW customers using petrol or electricity in their car must switch to hydrogen by 2020.

    The BMW engineers that used to work on ISO Car have been shifted to work on Kar, although have committed to also work on Car if it’s not too much extra work. BMW still want to be officially part of the consortium and to be able to sell the ISO Car as an extra vehicle to the customer that provides all the engine types (as some customers don’t like hydrogen engines), but doesn’t want to be seen to support anything other than a hydrogen-based future. It’s also unclear whether the extra vehicle sold to customers would be the “ISO Car” or the “BMW Car”.

    One ISO consortium member asks whether they should remove hydrogen engine support from the ISO car as they feel BMW is not playing fair. Another consortium member thinks that the extra functionality could just be disabled by default and any unused functionality should certainly be removed. All members of the consortium feel like BMW has pushed them too far. Mercedes stop selling the hydrogen ISO Car model stating it’s not safe without the heat shield, and because BMW isn’t going to be supporting the ISO Car in 2020.

    What is Silverblue?

    Posted by Fedora Magazine on July 12, 2019 08:00 AM

    Fedora Silverblue is becoming more and more popular inside and outside the Fedora world. So based on feedback from the community, here are answers to some interesting questions about the project. If you do have any other Silverblue related questions, please leave it in the comments section and we will try to answer them in a future article.

    What is Silverblue?

    Silverblue is a codename for the new generation of the desktop operating system, previously known as Atomic Workstation. The operating system is delivered in images that are created by utilizing the rpm-ostree project. The main benefits of the system are speed, security, atomic updates and immutability.

    What does “Silverblue” actually mean?

    “Team Silverblue” or “Silverblue” in short doesn’t have any hidden meaning. It was chosen after roughly two months when the project, previously known as Atomic Workstation was rebranded. There were over 150 words or word combinations reviewed in the process. In the end Silverblue was chosen because it had an available domain as well as the social network accounts. One could think of it as a new take on Fedora’s blue branding, and could be used in phrases like “Go, Team Silverblue!” or “Want to join the team and improve Silverblue?”.

    What is ostree?

    OSTree or libostree is a project that combines a “git-like” model for committing and downloading bootable filesystem trees, together with a layer to deploy them and manage the bootloader configuration. OSTree is used by rpm-ostree, a hybrid package/image based system that Silverblue uses. It atomically replicates a base OS and allows the user to “layer” the traditional RPM on top of the base OS if needed.

    Why use Silverblue?

    Because it allows you to concentrate on your work and not on the operating system you’re running. It’s more robust as the updates of the system are atomic. The only thing you need to do is to restart into the new image. Also, if there’s anything wrong with the currently booted image, you can easily reboot/rollback to the previous working one, if available. If it isn’t, you can download and boot any other image that was generated in the past, using the ostree command.
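
    As a rough idea of what this looks like in practice (an illustrative sketch, not commands quoted from the article), updates and rollbacks are driven by rpm-ostree:

    $ rpm-ostree status      # list the deployments (images) you can boot into
    $ rpm-ostree upgrade     # download and stage the next image; takes effect on reboot
    $ rpm-ostree rollback    # make the previous deployment the default again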

    Another advantage is the possibility of an easy switch between branches (or, in an old context, Fedora releases). You can easily try the Rawhide or updates-testing branch and then return back to the one that contains the current stable release. Also, you should consider Silverblue if you want to try something new and unusual.

    What are the benefits of an immutable OS?

    Having the root filesystem mounted read-only by default increases resilience against accidental damage as well as some types of malicious attack. The primary tool to upgrade or change the root filesystem is rpm-ostree.

    Another benefit is robustness. It's nearly impossible for a regular user to get the OS into a state where it doesn't boot or doesn't work properly after accidentally or unintentionally removing some system library. Try to think about these kinds of experiences from your past, and imagine how Silverblue could help you there.

    How does one manage applications and packages in Silverblue?

    For graphical user interface applications, Flatpak is recommended, if the application is available as a flatpak. Users can choose between Flatpaks from Fedora, which are built from Fedora packages in Fedora-owned infrastructure, and Flatpaks from Flathub, which currently has a wider offering. Users can install them easily through GNOME Software, which already supports Fedora Silverblue.

    One of the first things users find out is that there is no dnf preinstalled in the OS. The main reason is that it wouldn't work on Silverblue — part of its functionality was replaced by the rpm-ostree command. Users can overlay traditional packages by using rpm-ostree install PACKAGE, but this should only be used when there is no other way. This is because when new system images are pulled from the repository, the system image must be rebuilt every time it is altered to accommodate the layered packages, or packages that were removed from the base OS or replaced with a different version. The sketch below shows both approaches.
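
    For example (an illustrative sketch, assuming the Flathub remote is configured and using GIMP's application ID purely as an example):

    $ flatpak install flathub org.gimp.GIMP    # preferred: a sandboxed Flatpak app
    $ rpm-ostree install PACKAGE               # fallback: layer a traditional RPM onto the base image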

    Fedora Silverblue comes with the default set of GUI applications that are part of the base OS. The team is working on porting them to Flatpaks so they can be distributed that way. As a benefit, the base OS will become smaller and easier to maintain and test, and users can modify their default installation more easily. If you want to look at how it’s done or help, take a look at the official documentation.

    What is Toolbox?

    Toolbox is a project to make containers easily consumable for regular users. It does that by using podman’s rootless containers. Toolbox lets you easily and quickly create a container with a regular Fedora installation that you can play with or develop on, separated from your OS.
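
    A minimal sketch of the workflow (illustrative commands, not taken from the article):

    $ toolbox create    # build a rootless Fedora container for development
    $ toolbox enter     # open a shell inside it; your home directory is shared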

    Is there any Silverblue roadmap?

    Formally there isn’t any, as we’re focusing on problems we discover during our testing and from community feedback. We’re currently using Fedora’s Taiga to do our planning.

    What’s the release life cycle of the Silverblue?

    It’s the same as regular Fedora Workstation. A new release comes every 6 months and is supported for 13 months. The team plans to release updates for the OS bi-weekly (or longer) instead of daily as they currently do. That way the updates can be more thoroughly tested by QA and community volunteers before they are sent to the rest of the users.

    What is the future of the immutable OS?

    From our point of view the future of the desktop involves the immutable OS. It's safest for the user, and Android, ChromeOS, and the latest macOS (Catalina) all use this method under the hood. For the Linux desktop there are still problems with some third-party software that expects to write to the OS. HP printer drivers are a good example.

    Another issue is how parts of the system are distributed and installed. Fonts are a good example. Currently in Fedora they’re distributed in RPM packages. If you want to use them, you have to overlay them and then restart to the newly created image that contains them.

    What is the future of standard Workstation?

    There is a possibility that the Silverblue will replace the regular Workstation. But there’s still a long way to go for Silverblue to provide the same functionality and user experience as the Workstation. In the meantime both desktop offerings will be delivered at the same time.

    How does Atomic Workstation or Fedora CoreOS relate to any of this?

    Atomic Workstation was the name of the project before it was renamed to Fedora Silverblue.

    Fedora CoreOS is a different, but similar project. It shares some fundamental technologies with Silverblue, such as rpm-ostree, toolbox and others. Nevertheless, CoreOS is a more minimal, container-focused and automatically updating OS.

    repos

    Posted by Porfirio A. Páiz - porfiriopaiz on July 12, 2019 05:32 AM

    Software Repositories

    Once we solved the problem of getting connected to the Internet and how to launch a terminal, you might want to install all the software you use.

    The software comes from somewhere; on Fedora these sources are called software repositories. Next I detail the ones I enable on all my Fedora installs, apart from the official ones that come preinstalled and enabled by default.

    Open a terminal and enable some of these.

    RPMFusion

    RPM Fusion is a repository of add-on packages for Fedora and EL+EPEL maintained by a group of volunteers. RPM Fusion is not a standalone repository, but an extension of Fedora. RPM Fusion distributes packages that have been deemed unacceptable to Fedora.

    More about RPMFusion on its official website: https://rpmfusion.org/FAQ

    su -c 'dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'
    

    Fedora Workstation Repositories

    From the Fedora wiki page corresponding to Fedora Workstation Repositories:

    The Fedora community strongly promotes free and open source resources. The Fedora Workstation, in its out of the box configuration, therefore, only includes free and open source software. To make the Fedora Workstation more usable, we've made it possible to easily install a curated set of third party (external) sources that supply software not included in Fedora via an additional package.

    Read more at: https://fedoraproject.org/wiki/Workstation/Third_Party_Software_Repositories

    Please note that this will only install the *.repo files; it will not enable the provided repos:

    su -c 'dnf install fedora-workstation-repositories'
    

    Fedora Rawhide's Repositories

    Rawhide is the name given to the current development version of Fedora. It consists of a package repository called "rawhide" and contains the latest build of all Fedora packages updated on a daily basis. Each day, an attempt is made to create a full set of 'deliverables' (installation images and so on), and all that compose successfully are included in the Rawhide tree for that day.

    It is possible to install its repository files and temporarily enable the repo for just a single transaction, let us say to simply install or upgrade a single package and its dependencies, perhaps to try a new version that is not currently available in any of the stable and maintained versions of Fedora.

    This is useful when a bug has been fixed in Rawhide but has not yet landed in the stable branch of Fedora, and the need for it cannot wait.

    Again, this will just install the *.repo file under /etc/yum.repos.d/; it will not enable it. Later we will see how to handle, disable and enable these repositories for just one transaction; a quick preview follows below.
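
    As a quick preview (an illustrative command of my own, not from the original post), once the repo file is installed with the command further below, dnf can enable it for a single transaction with --enablerepo, for example to pull one package from Rawhide:

    su -c 'dnf upgrade --enablerepo=rawhide PACKAGE'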

    More on Rawhide on its wiki page: https://fedoraproject.org/wiki/Releases/Rawhide

    su -c 'dnf install fedora-repos-rawhide'
    

    COPR

    Copr is an easy-to-use automatic build system providing a package repository as its output.

    Here are some of the repos I rely on for some packages:

    neteler/remarkable

    Remarkable is a free fully featured markdown editor.

    su -c 'dnf -y copr enable neteler/remarkable'
    

    philfry/gajim

    Gajim is a Jabber client written in PyGTK; it currently provides support for the OMEMO encryption method, which I use. This repo provides tools and dependencies not available in the official Fedora repo.

    su -c 'dnf -y copr enable philfry/gajim'
    

    dani/qgis

    QGIS is a user friendly Open Source Geographic Information System.

    su -c 'dnf -y copr enable dani/qgis'
    

    dotnet-sig/dotnet

    This provides the .NET CLI tools and runtime for Fedora.

    su -c 'dnf copr enable dotnet-sig/dotnet'
    

    VSCodium

    A few weeks ago I decided to give VSCodium, a fork of VSCode, a try. Here is how to enable its repo for Fedora.

    First, import its GPG key so you can verify the packages retrieved from the repo:

    su -c 'rpm --import https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg'
    

    Now create the vscodium.repo file:

    su -c "tee -a /etc/yum.repos.d/vscodium.repo << 'EOF'
    [gitlab.com_paulcarroty_vscodium_repo]
    name=gitlab.com_paulcarroty_vscodium_repo
    baseurl=https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/repos/rpms/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg
    EOF
    "
    

    Verification

    Now check that all the repos have been successfully installed, and that some of them are enabled, by refreshing the dnf metadata:

    su -c 'dnf check-update'
    

    That's all. In the next post we will see how to enable some of these repos, how to temporarily disable and enable others for just a single transaction, how to install or upgrade certain packages from a specific repo, and many other repo administration tasks.

    Firefox 68 available now in Fedora

    Posted by Fedora Magazine on July 12, 2019 01:33 AM

    Earlier this week, Mozilla released version 68 of the Firefox web browser. Firefox is the default web browser in Fedora, and this update is now available in the official Fedora repositories.

    This Firefox release provides a range of bug fixes and enhancements, including:

    • Better handling when using dark GTK themes (like Adwaita Dark). Previously, running a dark theme may have caused issues where user interface elements on a rendered webpage (like forms) are rendered in the dark theme, on a white background. Firefox 68 resolves these issues. Refer to these two Mozilla bugzilla tickets for more information.
    • The about:addons special page has two new features to keep you safer when installing extensions and themes in Firefox. First is the ability to report security and stability issues with addons directly in the about:addons page. Additionally, about:addons now has a list of secure and stable extensions and themes that have been vetted by the Recommended Extensions program.

    Updating Firefox in Fedora

    Firefox 68 has already been pushed to the stable Fedora repositories. The security fix will be applied to your system with your next update. You can also update the firefox package only by running the following command:

    $ sudo dnf update --refresh firefox

    This command requires you to have sudo set up on your system. Additionally, note that not every Fedora mirror syncs at the same rate. Community sites graciously donate the space and bandwidth these mirrors use to carry Fedora content. You may need to try again later if your selected mirror is still awaiting the latest update.

    GCI 2018 mentor’s summit @ Google headquarters

    Posted by Fedora Community Blog on July 11, 2019 12:54 PM

    Context

    Google Code-in is a contest to introduce students (ages 13-17) to open source software development. Since 2010, 8,108 students from 107 countries have completed over 40,100 open source tasks. Because Google Code-in is often the first experience many students have with open source, the contest is designed to make it easy for students to jump right in. I was one of the mentors in this first-time-for-Fedora program. We had 125 students participating in Fedora, and the top 3 students completed 26, 25 and 22 tasks each.

    Every year Google invites the grand prize winners, their parents, and a mentor to its headquarters in San Francisco, California for a 4-day trip. I was offered the opportunity to go and represent Fedora at the summit and meet these 2 brilliant folks in person. This report covers the activities and other things that happened there.

    Mentors

    From coming up with a variety of tasks at different levels to verifying tasks on time, it was an experience for all of us. An active and helpful group of mentors helped the students whenever required. There were cases when the assigned mentor got busy or was unavailable; we were ready to step in and take care of the task. Thanks to proper communication, we were always able to review tasks on time.

    We were 8 mentors (handling different kinds of tasks)

    Winners

    Margii and EchoDuck

    Margii (left) – EchoDuck (Right) in Google Android park

    Both our winners were amazing and did the tasks with great quality.  I am still surprised to see their work at such a young age.
    Margii wants to contribute to the Fedora Website and other Infra projects, and he is great with Python/Flask. EchoDuck is also interested in contributing to Fedora Infra and he is looking to get his hands dirty with Rust (coding wherever required, and packaging). It's an action item on me to help them connect with the right people so that they can start their contribution journey. I am also hoping to see them as GCI mentors or GSoC students in the future.

    Day 1 (Reception)

    We met the other mentors/students/parents in the hotel lobby at 5 pm and then left for the Google San Francisco office. We were welcomed with a lot of snacks and swag.
    The best part was all the students receiving a Pixel 3 XL. The event was followed by a dinner, and we were dropped off back at the hotel.

    Day 2 (Full Day at Google headquarters in Mountain View.)

    This day was special since we were going to explore the Googleplex and talk to Google engineers at the Google Cloud office. Mentors and students were also offered $75 and $150 respectively to buy our choice of Google merchandise. Students met with Google employees from their home country and went off to have lunch with them. We had a line-up of talks by great folks from Google.

    • Recruiting – Lauren Alpizar
    • Android OS: Ally Sillins
    • Cloud: Ryan Matsumoto
    • Chrome OS – Grant Grundler
    • Google Assistant – Daniel Myers
    • Google TensorFlow – Paige Bailey

    We had dinner at the same office and then headed back to the hotel.

    Day 3 (Fun Day in San Francisco)

    Probably the day most of us will remember. We had the option of either a Segway tour or a cable car tour. I selected the cable car and went around the whole city (Twin Peaks, Lands End, San Francisco city, Golden Gate Park, and the Golden Gate Bridge, which I also walked across).

    There was a yacht waiting for us at one of the piers. We embarked and cruised around Alcatraz Island and the Golden Gate Bridge. We had dinner on the yacht, and since this was a fun day, obviously all of us were tired.

    Day 4 (Closing reception in Google SF office)

    On the last day, we had to go to the office a bit early. We had breakfast in the office itself, followed by the award ceremony for the grand prize winners. We were given 4 minutes per org to share something if we wished, and a lot of students shared their experience with GCI. We had lunch after that, and meanwhile a video crew was interviewing people who had signed up for it. We left the office by 3pm after taking a lot of pictures in front of the San Francisco-Oakland Bay Bridge.

    People who made this possible

    Thanks to every student who participated. Every one of you was amazing, and I hope to see you all again.
    Thanks to Justin for being there when needed, facilitating the Telegram group and IRC bridge, and keeping the conversation alive. A very special thanks to Bex for being the backbone of this and all the other summer coding programs. And of course, to all the mentors: thank you for giving your time, and I hope to be a part of this along with all of you in the coming years.

    The post GCI 2018 mentor’s summit @ Google headquarters appeared first on Fedora Community Blog.

    NeuroFedora poster at CNS*2019

    Posted by The NeuroFedora Blog on July 11, 2019 12:25 PM

    With CNS*2019 around the corner, we worked on getting the NeuroFedora poster ready for the poster presentation session. Our poster is P96, in the first poster session on the 14th of July. The poster is also linked below:

    The poster (PDF): https://neurofedora.github.io/extra/2019-CNS-NeuroFedora.pdf

    The poster is made available under a CC-By license. Please feel free to share it around.

    The current team already consists of more people than the authors listed on the poster. The authors here were only the first set, and as the team grows, so will our author list for future publications. In general, we follow the standard rule: if one has contributed to the project since the previous publication, they get their name on the poster.

    Unfortunately, this time, no one from the team is able to attend the conference, but if you are there and want to learn more about NeuroFedora, please get in touch with us using any of our communication channels.

    To everyone that will be in Barcelona for the conference, we hope you have a fruitful one, and of course, we hope you are able to make some time to rest at the beach too.


    NeuroFedora is a volunteer-driven initiative and contributions in any form are always welcome. You can get in touch with us here. We are happy to help you learn the skills needed to contribute to the project. In fact, that is one of the major goals of the initiative: to spread the technical knowledge that is necessary to develop software for neuroscience.

    Fedora job opening: Fedora Community Action and Impact Coordinator (FCAIC)

    Posted by Fedora Magazine on July 10, 2019 02:50 PM

    I’ve decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC).  This was not an easy decision to make. I am proud of the work I have done in Fedora over the last three years and I think I have helped the community move past many challenges.  I could NEVER have done all of this without the support and assistance of the community!

    As some of you know, I have been covering for some other roles in Red Hat for almost the last year.  Some of these tasks have led to some opportunities to take my career in a different direction. I am going to remain at Red Hat and on the same team with the same manager, but with a slightly expanded scope of duties.  I will no longer be day-to-day on Fedora and will instead be in a consultative role as a Community Architect at Large. This is a fancy way of saying that I will be tackling helping lots of projects with various issues while also working on some specific strategic objectives.

    I think this is a great opportunity for the Fedora community.  The Fedora I became FCAIC in three years ago is a very different place from the Fedora of today.  While I could easily continue to help shape and grow this community, I think that I can do more by letting some new ideas come in.  The new person will hopefully be able to approach challenges differently. I’ll also be here to offer my advice and feedback as others who have moved on in the past have done.  Additionally, I will work with Matthew Miller and Red Hat to help hire and onboard the new Fedora Community and Impact Coordinator. During this time I will continue as FCAIC.

This means that we are looking for a new FCAIC. Love Fedora? Want to work with Fedora full-time to help support and grow the Fedora community? This is the core of what the FCAIC does. The job description (also below) lists some of the primary job responsibilities and required skills, but that's just a sample of the duties required and of the day-to-day life of working full-time with the Fedora community.

Day-to-day work includes working with Mindshare, managing the Fedora budget, and being part of many other teams, including the Fedora Council.  You should be ready to write frequently about Fedora's achievements, policies, and decisions, and to draft and generate ideas and strategies. And, of course, there is planning Flock and Fedora's presence at other events. It's hard work, but also a great deal of fun.

    Are you good at setting long-term priorities and hacking away at problems with the big picture in mind? Do you enjoy working with people all around the world, with a variety of skills and interests, to build not just a successful Linux distribution, but a healthy project? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Is Fedora’s mission deeply important to you?

    If you said “yes” to those questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit apply online, or contact Matthew Miller, Brian Exelbierd, or Stormy Peters.



    Fedora Community Action and Impact Coordinator

    Location: CZ-Remote – prefer Europe but can be North America

    Company Description

    At Red Hat, we connect an innovative community of customers, partners, and contributors to deliver an open source stack of trusted, high-performing solutions. We offer cloud, Linux, middleware, storage, and virtualization technologies, together with award-winning global customer support, consulting, and implementation services. Red Hat is a rapidly growing company supporting more than 90% of Fortune 500 companies.

    Job summary

    Red Hat’s Open Source Programs Office (OSPO) team is looking for the next Fedora Community Action and Impact Lead. In this role, you will join the Fedora Council and guide initiatives to grow the Fedora user and developer communities, as well as make Red Hat and Fedora interactions even more transparent and positive. The Council is responsible for stewardship of the Fedora Project as a whole, and supports the health and growth of the Fedora community.

As the Fedora Community Action and Impact Lead, you'll facilitate decision making on how to best focus the Fedora community budget to meet our collective objectives, work with other council members to identify the short, medium, and long-term goals of the Fedora community, and organize and enable the project.

    You will also help make decisions about trademark use, project structure, community disputes or complaints, and other issues. You’ll hold a full council membership, not an auxiliary or advisory role.

    Primary job responsibilities

    • Identify opportunities to engage new contributors and community members; align project around supporting those opportunities.
    • Improve on-boarding materials and processes for new contributors.
    • Participate in user and developer discussions and identify barriers to success for contributors and users.
    • Use metrics to evaluate the success of open source initiatives.
    • Regularly report on community metrics and developments, both internally and externally.  
    • Represent Red Hat’s stake in the Fedora community’s success.
    • Work with internal stakeholders to understand their goals and develop strategies for working effectively with the community.
    • Improve onboarding materials and presentation of Fedora to new hires; develop standardized materials on Fedora that can be used globally at Red Hat.
    • Work with the Fedora Council to determine the annual Fedora budget.
    • Assist in planning and organizing Fedora’s flagship events each year.
• Create and carry out community promotion strategies; create media content like blog posts, podcasts, and videos, and facilitate the creation of media by other members of the community.

    Required skills

• Extensive experience with the Fedora Project or a comparable open source community.
• Exceptional writing and speaking skills.
• Experience with software development and open source developer communities; understanding of development processes.
• Outstanding organizational skills; ability to prioritize tasks matching short and long-term goals and focus on the tasks of high priority.
• Ability to manage a project budget.
• Ability to lead teams and participate in multiple cross-organizational teams that span the globe.
• Experience motivating volunteers and staff across departments and companies.

Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

    Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.


    Photo by Deva Williamson on Unsplash.

    Cockpit 198

    Posted by Cockpit Project on July 10, 2019 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 198.

    PatternFly4 user interface design

    Cockpit has been restyled to match the PatternFly 4 User Interface design, including the Red Hat Text and Display fonts.

    This style refresh aligns Cockpit with other web user interfaces that use PatternFly, such as OpenShift 4.

    Over time, Cockpit will be ported to actually use PatternFly 4 widgets, but this restyle allows us to change Cockpit gradually.

    login page

    system page

    SELinux: Show changes

    The SELinux page now has a new section “System Modifications” which shows all policy settings that were made to the system (with Cockpit or otherwise):

    SELinux modifications

    The “View Automation Script” link will show a dialog with a shell script that can be used to apply the same changes to other machines:

    SELinux automation script

    Machines: Deletion of Virtual Networks

    The Virtual Networks section of the Machines page now supports deleting networks.

    delete virtual network

    Machines: Support more disk types

    When creating a new VM, the disk can now be on a storage pool type other than a plain file. The newly supported types are iSCSI, LVM, and physical volumes.

    VM creation with iSCSI pool

    Docker: Change menu label

    The menu label changed from “Containers” to “Docker Containers”. This avoids confusion with the “Podman Containers” page from cockpit-podman, and points out that this page really is about Docker only, not any other container technology.

    Web server: More flexible https redirection for proxies

    cockpit-ws now supports redirecting unencrypted http to https (TLS) even when running in --no-tls mode. Use this when running cockpit-ws behind a reverse http proxy that also supports https, but does not handle the redirection from http to https for itself. This is enabled with the new --proxy-tls-redirect option.

    Try it out

    Cockpit 198 is available now:

    Bug bounties and NDAs are an option, not the standard

    Posted by Matthew Garrett on July 09, 2019 09:15 PM
    Zoom had a vulnerability that allowed users on MacOS to be connected to a video conference with their webcam active simply by visiting an appropriately crafted page. Zoom's response has largely been to argue that:

    a) There's a setting you can toggle to disable the webcam being on by default, so this isn't a big deal,
    b) When Safari added a security feature requiring that users explicitly agree to launch Zoom, this created a poor user experience and so they were justified in working around this (and so introducing the vulnerability), and,
    c) The submitter asked whether Zoom would pay them for disclosing the bug, and when Zoom said they'd only do so if the submitter signed an NDA, they declined.

    (a) and (b) are clearly ludicrous arguments, but (c) is the interesting one. Zoom go on to mention that they disagreed with the severity of the issue, and in the end decided not to change how their software worked. If the submitter had agreed to the terms of the NDA, then Zoom's decision that this was a low severity issue would have led to them being given a small amount of money and never being allowed to talk about the vulnerability. Since Zoom apparently have no intention of fixing it, we'd presumably never have heard about it. Users would have been less informed, and the world would have been a less secure place.

    The point of bug bounties is to provide people with an additional incentive to disclose security issues to companies. But what incentive are they offering? Well, that depends on who you are. For many people, the amount of money offered by bug bounty programs is meaningful, and agreeing to sign an NDA is worth it. For others, the ability to publicly talk about the issue is worth more than whatever the bounty may award - being able to give a presentation on the vulnerability at a high profile conference may be enough to get you a significantly better paying job. Others may be unwilling to sign an NDA on principle, refusing to trust that the company will ever disclose the issue or fix the vulnerability. And finally there are people who can't sign such an NDA - they may have discovered the issue on work time, and employer policies may prohibit them doing so.

    Zoom are correct that it's not unusual for bug bounty programs to require NDAs. But when they talk about this being an industry standard, they come awfully close to suggesting that the submitter did something unusual or unreasonable in rejecting their bounty terms. When someone lets you know about a vulnerability, they're giving you an opportunity to have the issue fixed before the public knows about it. They've done something they didn't need to do - they could have just publicly disclosed it immediately, causing significant damage to your reputation and potentially putting your customers at risk. They could potentially have sold the information to a third party. But they didn't - they came to you first. If you want to offer them money in order to encourage them (and others) to do the same in future, then that's great. If you want to tie strings to that money, that's a choice you can make - but there's no reason for them to agree to those strings, and if they choose not to then you don't get to complain about that afterwards. And if they make it clear at the time of submission that they intend to publicly disclose the issue after 90 days, then they're acting in accordance with widely accepted norms. If you're not able to fix an issue within 90 days, that's very much your problem.

    If your bug bounty requires people sign an NDA, you should think about why. If it's so you can control disclosure and delay things beyond 90 days (and potentially never disclose at all), look at whether the amount of money you're offering for that is anywhere near commensurate with the value the submitter could otherwise gain from the information and compare that to the reputational damage you'll take from people deciding that it's not worth it and just disclosing unilaterally. And, seriously, never ask for an NDA before you're committing to a specific $ amount - it's never reasonable to ask that someone sign away their rights without knowing exactly what they're getting in return.

    tl;dr - a bug bounty should only be one component of your vulnerability reporting process. You need to be prepared for people to decline any restrictions you wish to place on them, and you need to be prepared for them to disclose on the date they initially proposed. If they give you 90 days, that's entirely within industry norms. Remember that a bargain is being struck here - you offering money isn't being generous, it's you attempting to provide an incentive for people to help you improve your security. If you're asking people to give up more than you're offering in return, don't be surprised if they say no.


    EPEL-8 Production Layout

    Posted by Stephen Smoogen on July 09, 2019 05:24 PM

    EPEL-8 Production Layout

TL;DR:

    1. EPEL-8 will have a multi-phase roll-out into production.
    2. EPEL-8.0 will build using existing grobisplitter in order to use a ‘flattened’ build system without modules.
    3. EPEL-8.1 will start in staging without grobisplitter and using default modules via mock.
    4. The staging work will allow for continual development changes in koji, ‘ursa-prime’, and MBS functionality to work without breaking Fedora 31 or initial EPEL-8.0 builds.
5. EPEL-8.1 will aim to be ready by November 2019, after Fedora 31, around the time that RHEL-8.1 may be released (if it follows a 6-month cadence).

    Multi-phase roll-out

As documented elsewhere, EPEL-8 has been rolling out slowly due to the many changes in RHEL and in the Fedora build system since EPEL-7 was initiated in 2014. Trying to roll out an EPEL-8 that was 'final', and thus the way it would always be, was too prone to failure, as we keep finding that we have to change plans to match reality.
We will be rolling out EPEL-8 in a multi-phase release cycle. Each cycle should allow for greater functionality for developers and consumers. On the flip side, we will find that we have to change expectations of what can and cannot be delivered inside of EPEL over that time.
    Phases:
1. 8.0 will be a 'minimal viability' release. Due to un-shipped development libraries and the lack of building replacement modules, not all packages will be able to build. Instead, only non-modular RPMs that rely solely on 'default' modules will work. Packages must also rely only on what is shipped in the RHEL-8.0 BaseOS/AppStream/CodeReadyBuilder channels, not on any 'unshipped -devel' packages.
2. 8.1 will add 'minimal modularity'. Instead of using a flattened build system, we will look at updating koji to have basic knowledge of modularity, use a tool to tag in packages from modules as needed, and possibly add in the Module Build System (MBS) in order to ship modules.
3. 8.2 will finish adding in the Module Build System and will enable gating and CI in the workflow so that packages can be tested faster.
Because the phases will change how EPEL is produced, there may need to be mass rebuilds between each one. There will also be changes in policies about what packages are allowed to be in EPEL and under what conditions.

    Problems with koji, modules and mock

If you want to build packages in mock, you can set up a lot of controls in /etc/mock/foo.cfg to turn modules on and off as needed, so that you can, for example, enable the javapackages-tools or virt-devel module and make packages like libssh2-devel or javapackages-local available. However, koji does not allow this control per channel, because it is meant to completely control what packages are brought into a buildroot. Every build records what packages were used to build an artifact, and koji creates a special mock config file to pull in those items. This allows for a high level of auditability and confirmation that the package stored is the package built, and of what was used to build it.
For building an operating system like Fedora or Red Hat Enterprise Linux (RHEL), this works great, because you can show how things were done 2-3 years later when trying to debug something else. However, when koji does not 'own' the life-cycle of packages, this becomes problematic. In building EPEL, the RHEL packages are given to the buildroot via external repositories. This means that koji does not fully know the life-cycle of the packages it 'pulls' into the buildroot. In its basic mode it will choose packages it has built or knows about first, then packages from the buildroot, and if there is a conflict between external packages it will try to choose the one with the highest epoch-version-release-timestamp so that only the newest version is in. (If the timestamp is the same, it tends to refuse to use either package.)
An improvement to this was adding code to mergerepo which allows dnf to make the choice of which packages to use between repositories. This lets mock's dnf pull in modules without the repositories having been mangled or 'flattened' as with grobisplitter. However, it is not a complete story. For dnf to know which modules to pull in, it needs a platform identifier set (for Fedora releases it is something like f30 and for RHEL it is el8). Koji doesn't know how to do this, so the solution would be to set it in the build system's /etc/mock/site-defaults.cfg, but that would affect all builds and would cause problems for building Fedora on the same build system.
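To make the mock side of this concrete, below is a rough sketch of the kind of per-chroot configuration described above. It is only illustrative, not the real Fedora or EPEL configuration: it assumes a mock version that supports the module_enable option, and the chroot name, module list, and repository URL are made up.

# Hypothetical /etc/mock/epel-8-x86_64.cfg fragment -- mock config files are plain Python.
config_opts['root'] = 'epel-8-x86_64'
config_opts['target_arch'] = 'x86_64'

# Enable non-default modules so that packages such as javapackages-local or
# libssh2-devel become available in the buildroot (assumes mock's module_enable option).
config_opts['module_enable'] = ['javapackages-tools', 'virt-devel']

# The dnf configuration used inside the buildroot; module_platform_id tells dnf
# which platform's module streams to resolve against (el8 here).
config_opts['yum.conf'] = """
[main]
keepcache=1
debuglevel=2
module_platform_id=platform:el8

[baseos]
name=RHEL 8 BaseOS (URL is illustrative)
baseurl=https://example.com/rhel8/BaseOS/x86_64/os/
enabled=1
"""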

    Grobisplitter

A second initiative to deal with building with modules was to try to take modules out of the equation completely. Since a module is a virtual repository embedded in a real one, you should be able to pull modules apart and make new repositories from them. Grobisplitter was designed to do this, to help get CentOS-8 ready and also to allow EPEL to bootstrap using a minimal buildset. While working on this, we found that we also needed parts of the '--bare' koji work, because certain Python packages have the same src.rpm name-version but different releases, which koji would otherwise kick out.
Currently grobisplitter does not put in any information about the module it 'spat' out. This will affect building when dnf starts seeing metadata in individual RPMs which says 'this is part of a module and needs to be installed as such'.

    Production plans

    We are trying to determine which tool will work better long term in order to make EPEL-8.0 and EPEL-8.1 work.

    EPEL-8.0

Start Date    End Date      Work Planned              Party Involved
2019-07-01    2019-07-05    Lessons Learned           Smoogen, Mohan
2019-07-01    2019-07-05    Documentation             Smoogen
2019-07-08    2019-07-12    Release Build work        Mohan, Fenzi
2019-07-08    2019-07-12    Call for packages         Smoogen
2019-07-15    2019-07-19    Initial branching         Mohan, Dawson
2019-07-22    2019-07-31    First branch/test         Dawson, et al
2019-08-01    2019-08-01    EPEL-8.0 GA               EPEL Steering Committee
2019-08-01    2019-08-08    Lessons Learned           Smoogen, et al
2019-08-01    2019-08-08    Revise documentation      Smoogen, et al
2019-09-01    2019-09-01    Bodhi gating turned on    Mohan

    EPEL-8.0 Production Breakout

1. Lessons Learned. Document the steps and lessons learned from the previous time frame. Because previous EPEL spin-ups have been done multiple years apart, what was known gets forgotten and has to be relearned. By capturing it, we hope that EPEL-9 does not take as long.
2. Documentation. Write documents on what was done to set up the environment and what is expected in the next section (how to branch to EPEL-8, how to build with EPEL-8, dealing with unshipped packages, an updated FAQ).
3. Call for Packages. This covers the steps that packagers need to follow to get packages branched to EPEL-8.
4. Release Build Work. This is setting up the builders and environment in production. Most of the steps should be repeats of what was done in staging, with additional work done in bodhi to get signing and composes working.
5. Initial Branching. This is where the first set of packages needs to be branched and built for EPEL-8: epel-release, epel-rpm-macros, fedpkg-minimal, fedpkg (and all the things needed for it).
6. First Branch. Going over the various tickets for EPEL-8 packages, a reasonable sample will be branched. Work will be done with the packagers on problems they find. This will continue as needed.
7. EPEL-8.0 GA. Branching can follow normal processes from here on.
8. Lessons Learned. Go over problems and feed them into other groups' backlogs.
9. Documentation. Update previous documents and add any that were found to be needed.

    EPEL-8.1

Start Date    End Date      Work Planned                          Party Involved
2019-07-01    2019-07-05    Lessons Learned                       Fenzi, Contyk, et al
2019-07       ???           Groom Koji changes needed             ???
2019-07       ???           Write/Test Koji changes needed        ???
2019-07       ???           Non-modular RPM in staging            ???
2019-07       ???           MBS in staging                        ???
2019-08       ???           Implement Koji changes                ???
2019-08       ???           Implement bodhi compose in staging    ???
2019-09       ???           Close off 8.1 beta                    ???
2019-09       ???           Lessons learned                       ???
2019-09       ???           Begin changes in prod                 ???
2019-10       ???           Open module builds in EPEL            ???
2019-11       ???           EPEL-8.1 GA                           EPEL Steering Committee
2019-11       ???           Lessons Learned                       ???
2019-11       ???           Revise documentation                  ???

    EPEL-8.1 Production Breakout

This follows the staging and production work of 8.0, with additional effort to make modules usable in builds. Most of these dates and layers need to be filled out in future meetings. The main work will be adding a program code-named 'Ursa-Prime' to help build non-modular RPMs using modules as dependencies. This will allow grobisplitter to be replaced with a program that has long-term maintenance behind it.

    I no longer recommend FreeIPA

    Posted by William Brown on July 09, 2019 02:00 PM

    I no longer recommend FreeIPA

    It’s probably taken me a few years to write this, but I can no longer recommend FreeIPA for IDM installations.

    Why not?

    The FreeIPA project focused on Kerberos and SSSD, with enough other parts glued on to look like a complete IDM project. Now that’s fine, but it means that concerns in other parts of the project are largely ignored. It creates design decisions that are not scalable or robust.

    Due to these decisions IPA has stability issues and scaling issues that other products do not.

    To be clear: security systems like IDM or LDAP can never go down. That’s not acceptable.

    What do you recommend instead?

    • Samba with AD
    • AzureAD
    • 389 Directory Server

All of these projects are very reliable, secure, and scalable. We have put a lot of work into 389 to improve its out-of-the-box IDM capabilities, but there is still more to be done. The Samba AD team have done great things as well, and deserve a lot of respect for what they have achieved.

    Is there more detail than this?

    Yes - buy me a drink and I’ll talk :)

    Didn’t you help?

    I tried and it was not taken on board.

    So what now?

Hopefully in the next year we'll see new open source IDM projects released that take totally different approaches to the legacy we currently ride upon.

    Red Hat, IBM, and Fedora

    Posted by Fedora Magazine on July 09, 2019 12:51 PM

    Today marks a new day in the 26-year history of Red Hat. IBM has finalized its acquisition of Red Hat, which will operate as a distinct unit within IBM.

    What does this mean for Red Hat’s participation in the Fedora Project?

    In short, nothing.

Red Hat will continue to be a champion for open source, just as it always has, and valued projects like Fedora will continue to play a role in driving innovation in open source technology. IBM is committed to Red Hat's independence and role in open source software communities. We will continue this work and, as always, we will continue to help upstream projects be successful and contribute to welcoming new members and maintaining the project.

    In Fedora, our mission, governance, and objectives remain the same. Red Hat associates will continue to contribute to the upstream in the same ways they have been.

    We will do this together, with the community, as we always have.

    If you have questions or would like to learn more about today’s news, I encourage you to review the materials below. For any questions not answered here, please feel free to contact us. Red Hat CTO Chris Wright will host an online Q&A session in the coming days where you can ask questions you may have about what the acquisition means for Red Hat and our involvement in open source communities. Details will be announced on the Red Hat blog.

    Regards,

    Matthew Miller, Fedora Project Leader
    Brian Exelbierd, Fedora Community Action and Impact Coordinator

    Call for Fedora Women’s Day 2019 proposals

    Posted by Fedora Community Blog on July 09, 2019 08:30 AM

    Fedora Women’s Day (FWD) is a day to celebrate and bring visibility to female contributors in open source projects, including Fedora. This event is headed by Fedora’s Diversity and Inclusion Team.

    During the month of September, in collaboration with other open source communities, women in tech groups and hacker spaces, we plan to organize community meetups and events around the world to highlight and celebrate the women in open source communities like Fedora and their invaluable contributions to their projects and community.

    These events also provide a good opportunity for women worldwide to learn about free and open source software and jump start their journey as a FOSS user and/or a contributor.  They also provide a platform for women to connect, learn and be inspired by other women in open source communities and beyond.

We are looking forward to applications for organizing FWD 2019. Go ahead and submit an application and help us organize this event in various locations around the world.

    Important dates:

    Deadline for submission – Friday, 23 August, 2019

    Acceptance deadline – Friday, 6 September, 2019

    Suggested dates for FWD: September-October 2019

Note: The Diversity and Inclusion team is flexible on dates for organizing FWD, so events can be organized on any dates throughout September and October. Proposals will be reviewed on a rolling basis, so don't wait until the deadline to submit a spectacular proposal.

    Who can be a part of FWD:

    • We welcome all organizers and attendees whose values align with our mission and goals for the event irrespective of their genders and backgrounds.
• The Diversity and Inclusion team is eagerly looking forward to having more organizers and participants from under-represented groups and areas.

    Why should you organize a FWD in your local community

• It lets you share your knowledge, as FOSS still remains an underutilized area.
• You can spread the awesomeness that FOSS has provided you with.
• You might find a fellow contributor to work with or engage locally.
• You will win lots of goodies and love from the Fedora community.
• You get a lot of freedom in arranging your very own event, and you can cultivate your leadership and creativity skills as you plan and organize a FWD in your local community.
• You can empower women in your local community through open source tools and build their skills to contribute to a global project.

    Why should you attend a FWD in your local community

• To get to know open source and build skills by working on projects related to it.
• It helps you connect with and gain inspiration from talented women.
• It helps in getting started with open source contributions.
• To get some surprise goodies.

    Steps to organize a FWD event:

Cannot find a FWD in your region? Organize one! It's simple with the steps below.

    Identify your goals:

• Find out the interests of your local community and conduct interactive sessions; it can be a workshop or a hackathon.
• Do they know what open source is? If not, you can use your event to create awareness about open source software, and giving more direction on Fedora-related topics would be nice.
• Are they interested in contributing to open source? Make your content more contribution-specific and make sure to follow up with participants after the event to help them get started or to make progress on what they have started. You can also organise some follow-up sessions if required.
• Are they interested in networking? We can help you identify local open source contributors from your region.
• Set some measurable goals for the event so you can gauge its success.
• Make sure your goals align with our motivations for FWD. Brainstorm and share ideas with us that you feel can make a real difference to the audience and would help them learn to contribute to FOSS or Fedora.

    Tell us about it:

• Please let us know on the diversity mailing list before the deadline if you are interested. We would be glad to support you through the whole process.
• Fedora Women's Day (FWD) event proposals need to be submitted to the fedora-womens-day repository by Friday, 23 August, 2019.
• You can request a budget for your event, which will be reimbursed after you write an event report.

    Spread the word:

Start early! Spread the word before and after the event. Publicise your event both locally and globally to gather as many participants as you can and maximize the impact of the effort you put in. You can invite fellow Fedora contributors who are based in your area to collaborate with you. It is important that you estimate your audience well in advance, so that you can plan and ask for a suitable budget. After the event, give others an idea of the fun you had by taking some interesting group pictures and writing a detailed event report. If you know someone or a tech group who might be interested in organizing a Fedora Women's Day event, feel free to involve them.

    Increase your chances:

Finally, to increase your chances of getting an acceptance, read our internal goals closely. Start planning early to give yourself enough time to prepare, connect with other hackerspaces and tech communities, and enhance your proposal by involving a bigger group. Understand the needs of your audience and make a personalized proposal that fits them best. Identify the resources that you might require to conduct a Fedora Women's Day event and let us know if you need any help.

    The post Call for Fedora Women’s Day 2019 proposals appeared first on Fedora Community Blog.

    Highest used Python code in the Pentesting/Security world

    Posted by Kushal Das on July 09, 2019 05:34 AM
    python -c 'import pty;pty.spawn("/bin/bash")'
    

I think this is the most used Python one-liner in the land of pentesting/security. Almost every blog post or tutorial I read talks about the above-mentioned line as the way to get a proper terminal after getting access to a minimal shell on a remote Linux server.

    What does this code do?

We are calling the Python executable with -c and Python statements inside the quotes. -c executes the Python statements, and as we are running in non-interactive mode, Python parses the entire input before executing it.

    The code we pass as the argument of the -c has two statements.

    import pty
    pty.spawn("/bin/bash")
    

pty is a Python module that defines operations related to the pseudo-terminal concept: it can create another process and, from the controlling terminal, read from and write to the new process.

    The pty.spawn function spawns a new process (/bin/bash in this case) and then connects IO of the new process to the parent/controlling process.

    demo of getting bash

In most cases, even though you get access to bash using the way mentioned above, TAB completion still does not work. To enable it, press Ctrl+z to put the process to sleep, and then use the following command in your terminal.

    stty raw -echo
    

stty changes terminal line settings and is part of the GNU coreutils package. To read about the options we set with raw -echo, see the stty man page.

Many years ago, I watched a documentary about security firms showcasing offensive attacks; that was the first time I saw Python scripts being used to send in payloads and exploit remote systems. Now I am using similar scripts in the lab to learn and to have fun with Python. It is a new world for me, but it also shows the diverse world we serve via Python.

    Fedora 30 : Using the python-wikitcms.

    Posted by mythcat on July 08, 2019 04:35 PM
The Python module named python-wikitcms can be used for interacting with the Fedora wiki, which hosts Fedora's Wikitcms (the wiki-based test management system).
Today I tested it, and it works great with Fedora 30.
First, install the Fedora package with the DNF tool:
    [root@desk mythcat]# dnf install python3-wikitcms.noarch
    ...
    Downloading Packages:
    (1/8): python3-mwclient-0.9.3-3.fc30.noarch.rpm 186 kB/s | 61 kB 00:00
    (2/8): python3-fedfind-4.2.5-1.fc30.noarch.rpm 314 kB/s | 105 kB 00:00
    (3/8): python3-cached_property-1.5.1-3.fc30.noa 41 kB/s | 20 kB 00:00
    (4/8): python3-requests-oauthlib-1.0.0-1.fc29.n 313 kB/s | 40 kB 00:00
    (5/8): python3-jwt-1.7.1-2.fc30.noarch.rpm 112 kB/s | 42 kB 00:00
    (6/8): python3-oauthlib-2.1.0-1.fc29.noarch.rpm 293 kB/s | 153 kB 00:00
    (7/8): python3-simplejson-3.16.0-2.fc30.x86_64. 641 kB/s | 278 kB 00:00
    (8/8): python3-wikitcms-2.4.2-2.fc30.noarch.rpm 264 kB/s | 84 kB 00:00
    I used this simple example to get information about the Fedora wiki:
    [mythcat@desk ~]$ python3
    Python 3.7.3 (default, May 11 2019, 00:38:04)
    [GCC 9.1.1 20190503 (Red Hat 9.1.1-1)] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    >>> from wikitcms.wiki import Wiki
    >>> my_site = Wiki()
    >>> event = my_site.current_event
    >>> print(event.version)
    31 Rawhide 20190704.n.1
    >>> page = my_site.get_validation_page('Installation','23','Final','RC10')
    >>> for row in page.get_resultrows():
    ... print(row.testcase)
    ...
    QA:Testcase_Mediakit_Checksums
    QA:Testcase_Mediakit_ISO_Size
    QA:Testcase_Mediakit_Repoclosure
    QA:Testcase_Mediakit_FileConflicts
    QA:Testcase_Boot_default_install
    ...
    >>> dir(my_site)
I used this code to log in with my account:
    >>> my_site.login()
A webpage opens to grant access to the account and shows this info:
    The OpenID Connect client Wiki Test Control Management System is asking to authorize access for mythcat. this allow you to access it 
After I agree with this, the page tells me to close it:
    You can close this window and return to the CLI
    The next examples show you how to get and show information from the wiki:
    >>> print(my_site.username)
    Mythcat
    >>> result = my_site.api('query', titles='Mythcat')
    >>> for page in result['query']['pages'].values():
    ... print(page['title'])
    ...
    Mythcat
    >>> for my_contributions in my_site.usercontributions('Mythcat'):
    ... print(my_contributions)
    ...
This Python module comes with very little documentation.

    The state of open source GPU drivers on Arm in 2019

    Posted by Peter Robinson on July 08, 2019 03:27 PM

    I first blogged about the state of open source drivers for Arm GPUs 7 years ago, in January 2012, and then again in September 2017. I’ve had a few requests since then to provide an update but I’ve not bothered because there’s really been no real change in the last few years, that is until now!

    The good news

So the big positive change is that there are two new open drivers on the scene: the panfrost and lima drivers. Panfrost is a reverse-engineered driver for the newer Midgard and Bifrost series of Mali GPUs designed/licensed by Arm, whereas Lima is aimed at the older Utgard series Mali 4xx devices. Panfrost, started by Alyssa Rosenzweig and now with quite a large contributor base, has been coming along in leaps and bounds over the last few months, and by the time Mesa 19.2 is out I suspect it should be able to run gnome-shell on an initial set of devices. I'm less certain about the state of Lima. The drivers landed in the kernel in the 5.2 development cycle, which Linus just released. On the userspace side they landed in the Mesa 19.1 development cycle, but they're improving greatly in the 19.2 cycle. Of course they're all enabled in Fedora Rawhide, although I don't expect them to be really testable until later in the 19.2 cycle; still, it makes it easy for early adopters who know what they're doing to start to play.

A decent open source driver for the Mali GPUs from Arm had been the biggest remaining hold-out in the Arm ecosystem we've been waiting for, and it covers a lot of the cheaper end of the SBC market: many Allwinner and some Rockchip SoCs have the Mali 4xx series of hardware, which will use the Lima driver, while other lower to midrange hardware ships with the newer Mali Midgard GPUs, as in the Rockchip 3399 SoC.

    Other general updates

Since I last wrote, the freedreno (Qualcomm Adreno) and etnaviv (Vivante GCxxx series) drivers have continued to improve and add support for newer hardware. The vc4 open driver for the Raspberry Pi 0-3 generations has seen gradual improvement over time, and there's a new open v3d driver for the Raspberry Pi 4, which it uses from the outset.

The last driver, one that seems to have transitioned into limbo, is the driver for the Nvidia Tegra Arm platform. While it has an open driver for the display controller, and the GPU mostly works with the nouveau driver, at least on the 32-bit Tegra K1 (the upstream state of the Tegra X series is definitely for another post), they appear to have yet another driver: not their closed x86 driver, but another one (not the latest revision, which is 4.9 based, but the only linkable version I could find) which is needed to do anything fun from a CUDA/AI/ML point of view. I wonder how it will fit with their commitment to support Arm64 for their HPC stack, or will that only be interesting to them for PCIe/fabric-attached discrete cards for HPC supercomputer deals?

That brings me to OpenCL and Vulkan for all the drivers above. For the vast majority of the open drivers, support for either is basically non-existent or in the very early stages of development, so for the time being I'm going to leave that for another follow-up in this long-winded series, probably when there's something of note to report. The other thing that is looking quite good, but is one for another post, is video acceleration offload; there's been quite a bit of recent movement there too.

    Contributor profile: Aneta Petkova

    Posted by Kiwi TCMS on July 08, 2019 02:53 PM

    Happy Monday, testers! In this series we are introducing the contributors behind Kiwi TCMS. This is our community and these are their stories.

    Aneta Petkova - QA Chapter Lead at SumUp

    Aneta is a software engineer navigating the complex field of QA since her first "grownup" job. She's been working in the area of test automation for web applications using different programming languages and tools. Her mission is to inspire people to think about quality from the very inception of ideas and to blur the line between developers and QA specialists.

    What is your professional background

I have an engineering degree in computer science and I've spent the last 8 years in Quality Assurance. Java, TestNG and UI automation with Selenium WebDriver are my strongest technical skills, but I use different programming languages and tools.

    I believe languages and tools should only support an engineer and never define them.

    Currently I am the QA Chapter Lead at SumUp, where I can work towards achieving my goals in an amazing team of people that do what they love.

    When did you use open source for the first time

    The first time I remember was in 2011, but I've probably used it before and just didn't pay attention. To me it seemed the same as proprietary, and I guess that means it was good.

    Describe your contributions to the project

I created kiwitcms-junit-plugin. This is a native Java library which you can install via Maven Central. It will discover your automated test suite and publish test execution results in Kiwi TCMS. This plugin is very simple and requires only minimal configuration before it is ready to work. Check out the example in TP-25!

    editor comment: Aneta and Ivo (Kiwi TCMS) hosted the "Git crash course" workshop at HackConf 2018. Kiwi TCMS will be hosting 2 workshops this year so stay tuned!

    Why did you decide to contribute to Kiwi TCMS

    I had recently switched Java for Ruby and I was feeling nostalgic. Also, I had spent my entire career so far in QA and I wanted to slip on the developer shoes for at least a little bit.

    Was there something which was hard for you during the contribution process

    I'm used to working in a team and when I started working on this project I was the only active Java developer. Luckily for me, I live in the time of StackOverflow, so I managed to get most of my questions answered by strangers on the Internet.

    I learned tons of stuff, but mostly I learned I can build software, not just test it!

    Which is the best part of contributing to Kiwi TCMS

    Doing something that has the potential to help others and that could be improved upon.

    What is next for you in professional and open source plan

    My current focus is moving slightly into DevOps direction and I am really overwhelmed by the amount of things to learn. I feel there is so much I want to experiment with. I am not really planning anything related to open source - it has never been a goal for me - but when I come across a project I feel strongly about, I'd probably be tempted to contribute.

    Thank you, Aneta! Happy testing!

    Arrival at CommCon 2019

    Posted by Daniel Pocock on July 08, 2019 10:24 AM

    Last night I arrived at CommCon 2019 in Latimer, Buckinghamshire, a stone's throw from where I used to live in St Albans, UK. For many of you it is just a mouseclick away thanks to online streaming.

    It is a residential conference with many of the leaders in the free and open source real-time communications and telephony ecosystem, together with many users and other people interested in promoting free, private and secure communications.

    On Wednesday I'll be giving a talk about packaging and how it relates to RTC projects, given my experience in this domain as a Fedora, Ubuntu and Debian Developer.

    David Duffet, author of Let the Geek Speak, gave the opening keynote, discussing the benefits and disadvantages of free, open source software in telecommunications. This slide caught my attention:

    where he talks about the burden of

    ruthless ungrateful expectations for continued service and evolution

    on developers and volunteers. This reminded me of some of the behaviour recently documented on my blog.

    CommCon organizers and sponsors, however, have found far more effective ways to motivate people: welcome gifts:

    There is some great wildlife too:

    Command line quick tips: Permissions

    Posted by Fedora Magazine on July 08, 2019 08:00 AM

    Fedora, like all Linux based systems, comes with a powerful set of security features. One of the basic features is permissions on files and folders. These permissions allow files and folders to be secured from unauthorized access. This article explains a bit about these permissions, and shows you how to share access to a folder using them.

    Permission basics

    Fedora is by nature a multi-user operating system. It also has groups, which users can be members of. But imagine for a moment a multi-user system with no concept of permissions. Different logged in users could read each other’s content at will. This isn’t very good for privacy or security, as you can imagine.

    Any file or folder on Fedora has three sets of permissions assigned. The first set is for the user who owns the file or folder. The second is for the group that owns it. The third set is for everyone else who’s not the user who owns the file, or in the group that owns the file. Sometimes this is called the world.

    What permissions mean

    Each set of permissions comes in three flavors — read, write, and execute. Each of these has an initial that stands for the permission, thus r, w, and x.

    File permissions

    For files, here’s what these permissions mean:

    • Read (r): the file content can be read
    • Write (w): the file content can be changed
    • Execute (x): the file can be executed — this is used primarily for programs or scripts that are meant to be run directly

    You can see the three sets of these permissions when you do a long listing of any file. Try this with the /etc/services file on your system:

    $ ls -l /etc/services
    -rw-r--r--. 1 root root 692241 Apr 9 03:47 /etc/services

    Notice the groups of permissions at the left side of the listing. These are provided in three sets, as mentioned above — for the user who owns the file, for the group that owns the file, and for everyone else. The user owner is root and the group owner is the root group. The user owner has read and write access to the file. Anyone in the group root can only read the file. And finally, anyone else can also only read the file. (The dash at the far left shows this is a regular file.)

    By the way, you’ll commonly find this set of permissions on many (but not all) system configuration files. They are only meant to be changed by the system administrator, not regular users. Often regular users need to read the content as well.

    Folder (directory) permissions

    For folders, the permissions have slightly different meaning:

    • Read (r): the folder contents can be read (such as the ls command)
    • Write (w): the folder contents can be changed (files can be created or erased in this folder)
    • Execute (x): the folder can be searched, although its contents cannot be read. (This may sound strange, but the explanation requires more complex details of file systems outside the scope of this article. So just roll with it for now.)

    Take a look at the /etc/grub.d folder for example:

    $ ls -ld /etc/grub.d
    drwx------. 2 root root 4096 May 23 16:28 /etc/grub.d

    Note the d at the far left. It shows this is a directory, or folder. The permissions show the user owner (root) can read, change, and cd into this folder. However, no one else can do so — whether they’re a member of the root group or not. Notice you can’t cd into the folder, either:

    $ cd /etc/grub.d
    bash: cd: /etc/grub.d: Permission denied

Notice how your own home directory is set up:

    $ ls -ld $HOME
    drwx------. 221 paul paul 28672 Jul 3 14:03 /home/paul

    Now, notice how no one, other than you as the owner, can access anything in this folder. This is intentional! You wouldn’t want others to be able to read your private content on a shared system.

    Making a shared folder

    You can exploit this permissions capability to easily make a folder to share within a group. Imagine you have a group called finance with several members who need to share documents. Because these are user documents, it’s a good idea to store them within the /home folder hierarchy.

    To get started, use sudo to make a folder for sharing, and set it to be owned by the finance group:

    $ sudo mkdir -p /home/shared/finance
    $ sudo chgrp finance /home/shared/finance

    By default the new folder has these permissions. Notice how it can be read or searched by anyone, even if they can’t create or erase files in it:

    drwxr-xr-x. 2 root finance 4096 Jul  6 15:35 finance

    That doesn’t seem like a good idea for financial data. Next, use the chmod command to change the mode (permissions) of the shared folder. Note the use of g to change the owning group’s permissions, and o to change other users’ permissions. Similarly, u would change the user owner’s permissions:

    $ sudo chmod g+w,o-rx /home/shared/finance

The resulting permissions look better. Now, anyone in the finance group (or the user owner root) has total access to the folder and its contents:

    drwxrwx---. 2 root finance 4096 Jul  6 15:35 finance

    If any other user tries to access the shared folder, they won’t be able to do so. Great! Now our finance group can put documents in a shared place.

    Other notes

    There are additional ways to manipulate these permissions. For example, you may want any files in this folder to be set as owned by the group finance. This requires additional settings not covered in this article, but stay tuned to the Magazine for more on that topic soon.

    Two new federated services for dgplug

    Posted by Kushal Das on July 08, 2019 04:22 AM

    Last week we started providing two new services for the dgplug members.

    Mastodon service at toots

Having our own instance had been in my head for some time. I had a personal Mastodon account before, but that instance went down and I never tried to find a new home. This time, I think that if a few of us (the sysadmins from the group) use this regularly ourselves, it will be much easier to maintain than depending on someone else.

    Any regular dgplug member can get an invite link for the instance by joining the IRC channel and asking for the same.

    Blogging platform

In our summer training, we spend a lot of time talking about communication; a significant part is focused on blogging. We suggest https://wordpress.com as a starting place for newcomers. At the same time, we found that some people had trouble, as they were more focused on themes and other options than on writing regularly.

I looked at https://write.as before, but as I saw that https://people.kernel.org is now running on WriteFreely, I thought of giving it a try. The UI is much more straightforward, and as it uses Markdown by default, that is a plus for our use case. Though most of this year's participants already have their own blogs, we don't have many people at the beginning, which helps, as there aren't too many support requests for us.

    Just like the Mastodon instance, if you need a home for your blogs, come over to our IRC channel #dgplug on Freenode server, and ask for an account.

Backup of the systems

In my mind, this is the biggest question in providing these services. We have set up a very initial backup system, and we will see in the coming weeks how it holds up. Maybe we will take down the services, try to restore everything from backup, and see how it goes.

By the way, if you want to follow me on Mastodon, I am available at https://toots.dgplug.org/@kushal

    Episode 153 - The unexpected security of AI, photographs, and VPN

    Posted by Open Source Security Podcast on July 08, 2019 12:00 AM
    Josh and Kurt talk about user expectations around Facebook's AI. Normal people are starting to see the capabilities and potential risk with all these services. We also cover the topic of China owning a number of VPN services.



    Show Notes


      Creating hardware where no hardware exists

      Posted by Matthew Garrett on July 07, 2019 07:46 PM
      The laptop industry was still in its infancy back in 1990, but it still faced a core problem that we do today - power and thermal management are hard, but also critical to a good user experience (and potentially to the lifespan of the hardware). This is in the days where DOS and Windows had no memory protection, so handling these problems at the OS level would have been an invitation for someone to overwrite your management code and potentially kill your laptop. The safe option was pushing all of this out to an external management controller of some sort, but vendors in the 90s were the same as vendors now and would do basically anything to avoid having to drop an extra chip on the board. Thankfully(?), Intel had a solution.

      The 386SL was released in October 1990 as a low-powered mobile-optimised version of the 386. Critically, it included a feature that let vendors ensure that their power management code could run without OS interference. A small window of RAM was hidden behind the VGA memory[1] and the CPU configured so that various events would cause the CPU to stop executing the OS and jump to this protected region. It could then do whatever power or thermal management tasks were necessary and return control to the OS, which would be none the wiser. Intel called this System Management Mode, and we've never really recovered.

      Step forward to the late 90s. USB is now a thing, but even the operating systems that support USB usually don't in their installers (and plenty of operating systems still didn't have USB drivers). The industry needed a transition path, and System Management Mode was there for them. By configuring the chipset to generate a System Management Interrupt (or SMI) whenever the OS tried to access the PS/2 keyboard controller, the CPU could then trap into some SMM code that knew how to talk to USB, figure out what was going on with the USB keyboard, fake up the results and pass them back to the OS. As far as the OS was concerned, it was talking to a normal keyboard controller - but in reality, the "hardware" it was talking to was entirely implemented in software on the CPU.

      Since then we've seen even more stuff get crammed into SMM, which is annoying because in general it's much harder for an OS to do interesting things with hardware if the CPU occasionally stops in order to run invisible code to touch hardware resources you were planning on using, and that's even ignoring the fact that operating systems in general don't really appreciate the entire world stopping and then restarting some time later without any notification. So, overall, SMM is a pain for OS vendors.

      Change of topic. When Apple moved to x86 CPUs in the mid 2000s, they faced a problem. Their hardware was basically now just a PC, and that meant people were going to try to run their OS on random PC hardware. For various reasons this was unappealing, and so Apple took advantage of the one significant difference between their platforms and generic PCs. x86 Macs have a component called the System Management Controller that (ironically) seems to do a bunch of the stuff that the 386SL was designed to do on the CPU. It runs the fans, it reports hardware information, it controls the keyboard backlight, it does all kinds of things. So Apple embedded a string in the SMC, and the OS tries to read it on boot. If it fails, so does boot[2]. Qemu has a driver that emulates enough of the SMC that you can provide that string on the command line and boot OS X in qemu, something that's documented further here.

      What does this have to do with SMM? It turns out that you can configure x86 chipsets to trap into SMM on arbitrary IO port ranges, and older Macs had SMCs in IO port space[3]. After some fighting with Intel documentation[4] I had Coreboot's SMI handler responding to writes to an arbitrary IO port range. With some more fighting I was able to fake up responses to reads as well. And then I took qemu's SMC emulation driver and merged it into Coreboot's SMM code. Now, accesses to the IO port range that the SMC occupies on real hardware generate SMIs, trap into SMM on the CPU, run the emulation code, handle writes, fake up responses to reads and return control to the OS. From the OS's perspective, this is entirely invisible[5]. We've created hardware where none existed.

      The tree where I'm working on this is here, and I'll see if it's possible to clean this up in a reasonable way to get it merged into mainline Coreboot. Note that this only handles the SMC - actually booting OS X involves a lot more, but that's something for another time.

      [1] If the OS attempts to access this range, the chipset directs it to the video card instead of to actual RAM.
      [2] It's actually more complicated than that - see here for more.
      [3] IO port space is a weird x86 feature where there's an entire separate IO bus that isn't part of the memory map and which requires different instructions to access. It's low performance but also extremely simple, so hardware that has no performance requirements is often implemented using it.
      [4] Some current Intel hardware has two sets of registers defined for setting up which IO ports should trap into SMM. I can't find anything that documents what the relationship between them is, but if you program the obvious ones nothing happens and if you program the ones that are hidden in the section about LPC decoding ranges things suddenly start working.
      [5] Eh technically a sufficiently enthusiastic OS could notice that the time it took for the access to occur didn't match what it should on real hardware, or could look at the CPU's count of the number of SMIs that have occurred and correlate that with accesses, but good enough

      OpenShift's haproxy as IPv6 ingress

      Posted by Tomasz Torcz on July 07, 2019 07:22 PM

      Kubernetes networking always struck me as a PoC mistakenly put into production. Among its questionable design choices, Internet Protocol version 6 is hardly supported. But when using OpenShift's default router – haproxy – you can easily handle IPv6 at your ingress.

      With recent versions it's even easier than before. I started by following the Customizing HAProxy Router Guide, but after extracting the config template I discovered that all the groundwork had already been done, for v3.7 and up.

      OpenShift's HAProxy container reacts to ROUTER_IP_V4_V6_MODE environment variable. When set to v4v6, the router will gladly accept connections in both IP versions, legacy and v6.

      You should run oc -n default edit dc/router and, in the long block of envvars, add this:

      - name: ROUTER_IP_V4_V6_MODE
        value: "v4v6"
      

      While editing this, you may want to add ROUTER_ENABLE_HTTP2=true, too.
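
      If you do want HTTP/2 as well, that variable goes in the same env block. Assuming the same dc/router deployment config as above, the entry would look like:

      - name: ROUTER_ENABLE_HTTP2
        value: "true"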

      Fun with the ODRS, part 2

      Posted by Richard Hughes on July 05, 2019 07:58 PM

      For the last few days I’ve been working on the ODRS, the review server used by GNOME Software and other open source software centers. I had to do a lot of work initially to get the codebase up to modern standards, but now it has unit tests (86% coverage!), full CI and is using the latest versions of everything. All this refactoring allowed me to add some extra new features we’ve needed for a while.

      The first feature changes how we do moderation. The way the ODRS works means that any unauthenticated user can mark a review for moderation for any reason in just one click. This means that it’s no longer shown to any other user and requires a moderator to perform one of three actions:

      • Decide it’s okay, and clear the reported counter back to zero
      • Decide it’s not very good, and either modify it or delete it
      • Decide it’s spam or in any way hateful, and delete all the reviews from the submitter, adding them to the user blocklist

      For the last few years it's been mostly me deciding on the ~3k marked-for-moderation reviews with the help of Google Translate. Let me tell you, after all that my threshold for dealing with internet trolls is super low. There are already over 60 blocked users on the ODRS, although they'll never really know they are shouting into /dev/null.

      One change I’ve made here is that it now takes two “reports” of a review before it needs moderation; the logic being that a lot of reports seem accidental and a really bad review is already normally reported by multiple people in the few days after it’s been posted. The other change is that we now have a locale-specific “bad word list” that submitted reviews are checked against at submission time. If a review is flagged, the moderator has to decide on the action before it’s ever shown to other users. This has already correctly flagged 5 reviews in the couple of days since it was deployed. If you contributed to the spreadsheet with “bad words” for your country I’m very grateful. That bad word list will be available as a JSON dump on the ODRS on Monday in case it’s useful to other people. I fully expect it’ll grow and change over time.

      The other big change is dealing with different application IDs. Over the last decade some applications have moved from “launchable-style” inkscape.desktop IDs to AppStream-style IDs like org.inkscape.Inkscape.desktop, and they are even reported in different forms, e.g. the Flathub-inspired org.inkscape.Inkscape and the Snappy io.snapcraft.inkscape-tIrcA87dMWthuDORCCRU0VpidK5SBVOc. Until today a review submitted against the old desktop ID wouldn’t match the Flatpak one; now it does. The same happens when we get the star ratings, which means that apps that change ID don’t start with a clean slate but instead inherit all the positivity of the old version. Of course, the usual per-request ordering and filtering is done, so older versions than the one requested might be shown lower than newer versions anyway.

      This is also your monthly reminder to use <provides><id>oldname.desktop</id></provides> in your metainfo.xml file if you change your desktop ID. That includes you, Flathub and Snapcraft maintainers, too. If you do that client side then you at least probably get the right reviews if the software center does the right thing, but doing it server side as well makes really sure you’re getting the reviews and ratings you want in all cases.
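
      As a rough sketch using the Inkscape IDs mentioned above (the exact form of the new ID and component type depends on how the app is shipped), the relevant part of the metainfo would carry something like:

      <component type="desktop-application">
        <id>org.inkscape.Inkscape</id>
        <!-- keep reviews and ratings submitted against the old ID -->
        <provides>
          <id>inkscape.desktop</id>
        </provides>
      </component>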

      If all this sounds interesting and you’d like to know more about the ODRS development, or would like to be a moderator for your language, please join the mailing list and I’ll post there next week when I’ve made the moderator experience nicer than it is now. It’ll also be the place to ask for help and guidance, and to request new features.

      FPgM report: 2019-27

      Posted by Fedora Community Blog on July 05, 2019 07:10 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora Program Management this week.

      I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

      Announcements

      Upcoming meetings

      Fedora 31

      Schedule

      • 2019-07-23 — Self-contained changes due
      • 2019-07-24–08-13 — Mass rebuild
      • 2019-08-13 — Code complete (testable) deadline
      • 2019-08-13 — Fedora 31 branch point

      Changes

      Announced

      Submitted to FESCo

      Approved by FESCo

      Deferred to Fedora 32

      The post FPgM report: 2019-27 appeared first on Fedora Community Blog.