Fedora People

Revert to FAS2

Posted by Fedora Infrastructure Status on July 16, 2021 07:00 PM

We are reverting back to FAS2, because Noggin is too easy to use and we are getting too many contributors.

During this outage, all Fedora Infrastructure services may be impacted.

Fedora Containers Lab

Posted by Alex Callejas on May 10, 2021 11:27 PM

On February 15, 2019, Sr. Kraken, bt0dotninja and his napkin, and I agreed in the Fedora Mexico chat to start the group's meetings, with the goal of sharing knowledge. I even wrote a post about that experience.

[Photo: 1st Fedora Mexico Community Meeting @ Red Hat Mexico]

We had thought of the topic as: How to have a container test environment in 5 minutes. We had recently had a Fedora Classroom with Dan Walsh where he explained the use of podman, an alternative to Docker; combined with my experience creating test environments, it evolved into my presentation: Fedora Containers Lab.

I gave this presentation several times, in different venues and with different communities, most recently as a workshop for the Tecnológico Nacional de México Campus Iztapalapa III, on Friday, May 7, 2021:

[Video: Watch on YouTube – https://www.youtube.com/embed/lMYNjme9nqM]

The different versions of the slides can be downloaded here:

On Saturday, April 24, 2021, during the FLISoL organized by Rancho Electrónico, I had the opportunity to present the long-promised second part of that workshop: Fedora Containers Lab 2. Starting from the last example of part 1, it goes from creating the YAML file of a pod in order to load it into Kubernetes, through podman-compose, up to a podman-in-podman proof of concept. I also presented this workshop at the recent Fedora Mexico virtual meeting:

[Video: Watch on YouTube – https://www.youtube.com/embed/2bJvSCBUwgk]

The slides for the two different versions can be downloaded here:

I hope you find it useful…


Using Element as an IRC client

Posted by Ben Cotton on May 10, 2021 11:35 AM

Like many who work in open source communities, I find IRC a key part of my daily life. Its simplicity has made it a mainstay. But the lack of richness also makes it unattractive to many newcomers. As a result, newer chat protocols are gaining traction. Matrix is one of those. I first created a Matrix account to participate in the Fedora Social Hour. But since Matrix.org is bridged to Freenode, I thought I’d give Element (a popular Matrix client) a try as an IRC client, too.

I’ve been using Element almost exclusively for the last few months. Here’s what I think of it.

Pros

The biggest pro for me is also the most surprising. I like getting IRC notifications on my phone. Despite being bad at it (as you may have read last week), I’m a big fan of putting work aside when I’m done with work. But I’m also an anxious person who constantly worries about what’s going on when I’m not around. It’s not that I think the place will fall apart because I’m not there. I just worry that it happens to be falling apart when I’m not there.

Getting mobile notifications means I can look, see that everything is fine (or at least not on fire enough that I need to jump in and help), and then go back to what I’m doing. But it also means I can engage with conversations if I choose to without having to sit at my computer all day. As someone who has previously had to learn and re-learn not to have work email alerts on the phone, I’m surprised at my reaction to having chat notifications on my phone.

Speaking of notifications, I like the ability to set per-room notification settings. I can set different levels of notification for each channel and those settings reflect across all devices. This isn’t unique to Element, but it’s a nice feature nonetheless. In fact, I wish it were even richer. Ideally, I’d like to have my mobile notifications be more restrictive than my desktop notifications. Some channels I want to see notifications for when I’m at my desk, but don’t care enough to see them when I’m away.

I also really like the fact that I can have one fewer app open. Generally, I have Element, Signal, Slack, and Telegram, plus Google Chat all active. Not running a standalone IRC client saves a little bit of system resources and also lets me find the thing that dinged at me a little quicker.

Cons

By far the biggest drawback, and the reason I still use Konversation sometimes, is the mishandling of multi-line copy/paste. Element sends it as a single multi-line message, which appears on the IRC side as “bcotton has sent a long message: <url>”. When running an IRC meeting, I often have reason to paste several lines at once. I’d like them to be sent as individual lines so that IRC clients (and particularly our MeetBot implementation), see them.

The Matrix<->IRC bridge is also laggy sometimes. Every so often, something gets stuck and messages don’t go through for up to a few minutes. This is not how instant messaging is supposed to work and is particularly troublesome in meetings.

Overall

Generally, using Element for IRC has been a net positive. I’m looking forward to more of the chats I use becoming Matrix-native so I don’t have to worry about the IRC side as much. I’d also like the few chats I have on Facebook Messenger and Slack to move to Matrix. But that’s not a windmill I’m willing to tilt at for now. In the meantime, I’ll keep using Element for most of my IRC needs, but I’m not quite ready to uninstall Konversation.

The post Using Element as an IRC client appeared first on Blog Fiasco.

Next Open NeuroFedora meeting: 10 May 1300 UTC

Posted by The NeuroFedora Blog on May 10, 2021 10:21 AM

Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 10 May at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 today'

The meeting will be chaired by @gicmo. The agenda for the meeting is:

We hope to see you there!

Fedora Has Too Many Security Bugs 2

Posted by Robbie Harwood on May 10, 2021 04:00 AM

A year (and change) later, this is a followup to my previous post on how Fedora has too many security bugs. The code and methodology I'm using are unchanged from that post - this is just new numbers and some thoughts on the delta.
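(For anyone curious about reproducing such a count, here is a rough sketch of one possible query using the python-bugzilla library. The product, status, and summary filters below are my own guesses at a reasonable approximation, not necessarily the exact methodology from the original post.)

#!/usr/bin/python3
# Rough sketch: count open CVE bugs against Fedora in Red Hat Bugzilla.
# The filters below are an approximation, not the author's exact script.
from collections import Counter
import bugzilla   # python3-bugzilla

bzapi = bugzilla.Bugzilla("bugzilla.redhat.com")
query = bzapi.build_query(
    product="Fedora",   # "Fedora EPEL" can be queried the same way
    status=["NEW", "ASSIGNED", "POST", "MODIFIED", "ON_QA"],
    short_desc="CVE-",
)
bugs = bzapi.query(query)
print("open CVE bugs:", len(bugs))

# Year breakdown, keyed on the CVE year in the summary (e.g. "CVE-2020-1234").
years = Counter()
for bug in bugs:
    cves = [w for w in bug.summary.split() if w.startswith("CVE-")]
    if cves:
        years[cves[0].split("-")[1]] += 1
for year, count in sorted(years.items()):
    print(year, count)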

Right now, there are 2,089 open CVE bugs against Fedora. This is a decrease of 247 from last year - so that's good news. My gratitude goes out to the maintainers who have been reducing their backlogs.

Year breakdown:

2009: 1
2010: 0
2011: 2
2012: 9
2013: 5
2014: 5
2015: 17
2016: 72
2017: 242
2018: 367
2019: 260
2020: 701
2021: 408

With the exception of the 2009 bug (opened on 2021-03-27), the tail shrank by one year. The per-year deltas are:

2009: +1
2010: -4
2011: -9
2012: -8
2013: -13
2014: -25
2015: -57
2016: -123
2017: -183
2018: -315
2019: -485
2020: +674
2021: N/A

(The 2020 change is of course expected, since 2020 mostly hadn't happened yet at the time the last post was written, and there's often lag between number assignment and disclosure of CVEs.)

The EPEL/non-EPEL distribution no longer skews toward EPEL: there are now 958 EPEL bugs, compared to 1,131 non-EPEL. However, given the relative size of the two package sets, this is still a surprisingly high number of EPEL issues. The deltas are EPEL: -308, non-EPEL: +61.

For ecosystems, right now I see:

mingw: 140 (-319)
python: 109 (+28)
nodejs: 99 (+27)
rubygem: 27 (-5)
php: 19 (-4)

An interesting datum here is that mingw's reduction is larger than the total Fedora reduction. In other words, if mingw had been CVE-neutral, Fedora's total count would have increased by 72. So in a sense, mingw improved, while the rest of Fedora got worse.

Finally, while the documentation links have been fixed, there has been no change to Fedora policy around security handling, nor is there a functioning Security Team right now. Making a Security Team happen would be a Herculean task, so I have nothing but appreciation for the folks who have worked on it. However, it does suggest that our incentive structure is wrong.

Episode 270 – Hello dark patterns my old friend

Posted by Josh Bressers on May 10, 2021 12:01 AM

Josh and Kurt talk about dark patterns. A dark pattern is when a service tries to confuse a user into doing something they don’t want to, like unknowingly purchasing a monthly subscription to something they don’t need or want. The US Federal Trade Commission is starting to discuss dark patterns in web sites and apps.

[Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_270_Hello_dark_patterns_my_old_friend.mp3]

Show Notes

Changing hidden/locked BIOS settings under Linux

Posted by Hans de Goede on May 09, 2021 01:37 PM
This all started with a Mele PCG09. Before testing Linux on it I took a quick look under Windows, and the device-manager there showed an exclamation mark next to a Realtek 8723BS bluetooth device, so BT did not work. Under Linux I quickly found out why: the device actually uses a Broadcom Wifi/BT chipset attached over SDIO/an UART for the Wifi resp. BT parts. The UART-connected BT part was described in the ACPI tables with a HID (Hardware-ID) of "OBDA8723", not good.

Now I could have easily fixed this with an extra initrd with a DSDT-override, but that did not feel right. There was an option in the BIOS, named "WIFI", which actually controls what HID gets advertised for the Wifi/BT. It was set to "RTL8723", which is obviously wrong, but that option was grayed out. So instead of going for the DSDT-override I really wanted to be able to change that BIOS option and set it to the right value. Some duckduckgo-ing found this blogpost on changing locked BIOS settings.

The flashrom packaged in Fedora dumped the BIOS in one go, and after building UEFITool and ifrextract from source from their git repos I could extract the interface description for the BIOS Setup menus without issues (as described in the blogpost). Here is the interesting part of the IFR for changing the Wifi/BT model:


0xC521 One Of: WIFI, VarStoreInfo (VarOffset/VarName): 0x110, VarStore: 0x1, QuestionId: 0x1AB, Size: 1, Min: 0x0, Max 0x2, Step: 0x0 {05 91 53 03 54 03 AB 01 01 00 10 01 10 10 00 02 00}
0xC532 One Of Option: RTL8723, Value (8 bit): 0x1 (default) {09 07 55 03 10 00 01}
0xC539 One Of Option: AP6330, Value (8 bit): 0x2 {09 07 56 03 00 00 02}
0xC540 One Of Option: Disabled, Value (8 bit): 0x0 {09 07 01 04 00 00 00}
0xC547 End One Of {29 02}



So to fix the broken BT I need to change the byte at offset 0x110 in the "Setup" EFI variable, which contains the BIOS settings, from 0x01 to 0x02. Easy. One problem though: the "dd on /sys/firmware/efi/efivars/Setup-..." method described in the blogpost does not work on most devices. Most devices protect the BIOS settings from being modified this way by having 2 Setup-${GUID} EFI variables (with different GUIDs), hiding the real one and leaving a fake one which is only a couple of bytes large.

But the BIOS Setup menu itself is just another EFI executable, so how can it access the real Setup variable? The trick is that the hiding happens when the OS calls ExitBootServices to tell EFI it is ready to take over control of the machine. This means that under Linux the real Setup EFI variable has been hidden early during boot, but while grub is running it is still available! And there is a patch adding a new setup_var command to grub, which allows changing BIOS settings from within grub.

The original setup_var command picks the first Setup EFI variable it finds but, as mentioned, in most cases there are 2, so an improved setup_var_3 command was later added which skips Setup EFI variables that are too small (the fake ones are only a few bytes). After building an EFI version of grub with the setup_var* commands added, it is just a matter of booting into a grub commandline and running "setup_var_3 0x110 2". From then on the BIOS shows the WIFI type as AP6330, the ACPI tables report "BCM2E67" as the HID for the BT, and just like that the bluetooth issue is fixed.


For your convenience I've uploaded a grubia32.efi and a grubx64.efi with the setup_var patches added here. These are built from this branch at this commit (this was just a random branch which I had checked out while working on this).

The Mele PCG09 use-case for modifying hidden BIOS settings is a bit of a corner case. Intel Bay- and Cherry-Trail SoCs come with an embedded OTG XHCI controller that allows them to function as a USB device/gadget rather than only being capable of operating as a USB host. Since most devices ship with Windows, and Windows does not really do anything useful with USB device controllers, this controller is disabled by most BIOSes and there is no visible option to enable it. The same approach as above can be used to enable the "USB OTG" option in the BIOS so that we can use it under Linux. Let's take the Teclast X89 (Windows version) tablet as an example. Extracting the IFR and then looking for the "USB OTG" function results in this IFR snippet:


0x9560 One Of: USB OTG Support, VarStoreInfo (VarOffset/VarName): 0xDA, VarStore: 0x1, QuestionId: 0xA5, Size: 1, Min: 0x0, Max 0x1, Step: 0x0 {05 91 DE 02 DF 02 A5 00 01 00 DA 00 10 10 00 01 00}
0x9571 Default: DefaultId: 0x0, Value (8 bit): 0x1 {5B 06 00 00 00 01}
0x9577 One Of Option: PCI mode, Value (8 bit): 0x1 {09 07 E0 02 00 00 01}
0x957E One Of Option: Disabled, Value (8 bit): 0x0 {09 07 3B 03 00 00 00}
0x9585 End One Of {29 02}



And then running "setup_var_3 0xda 1" on the grub commandline results in a new "00:16.0 USB controller: Intel Corporation Atom Processor Z36xxx/Z37xxx Series OTG USB Device" entry showing up in lspci.

Actually using this requires a kernel with UDC (USB Device Controller) support enabled, as well as some USB gadget drivers; at least the Fedora kernel does not have these enabled by default. On Bay Trail devices an external device-mode USB-PHY is necessary for device-mode to actually work. On a kernel with UDC enabled you can check if your hardware has such a phy by doing: "cat /sys/bus/ulpi/devices/dwc3.4.auto.ulpi/modalias". If there is a phy this will usually return "ulpi:v0451p1508"; if you get "ulpi:v0000p0000" instead then your hardware does not have a device-mode phy and you cannot use gadget mode.

On Cherry Trail devices the device-mode phy is built into the SoC, so on most Cherry Trail devices this just works. There is one caveat though: the x5-z83?0 Cherry Trail SoCs only have one set of USB3 superspeed data lines, and these are part of the USB data lines meant for the OTG port. So if you have a Cherry Trail device with an x5-z83?0 SoC and it has a superspeed (USB3) USB-A port, then that port is using the OTG superspeed lines. When the OTG XHCI controller is enabled and the micro-usb gets switched to device-mode (which it also does when charging!), this will also switch the superspeed data lines to device-mode, disconnecting any superspeed USB device connected to the USB-A port. So on these devices you need to choose: you can either use the micro-usb in device-mode, or get superspeed on the USB-A port, but not both at the same time.

If you have a kernel built with UDC support, a quick test is to run a USB-A to micro-B cable from a desktop or laptop to the tablet and then do "sudo modprobe g_serial" on the tablet. After this you should see a bunch of messages in dmesg on the desktop/laptop about a USB device showing up, ending with something like "cdc_acm 1-3:2.0: ttyACM0: USB ACM device". If you want, you can run a serial console on the tablet on /dev/ttyGS0 and then connect to it on the desktop/laptop at /dev/ttyACM0.

EOL Copr APIv1 and APIv2, pt.2

Posted by Copr on May 09, 2021 12:00 AM

Over the last year, we have been incrementally dropping support for Copr’s APIv1 and APIv2. We kindly ask you to migrate to APIv3. Some reasoning and our motivation for doing so can be found in the Copr has a brand new API blog post.

According to the deprecation schedule, we just took the following step:

  • April 2021 - print warning on STDERR when you use APIv2 from python3-copr and remove APIv1 from python3-copr library

That means that if you cannot migrate to APIv3 yet, you can still send requests to APIv1 endpoints directly, without using the python3-copr library, or you can temporarily freeze on python-copr-1.109 and copr-cli-1.93.

Please don’t put aside the migration of your code for much longer. In September the APIv1 is going to be dropped from the frontend and APIv2 from clients.
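If it helps with the migration, a minimal APIv3 call with the python3-copr client can look roughly like the sketch below. It assumes an API token in ~/.config/copr (downloadable from the Copr web UI) and uses an arbitrary example project; see the APIv3 client documentation for the authoritative interface.

#!/usr/bin/python3
# Minimal APIv3 sketch using the python3-copr client.
# Assumes ~/.config/copr contains valid API credentials.
from copr.v3 import Client

client = Client.create_from_config_file()

# Fetch a project and list its builds ("@copr"/"copr" is just an example).
project = client.project_proxy.get(ownername="@copr", projectname="copr")
print(project.full_name, project.description)

for build in client.build_proxy.get_list(ownername="@copr", projectname="copr"):
    print(build.id, build.state)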

kcp: Kubernetes Without Nodes and Why I Care

Posted by Devan Goodwin on May 08, 2021 12:28 PM
"Nodes? Where we're going, we don't need.... nodes. (what if kube was just the API that could talk to anything?) prototype and call to ideate at https://t.co/XCv3iCd0rI" — Clayton Coleman (@smarterclayton), May 5, 2021

Earlier this week Clayton Coleman presented Kubernetes as the Hybrid Cloud Control Plane as a keynote at KubeCon EU 2021, and revealed the kcp prototype. kcp is exploring re-use of the Kubernetes API at a higher level to orchestrate many different workloads and services across the hybrid cloud.

Letsencrypt certificate renewal: Nginx with reverse-proxy

Posted by Rajeesh K Nambiar on May 08, 2021 10:22 AM

Let’s Encrypt revolutionized SSL certificate management for websites in a short span of time — it directly improved the security of users of the world wide web by (1) making it very simple for administrators to deploy SSL certificates to websites and (2) making the certificates available free of cost. To appreciate their efforts, compare this with the hoops one had to jump through to obtain a certificate from a certificate authority (CA) and how much money and energy one would have to spend on it.

I make use of letsencrypt on all the servers I maintain(ed) and in the past used the certbot tool to obtain & renew certificates. Recent versions of certbot are only available as a snap package, which is not something I’d want to, or be able to, set up in many cases.

Enter acme.sh. It is a shell script that works great. Installing acme.sh also sets up a cron job, which automatically renews the certificate for the domain(s) near expiration. I recently set up dict.sayahna.org using nginx as a reverse proxy to a lexonomy service and acme.sh for certificate management. The cron job is supposed to renew the certificate on time.

Except it didn’t. A few days ago I received a notification about the imminent expiry of the certificate. I searched the interweb quite a bit, but didn’t find a simple enough solution (“make the proxy service redirect the request”…). What follows is the troubleshooting and a solution; maybe someone else will find it useful.

Problem

acme.sh was unable to renew the certificate because the HTTP-01 authentication challenge requests were not answered by the proxy server, to which all traffic was being redirected. In short: how do you renew letsencrypt certificates on an nginx reverse-proxy server?

A certificate renewal attempt by acme.sh would result in errors like:

# .acme.sh/acme.sh --cron --home "/root/.acme.sh" -w /var/www/html/
[Sat 08 May 2021 07:28:17 AM UTC] ===Starting cron===
[Sat 08 May 2021 07:28:17 AM UTC] Renew: 'my.domain.org'
[Sat 08 May 2021 07:28:18 AM UTC] Using CA: https://acme-v02.api.letsencrypt.org/directory
[Sat 08 May 2021 07:28:18 AM UTC] Single domain='my.domain.org'
[Sat 08 May 2021 07:28:18 AM UTC] Getting domain auth token for each domain
[Sat 08 May 2021 07:28:20 AM UTC] Getting webroot for domain='my.domain.org'
[Sat 08 May 2021 07:28:21 AM UTC] Verifying: my.domain.org
[Sat 08 May 2021 07:28:24 AM UTC] my.domain.org:Verify error:Invalid response from https://my.domain.org/.well-known/acme-challenge/Iyx9vzzPWv8iRrl3OkXjQkXTsnWwN49N5aTyFbweJiA [NNN.NNN.NNN.NNN]:
[Sat 08 May 2021 07:28:24 AM UTC] Please add '--debug' or '--log' to check more details.
[Sat 08 May 2021 07:28:24 AM UTC] See: https://github.com/acmesh-official/acme.sh/wiki/How-to-debug-acme.sh
[Sat 08 May 2021 07:28:25 AM UTC] Error renew my.domain.org.

Troubleshooting

The key error to notice is

Verify error:Invalid response from https://my.domain.org/.well-known/acme-challenge/Iyx9vzzPWv8iRrl3OkXjQkXTsnWwN49N5aTyFbweJiA [NNN.NNN.NNN.NNN]

Sure enough, the resource .well-known/acme-challenge/… is not accessible. Let us try to make it accessible without going through the proxy server.

Solution

First, create the directory if it doesn’t exist, assuming the web root is /var/www/html:

# mkdir -p /var/www/html/.well-known/acme-challenge

Then, edit /etc/nginx/sites-enabled/my.domain.org and, before the proxy_pass directive, add the .well-known/acme-challenge/ location and point it to the correct location in the web root. Do this in both the HTTPS and HTTP server blocks (it didn’t work for me otherwise).

server {
  listen 443 default_server ssl;
...
  server_name my.domain.org;
  location /.well-known/acme-challenge/ {
    root /var/www/html/;
  }

  location / {
    proxy_pass http://myproxyserver;
    proxy_redirect off;
  }
...
server {
  listen 80;
  listen [::]:80;

  server_name my.domain.org;

  location /.well-known/acme-challenge/ {
    root /var/www/html/;
  }

  # Redirect to HTTPS
  return 301 https://$server_name$request_uri;

Make sure the configuration is valid and reload the nginx configuration:

nginx -t && systemctl reload nginx.service

Now, try to renew the certificate again:

# .acme.sh/acme.sh --cron --home "/root/.acme.sh" -w /var/www/html/
...
[Sat 08 May 2021 07:45:01 AM UTC] Your cert is in  /root/.acme.sh/my.domain.org/my.domain.org.cer
[Sat 08 May 2021 07:45:01 AM UTC] Your cert key is in  /root/.acme.sh/my.domain.org/my.domain.org.key 
[Sat 08 May 2021 07:45:01 AM UTC] v2 chain.
[Sat 08 May 2021 07:45:01 AM UTC] The intermediate CA cert is in  /root/.acme.sh/my.domain.org/ca.cer 
[Sat 08 May 2021 07:45:01 AM UTC] And the full chain certs is there:  /root/.acme.sh/my.domain.org/fullchain.cer 
[Sat 08 May 2021 07:45:02 AM UTC] _on_issue_success

Success.

PHP version 7.4.19 and 8.0.6

Posted by Remi Collet on May 08, 2021 05:47 AM

RPMs of PHP version 8.0.6 are available in remi-php80 repository for Fedora 32-34 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.4.19 are available in the remi repository for Fedora 32-34 and in the remi-php74 repository for Enterprise Linux (RHEL, CentOS).

No security fix this month, so no update for version 7.3.28.

PHP version 7.2 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as module for Fedora 32-34 and EL-8.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.0 installation (simplest):

yum-config-manager --enable remi-php80
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

And soon in the official updates:

To be noticed:

  • EL-8 RPMs are built using RHEL-8.3
  • EL-7 RPMs are built using RHEL-7.9
  • EL-7 builds now use libicu65 (version 65.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.1
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php73 / php74 / php80)

Friday’s Fedora Facts: 2021-18

Posted by Fedora Community Blog on May 07, 2021 09:07 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! Fedora Linux 32 will reach end of life on Tuesday 25 May.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference         Location   Date      CfP
DevConf.US         virtual    2-3 Sep   closes 31 May
Nest With Fedora   virtual    5-8 Aug   closes 16 July

Help wanted

Upcoming test days

Prioritized Bugs

Upcoming meetings

Releases

Fedora Linux 34

Schedule

Upcoming key schedule milestones:

  • 2021-04-28 — Elections questionnaire process and nomination period begins.
  • 2021-05-12 — Elections nominations close.

Fedora Linux 35

Changes

Proposal                                               Type            Status
Debuginfod By Default                                  Self-Contained  Approved
Package information on ELF objects                     System-Wide     FESCo #2598
CompilerPolicy Change                                  System-Wide     FESCo #2603
Node.js 16.x by default                                System-Wide     FESCo #2605
More flexible use of SSSD fast cache for local users   System-Wide     Announced
Broken RPATH will fail rpmbuild                        System-Wide     Announced
Perl 5.34                                              System-Wide     Announced

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-18 appeared first on Fedora Community Blog.

Unmounting inside a container

Posted by Adam Young on May 07, 2021 05:50 PM

We do RPM things. Some of those RPM things need the /proc file system. Not forever, but for a short while. So we mount /proc, do something, and unmount. Which works fine.

Until we tried to do it in a container.

When we ran our umount code in the container, we got the following error:

umount: /meta/meta-rpm/build/tmp/work/qemux86_64-redhat-linux/rpmbased-minimal-image/1.0-r0/rootfs/.proc: block devices are not permitted on filesystem.

SELinux. Running in permissive mode worked fine. In /var/log/messages we have the following message:

May 5 15:22:41 zygarde setroubleshoot[1733179]: SELinux is preventing /meta/meta-rpm/build/tmp/work/qemux86_64-redhat-linux/rpmbased-minimal-image/1.0-r0/recipe-sysroot-native/usr/bin/umount from unmount access on the filesystem .#012#012 Plugin catchall (100. confidence) suggests *******#012#012If you believe that umount should be allowed unmount access on the filesystem by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c ‘umount’ –raw | audit2allow -M my-umount#012# semodule -X 300 -i my-umount.pp

It seems somewhat unbalanced that a container can mount a filesystem, but not unmount one.

As the messages said, we can get by it by building and running a custom policy:

ausearch -c 'umount' --raw | audit2allow -M my-umount
semodule -X 300 -i my-umount.pp

It’s overkill, but you can never have too much overkill.

Actually, yes you can. When your goal is to not violate security constraints, opening up a hole is counterproductive.

When we run a container, by default it gets labeled as container_runtime_t:

unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 ayoung 1736123 1609658  0 18:04 pts/1 00:00:00 grep --color=auto metarpm

To change the label, run with a different security context. For instance, running podman with --security-opt label=type:metarpm.process like this:

 podman run   -it   --security-opt label=type:metarpm.process  -v /dev/fuse:/dev/fuse         -v metadata:/meta localhost/metarpm  /usr/bin/bash

ends up with a process labeled system_u:system_r:metarpm_policy.process, like this:

system_u:system_r:metarpm_policy.process:s0:c738,c1010 100999 1736385 1736373  0 18:08 pts/0 00:00:00 /usr/bin/bash

So long as this matches policy, we can do new things. But if it doesn’t, we can’t do anything. And writing SELinux policy involves a non-trivial learning curve. How can we simplify?

Udica. Pronounced you-DI-tza. Or something like that.

Now…I’m going to cut out some of my trial and error with Udica. Following the instructions alone did not solve my problem, mainly because Udica does not yet understand how to deal with the umount or ioctl errors caused by the container. Here is what worked:

  • set SELinux in permissive mode, so the operations succeed, but the errors get caught in the audit log
  • as a non-root user,
    • start the container and run the bitbake function to generate the errors
    • run the podman ps command to get the container id
    • redirect the output to a file
    • stop the container
  • as root
    • re-enable SELinux
    • cat the output of the file generated above into udica, add the flag to read the audit file
    • load the generated policy

Here is what that looks like on the terminal. As root

setenforce permissive

As ayoung

$ podman run   -it      --security-opt seccomp=unconfined -v /dev/fuse:/dev/fuse         -v metadata:/meta localhost/metarpm  /usr/bin/bash
$ cd /meta/meta-rpm/
$ . ./oe-init-build-env
$ bitbake rpmbased-minimal-image

Leave that running, and go to another terminal as ayoung

$ podman ps
CONTAINER ID  IMAGE              COMMAND        CREATED             STATUS                 PORTS   NAMES
c153bee4a84d  localhost/metarpm  /usr/bin/bash  About a minute ago  Up About a minute ago          naughty_heyrovsky
$ podman inspect c153bee4a84d > container.json

Now as root

$ cat container.json | udica metarpm  --append-rules /var/log/audit/audit.log
$ setenforce enforcing
$ semodule -i metarpm.cil /usr/share/udica/templates/base_container.cil
$ semodule -l | grep meta
metarpm

As the ayoung user, rerun the container with the additional flag

$ podman run   -it    --security-opt label=type:metarpm.process   --security-opt seccomp=unconfined -v /dev/fuse:/dev/fuse         -v metadata:/meta localhost/metarpm  /usr/bin/bash


If you look in the process list, you can see the container running:

$ ps -efZ | grep meta
unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 ayoung 1977102 1609151  0 13:25 pts/0 00:00:00 podman run -it --security-opt label=type:meth
unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 ayoung 1977115 1977102  0 13:25 pts/0 00:00:00 podman run -it --security-opt label=type:meth
system_u:system_r:metarpm.process:s0:c492,c842 100999 1977157 1977145  0 13:25 pts/0 00:00:00 /usr/bin/bash
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 ayoung 1977217 1939598  0 13:26 pts/1 00:00:00 grep --color=auto meta

If I rerun the bitbake command now (after cleaning the output of the successful permissive run) it will succeed.

Here is the difference between the default output of udica and adding in the flag that pulls in the audit file:

# diff metarpm.cil.orig metarpm.cil
11a12,13
>     (allow process proc_t ( file ( ioctl ))) 
>     (allow process proc_t ( filesystem ( unmount ))) 

HDR in Linux: Part 1

Posted by Jeremy Cline on May 07, 2021 03:55 PM

In recent months I have been investigating high dynamic range (HDR) support for the Linux desktop, and what needs to be done so that a user could, for example, watch a high dynamic range video.

The problem with HDR is not so much that there is no material out there covering it, but that the huge amount of material can be rather confusing. I thought it best to add more material that is probably also confusing, but may help me think through HDR better. The following content is likely wrong as I have no background in colorimetry, the human visual system, or graphics generally. I’d love to hear what makes no sense.

In this post I’ll cover what HDR is and why we care about it. In the next post, I’ll cover the work in specific projects that has been done to support it and what is left to be done.

Background

Before attempting to understand dynamic range, I found it helpful to understand the basics of electromagnetic radiation (light) and how the human visual system (eyes) interacts with light so that we can produce graphics for human consumption. With an understanding of how to do that, we can complete our ultimate goal of tricking the human visual system into seeing extremely realistic images of cats even if there are no cats around.

Light

Light can be described as a wave with an amplitude and frequency/wavelength or a particle (photon) with a frequency/wavelength. Human eyes detect a very small frequency range of electromagnetic radiation. A larger number of particles is equivalent to a higher-amplitude wave. The range of wavelengths eyes typically see is approximately 380 nanometers to 750 nanometers.

The visible light spectrum

Humans detect wavelength of light as colors, with blue in the 450-490nm range, green in 525-560nm range, and red in the 630-700nm range.

More photons of a given wavelength are perceived by humans as a brighter light of the color associated with the wavelength.

Luminance

Luminance is the measure of the amount of light in the visible spectrum that passes through an area. The SI unit is candela per square meter (cd/m²). This is often also referred to as a “nit”, presumably because cd/m² is somewhat laborious to type out.

The human eye can detect a luminance range from about 0.000001 cd/m² to around 100,000,000 cd/m². We can readily go out into the world and experience this entire range, too. For example, the luminance of the sun’s disk at noon is around 1,600,000,000 cd/m². This is much higher than the human eye can safely experience; do not go stare at the sun.

One important detail is that the human perception of luminance is roughly logarithmic. You might already be familiar with this as decibels are used to describe sound levels. The fact that human sensitivity to luminance is non-linear becomes very important later when we need to compress data.

Human Visual System

The eye is composed of two general types of cells. The rod cells are very sensitive and can detect very low amplitude (brightness) light waves, but don’t differentiate between wavelengths (color) well.

The cone cells, by contrast, come in several flavors where each flavor is sensitive to a different wavelength. Many humans have three flavors, imaginatively referred to as S, M, and L. S detects “short” wavelengths (blue), M detects “medium” wavelengths (green), and L detects “long” wavelengths (red).

A diagram of the sensitivity of typical cone cells

Different colors are produced by stimulating these cone cells with different ratios of wavelengths.

One thing to keep in mind, as it adds a minor complication to calculating luminance, is that not all the cone cells are equally sensitive. If you were to take a blue light, a green light, and a red light that all emit the same number of photons, the green light would appear the brightest by far. The luminance, therefore, depends on the wavelength of the light.

While the human visual system is capable of detecting a very large range of luminance levels, it cannot do so all at once. If you’re outside on a sunny day and walk into a dark room, it takes some time for your eyes to adjust to the new luminance levels.

Color Spaces

When discussing displays and their capabilities you will hear about color spaces and perhaps see a diagram like this:

Chromaticity diagram by Paulschou at en.wikipedia, CC BY-SA 3.0 , via Wikimedia Commons

There’s a lot going on in this diagram and it is rather confusing (to me, anyway), so it’s worth covering the basics. There are some good blog posts with much more detail.

You’ll notice the outside curve of this color blob has numbers marked along it. These are light wavelengths, and the outer curve is the color we see for that pure wavelength. On the bottom edge there are no wavelengths. This is because those colors are what we see when the long cones and the short cones both sense light. All the colors not on the edge of the curve are created by blending together multiple wavelengths of light.

Often, you’ll hear people talk about the “temperature” of light. This is in reference to how as objects heat up they begin emitting light in the visible spectrum. The curve starting on the red side, passing through the center, and continuing toward blue shows the mapping of temperature to colors.

If we pick three points on this plane and form a triangle, we can create any color enclosed by the triangle by adding various ratios of those three colors together. These points have coordinates in this two-dimensional space, and we can describe the triangle by providing those coordinates. Using this approach, we can describe the colors a display can create by specifying three pairs of coordinates. This is the “color space”.
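As a small illustration of that idea, the following Python sketch checks whether a chromaticity point falls inside such a triangle. It uses the commonly published sRGB primaries as the three corners; the coordinates are standard reference values rather than anything specific to this post.

# Sketch: is a chromaticity point (x, y) inside the triangle spanned by three
# primaries? Uses the commonly published sRGB primaries as the example gamut.

def sign(p, a, b):
    # Which side of the line a->b does point p lie on (signed area test)?
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(point, red, green, blue):
    # True if 'point' lies inside (or on) the triangle red-green-blue.
    d1 = sign(point, red, green)
    d2 = sign(point, green, blue)
    d3 = sign(point, blue, red)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

srgb_r, srgb_g, srgb_b = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)
print(in_gamut((0.3127, 0.3290), srgb_r, srgb_g, srgb_b))  # D65 white point: True
print(in_gamut((0.10, 0.80), srgb_r, srgb_g, srgb_b))      # very saturated green: False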

If you’re using GNOME, you can see a diagram of your display’s color gamut by going to the “Color” settings section, selecting the color profile, and clicking view details:

GNOME Settings displaying a CIE 1931 chromaticity diagram

Note that these diagrams do not include the luminance. Doing so creates a third dimension and can be used to visualize a “color volume”.

Displays

There are several ways to display images for human consumption. One way is to reflect light off a surface that absorbs some frequencies of light and reflect the rest into the eye (paintings, books, e-ink, etc). The other way is to produce light to shine into the eye, such as the liquid crystal or organic LED screen you’re probably looking at right now.

There are a few challenges with producing the light directly. The first is that we need to produce a range of light frequencies to cover the visible spectrum of human vision. As humans usually have three types of frequency-sensing cells, we don’t necessarily need to produce every single frequency in the visible spectrum. Instead, we can pick three frequencies that align with where the cone cells are sensitive and by adjusting the ratio of these three produce light that contains all information the human eye uses and none of the information it cannot detect.

This is why a display pixel is typically made up of blueish, greenish, and reddish components. Since this isn’t a perfect world, displays don’t tend to be capable of producing a single frequency. The frequency range of these components impacts how the three overlapping cone sensitivities can be stimulated and therefore determines the range of colors the display is capable of producing. This range is referred to as the display’s “color gamut”. The color gamut can vary quite a bit from display to display, so we need to account for this when we encode images. We also need to handle cases where the display cannot produce the color we wish to show. We could not, for example, accurately display an image of a laser emitting pure 525nm light unless the green sub-pixel happened to also emit pure 525nm light. This is called “gamut mapping” as we map one color gamut to another color gamut.

The second issue is that if we want to perfectly replicate the real world, we would need to be able to emit light in the range of 0.000001-1,600,000,000 cd/m². However, as we’ve already established, the sun is not safe to look at so it would be best to not faithfully reproduce it on a display. Even if we restrict ourselves to the 0.000001-100,000,000 cd/m² range, the power requirements and heat produced would be astronomical. We have to attempt to represent the world using a smaller luminance range than we actually experience. In other words, we have to compress the luminance range of our world to fit the luminance range of the display. This process is often referred to as “tone-mapping”.

Dynamic Range

Dynamic range is the ratio between the smallest value and largest value. When used in reference to displays, the value is luminance.

The lowest luminance level is determined by how much ambient light is reflecting off the surface of the display (e.g. are you sitting in a dark room, outside on a sunny day, etc), how much of the backlight (if there is one) leaks through the display panel, and so on.

The highest luminance level is determined by the light source of the display. This depends on the type of display, power usage requirements, user preferences, the ambient temperature (emitting visible light also emits heat, and excessive heat can damage the display components), and perhaps other factors I am not aware of.

As both these values depend on environmental factors the dynamic range of a display can change from moment to moment, which is something we may wish to account for when compressing the luminance range of the world to fit the particular display.

So what is a high dynamic range display? In short, a display that is simultaneously capable of lower luminance levels and higher luminance levels than previous “standard dynamic range” displays. What these levels are exactly can be hand-wavy, but VESA provides some clear specifications.

This higher dynamic range ultimately means the way in which we compress the luminance range (tone-map) needs to change.

Light from Scene to Display

Now that we know the pieces of the puzzle, we can examine how an image ends up on our displays. In this section, we’ll cover the journey of an image from its creation to its display. Perhaps the most self-contained example is a modern video game. This allows us to dodge the added complication of camera sensors, but the process for real-world image capture is similar.

Scene-referred Lighting

Video games typically model a world complete with realistic lighting. When they do this, they need to work with “real world” luminance levels. For example, the sun that lights the scene may be modeled to be seen as 1.6 billion nits. Working with real world luminance that cannot be directly displayed is often called “scene-referred” luminance. We have to transform the scene-referred luminance levels to something we can display (“display-referred” luminance) by compressing or shifting the luminance values to a displayable range.

You might think it’s best to leave content in scene-referred lighting levels until the last possible moment (when the display is setting the value for each pixel), and that would indeed make some things simpler. However, it makes some things more difficult, like attempting to blend the scene-referred image with an image that is already display-referred. When doing operations like that, we need every part to be in the same frame of reference, either scene-referred or display-referred.

There’s a more difficult problem with keeping all content scene-referred, though. Representing numbers accurately in the 0.000001-1,600,000,000 cd/m² range requires a lot of bits, and we’re about to have a serious bit budget imposed on us. We have to transport the image data from the host graphics processing unit to the display. While both HDMI and DisplayPort have rather enormous bandwidth, the image resolution is also quite large. If we used a 32 bit floating point number for each red, green, and blue (RGB) component of a pixel on a display with 3840 x 2160 (4K) pixels, it would require 32 x 3 x 3840 x 2160 bits per frame, or 760 Mbits per frame. At a modest 60 frames per second, that’s 44.5 Gbits. DisplayPort 1.4 provides 25.9 Gbits and HDMI 2.0 gives us 14.4 Gbits.
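To make that arithmetic easy to play with, here is the same back-of-the-envelope calculation as a few lines of Python (the link rates are simply the figures quoted above):

# Back-of-the-envelope bandwidth estimate for uncompressed 4K video at
# 32 bits per color component (matching the numbers in the text above).
bits_per_component = 32
components = 3               # R, G, B
width, height = 3840, 2160   # 4K
fps = 60

bits_per_frame = bits_per_component * components * width * height
bits_per_second = bits_per_frame * fps

print(f"{bits_per_frame / 2**20:.0f} Mbits per frame")    # ~760 Mbits
print(f"{bits_per_second / 2**30:.1f} Gbits per second")  # ~44.5 Gbits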

Fortunately for us, as long as we ensure the way we encode luminance levels matches up with the way the human eye detects luminance levels, we can save a lot of bits. In fact, for the “standard” dynamic range, we can manage with only 8 bits (256 luminance levels) per color (24 bits total for RGB).

Tone-mapping

In order to convert scene-referred, real world luminance to a range for our target display, we need to:

  1. Have an algorithm for mapping one luminance value to another. Input values have a range [0,∞) and the output values need to cover the range of our target display.

  2. Know the capability of our target display.

For now we’ll focus on #1, although #2 is important and, unfortunately, somewhat tricky.

There are many different approaches to tone-mapping and results can be subjective. This is not a post about tone-mapping (there are many multi-hundred page publications on the subject), we’ll just go over a couple examples to get the idea.

An extremely silly, but technically valid tone-mapping algorithm would be f(x) = 1, where x is the input luminance. This maps any input luminance level to 1.

A more useful approach might be the function f(x) = xⁿ/(xⁿ + sⁿ) where x is the input luminance, n is a parameter that changes the slope of the mapping curve, and s shifts the curve on the horizontal axis. As a somewhat random example, with n = 3 and s = 10:

Example tone mapping curve that maps all input luminances between 0 and 1

This sigmoid function gently approaches the minimum and maximum values of 0 and 1. We can then map the [0, 1] range to the display luminance range.
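A quick Python sketch of that curve, using the same example parameters (n = 3, s = 10) and an entirely made-up display range of 0.5-200 nits, might look like this:

# Sketch of the example sigmoid tone-mapping curve f(x) = x^n / (x^n + s^n),
# with n = 3 and s = 10 as above. Input is scene-referred luminance in nits;
# the output in [0, 1) is then scaled onto an example display range.

def tone_map(x, n=3.0, s=10.0):
    return x**n / (x**n + s**n)

def to_display(x, min_nits=0.5, max_nits=200.0):
    return min_nits + tone_map(x) * (max_nits - min_nits)

for scene_nits in (0.1, 1, 10, 100, 1_000, 1_600_000_000):
    print(f"{scene_nits:>13} nits in scene -> {to_display(scene_nits):8.2f} nits on display")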

There are many articles and papers out there dedicated to tone mapping. There’s a chapter on the topic in “The high dynamic range imaging pipeline” by Gabriel Eilertsen, for example, which includes a number of sample images and a much more thorough examination of the process than this blog post.

This process is occasionally referred to as an “opto-optical transfer function” or “OOTF” as it maps optical (luminance) values to different optical values. Note that it is also possible to tone-map luminance values that are encoded for storage or transportation, so-called “electronic” values, which you might see referred to as electro-electronic transfer functions or “EETF”.

Encoding

Once we’ve tone-mapped our content to use the display’s luminance range, we need to send the content off to the display so that we can see it. As the section on scene-referred light mentioned, we have a limited amount of bandwidth, and any increase or decrease in bits used per pixel impacts how many bits are required for a frame, and thus how many frames we can send to the display every second. The key is to use as few bits as we can get away with.

The method used to encode luminance levels is called the “transfer function”, “opto-electronic transfer function”, or “OETF”. These are functions that approximate the human’s sensitivity to luminance levels.

Since the display needs to decode the encoded data we send it back to linear luminance values, we need functions that can be easily “undone” with an inverse function. The property we’re looking for in our encoding and decoding functions is g(f(x)) = x. This is called the “inverse transfer function”, “electro-optical transfer function”, or “EOTF” since it converts the “on-the-wire” values back to optical values.

Note that the terminology and definitions of these transfer functions can be confusing and vague. Some people appear to use OETF to mean both a tone-mapping operation and a transfer to electronic values.

Encoding SDR

The “standard dynamic range” is not particularly standard, but usually tops out around 300 nits. Typically, 8 bits per RGB component is used. 8 bits allow us to express 256 luminance levels per component.

The Naïve Approach

One method of encoding would be to evenly spread each level across the luminance range. What would this look like? In our thought experiment, we’ll assume the monitor’s minimum luminance is 0.5 nits, and its maximum luminance is 200 nits. This gives us a range of 199.5 nits. Equally distributed, each step increases the luminance by about 0.78 nits.

Sending [0, 0, 0] for red, green, and blue results in the minimum luminance of 0.5 nits. Adding a step [1, 1, 1] would result in 1.28 nits.

Sending [254, 254, 254] results in 198.44 nits, and sending [255, 255, 255] results in 200 nits.

Graph of linear display-referred luminance mapped to code points

The problem with this approach is that at lower luminance values, each step is above what the human eye can differentiate, its just-noticeable difference. At higher luminance levels, each step is so far below the just-noticeable difference that it would require many steps for a human to notice any difference at all. It results in clearly visible bands in the lower luminance areas of the image we display and completely undetectable detail in the higher luminance areas of the image. We could resolve this by adding more bits (and therefore more levels), but we can’t afford to do that without giving up resolution and framerate.

Instead, we want to take these wasted levels at high luminances and use them in lower luminances.

Gamma

The “gamma” transfer function has a long and interesting history and is part of the sRGB standard. The sRGB version is actually a piecewise function which uses the gamma function for all but the very lowest luminance levels, but we can ignore that complication here.

The basic function is in the form f(x) = Axᵞ. In sRGB, A = 1.055 and 𝛾 = 1/2.4. The inverse function used to decode the luminance is A = 1/1.055 and 𝛾 = 2.4
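Ignoring that piecewise segment, a simplified sketch of the encode/decode round trip looks like the following (the clamp is needed because this simplified form, unlike real sRGB, slightly overshoots 1.0):

# Simplified sRGB-style gamma transfer function, ignoring the piecewise
# linear segment near zero that the real sRGB standard uses.
A = 1.055
GAMMA = 1 / 2.4

def encode(linear):
    # Linear display-referred luminance in [0, 1] -> encoded value.
    return A * linear ** GAMMA

def decode(encoded):
    # Encoded value -> linear luminance, so decode(encode(x)) ~= x.
    return (encoded / A) ** 2.4

x = 0.18                       # a mid-grey-ish linear luminance
e = encode(x)
print(e, decode(e))            # round-trips back to ~0.18

# Quantize the encoded value to an 8-bit code point, clamping the overshoot.
print(round(min(e, 1.0) * 255))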

The encoding curve looks like this:

Graph of the gamma function with gamma = 1/2.4

In this graph, we map the linear, display-referred luminance values into the range [0, 1]. These input luminances are mapped to an encoded value also in the range [0, 1]. We can see from the graph above that the lower third of the linear luminance levels get mapped on the lower two thirds of our encoding range.

We can convert the output to integer values between 0 and 255 by multiplying the encoded luminance value by 255 and rounding:

Graph of the gamma function with gamma = 1/2.4 mapped to 0-255

Now, instead of wasting many of our precious bits on luminance steps on the high end that no one can see, we’re spending most of them on low luminance levels.

This is a pretty good approach when we have 256 steps (8 bits) per component and a luminance range of a few hundred nits, but the further we stretch the range of nits, the bigger our steps have to be, and eventually we’ll cross a point where some steps are above the just-noticeable difference line.

We can fix this by increasing the number of bits used. What happens if we move to 10 or even 12 bits? At 12 bits per component, we have 4096 steps.

Graph of the gamma function with gamma = 1/2.4 mapped to 0-4096

At first glance this seems fine, but what if we adjust the graph to display actual nit values for the linear light? Here’s the same graph with linear light using a range of 0-1000 nits.

Graph of the gamma function with gamma = 1/2.4 mapped to 0-4096

We can see that fewer than half our steps, or approximately 2000, land in the luminance ranges we use today for standard dynamic range displays. Once again, we’re allocating too many steps to the very high luminance levels where the human eye becomes less and less sensitive. This is even worse as the luminance range increases:

Graph of the gamma function with gamma = 1/2.4 mapped to 0-4096

Thus, for displays with a higher dynamic range, we may want to adjust the transfer function to get the most value out of each bit we spend.

Encoding Higher Dynamic Ranges

As we saw when examining the gamma transfer function, it works reasonably well for low luminance ranges, but as the range increases it starts to be sub-optimal.

There are two major approaches for higher dynamic range encoding. Both have advantages and disadvantages, and displays can support either or both. If you’re curious if your displays are capable of using either of these approaches, you can check by installing edid-decode and running:

find /sys/devices -name edid -exec edid-decode {} \;

Look for a “HDR Static Metadata Data Block” and see what is included in the electro-optical transfer function list. This section will likely not be present if your display doesn’t support HDR.

The Perceptual Quantizer

The Perceptual Quantizer (PQ) transfer function, sometimes referred to by the much less cool name “SMPTE ST 2084”, defines a new curve to use for encoding. It is unlike the gamma transfer function in that it is defined to cover a well-defined luminance range, rather than just being fit to whatever the display happens to be capable of. The luminance range the curve is defined to cover is 0.0001-10,000 cd/m².

This is a useful concept when encoding the content as we can now express physical quantities in addition to “more/less luminance”. It gives the decoding side a frame of reference if it needs to further tone-map the content.

The curve looks like:

Part of the Perceptual Quantizer curve

This graph is only part of the curve, and you can see the slope is so extreme it doesn’t render very well. It packs the vast majority of the code points into the lower luminance range where the eye is most discerning. Currently, it’s common to use this curve with 10 bits per component.

The function that describes this curve is more complicated than the gamma curve so I won’t attempt to render it poorly here. The curious can consult the Wikipedia page linked above.

The perceptual quantizer transfer function can be paired with metadata describing the image or images it is used to encode. This metadata is usually the primary colors of the display used to create the content as well as luminance statistics like the image’s minimum, maximum, and average luminance. This metadata allows consumers of the content to perform better tone-mapping and gamut-mapping since it describes the exact color volume of the content.

The PQ curve supports encoding luminance up to 10,000 nits, but there are no consumer displays capable of that. Even professional displays built for movie-making are well short of that, currently around 4,000 nits. This is still much higher than current consumer displays, so the metadata can be used to tone-map and gamut-map the content from the capabilities of the fancy display the film studio used to the capabilities of the individual display. In the future, when consumer monitors become more capable, the same films will look better as they require less tone-mapping and the original intent can be more accurately rendered.

The Hybrid Log-Gamma

As the name suggests, the second approach for higher dynamic range encoding is to use a hybrid of the gamma function and a log function. The hybrid log-gamma (HLG) curve is designed with backwards-compatibility in mind.

HLG uses a piecewise function defined as follows when the input luminance has been normalized to a range of 0-1:

  • When the input luminance L is between 0 and 1/12: sqrt(3) * L^0.5 (this is our old friend the gamma function f(x) = Axᵞ, with A = 1.732 and 𝛾 = 1/2)

  • When input luminance L is between 1/12 and 1: a * ln(12L - b) + c where a = 0.17883277, b = 1 - 4a, and c = 0.55991073.

The Gamma and Log curves in the HLG transfer function

In the above graph, the red curve is the gamma curve, the green curve is the log curve, and the vertical line is the point where the gamma curve stops being used in favor of the log curve when encoding.
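Written out in code, the piecewise definition above looks roughly like this (a small Python sketch built directly from the constants quoted above; the function name is mine, and the input is scene luminance already normalized to 0-1):

import math

def hlg_encode(l):
    """HLG OETF sketch: normalized scene luminance (0-1) to a 0-1 signal value."""
    a = 0.17883277
    b = 1 - 4 * a
    c = 0.55991073
    if l <= 1 / 12:
        return math.sqrt(3 * l)            # gamma branch: sqrt(3) * L^0.5
    return a * math.log(12 * l - b) + c    # log branch

print(hlg_encode(1 / 12))   # 0.5 -- the point where the two branches meet, the vertical line in the graph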

As this encoding scheme is defined in terms of relative luminance like the gamma transfer function, there is no metadata for tone-mapping.

Transmission and Display

Once we’ve encoded the image with a transfer function the display supports, the bits are sent to the display via HDMI or DisplayPort. At this point, what happens next is up to whatever is on the other end of the cable. I have not pulled apart a display and learned what secrets it holds, but we can make some reasonable guesses without destroying anything:

  1. If metadata is provided (PQ-encoded images only), the display examines it and determines if any tone-mapping is required due to the content exceeding its capabilities. If it’s not provided, it likely assumes the content spans the entire range defined by PQ.

  2. If the display includes light sensors to detect ambient light levels, it might decide to tone-map the content, even if it’s capable of displaying all luminance levels encoded.

  3. Displays tend to include configuration to alter the color and brightness. The display will take these settings into account when deciding if/how to tone-map or gamut-map the content we gave it.

  4. Some gamut-mapping and tone-mapping will occur in the display depending on all the above variables, at which point it will emit some light.

Now, that’s a very high-level overview of the process, but we can map these high-level steps to portions of the Linux desktop and discuss what needs to change in order for us to support HDR.

Summary

Now that we understand (in theory) HDR, it’s worth asking what all this is good for.

If, as part of your movie-watching experience, you want an eye-wateringly bright sunset where you can still make out the blades of grass in the shadow of a rock, this is an important feature. The more cinematic video games could look even better, too. These are the most obvious applications, and where many users will encounter HDR, but it’s not necessarily limited to entertainment. HDR is all about transmitting and displaying more information, so any process that involves visualization could benefit from more luminance levels.

As we’ll see in the next post, however, there is a good bit of work left to be done before you can, for example, enjoy an HDR film in GNOME’s Videos application.

tar::Builder isn’t Send

Posted by Colin Walters on May 07, 2021 03:22 PM

I recently made a new project in Rust that is generating multiple bootable operating system disk image types from a "pristine" image with the goal of deduplicating storage.

At one point I decided to speed it up using rayon. Each thread here is basically taking a pristine base (read-only), doing some nontrivial computation and writing a new version derived from it. The code is using .par_iter().try_for_each(); here the rayon crate handles spinning up worker threads, etc.

That all worked fine.

Then later, due to some other constraints I realized it was better to support writing to stdout in addition. (This code needs to run in a container, and it’s easier to podman run --rm -i myimage --output stdout > newfile.iso instead of dealing with bind mounts.)

I came up with this:

enum OutputTarget<W: std::io::Write> {
    Stdout(W),
    Tar(tar::Builder<W>),
}

Basically if you’re only asking for one file, we output it directly. If you ask for multiple, we wrap them in a tarball.

But, it didn’t compile – there was an error message about tar::Builder not having the Send trait that pointed at the closure being passed to the rayon try_for_each(). I’ve been using Rust long enough that I understand Send and immediately realized the problem: multiple worker threads trying to concurrently write to the same tar stream just can’t work. (And the same is true for stdout, but the compiler can’t know for sure there’s only one thread in that case.)

But, I still wanted the parallelism from doing the actual file generation. Some refactoring to more cleanly split up "generate files" from "output files" would have been cleanest, and probably not hard.

But this project was still in the fast iteration/prototyping phase so I decided to just wrap the OutputTarget enum to be an Arc<Mutex<OutputTarget>> – and that compiled and worked fine. The worker threads still parallelize generation, then serialize output.

Other languages don’t do this

This project is one of those that honestly could have easily started in bash or Python too. Or Go. But those languages don’t have built-in concurrency protections.

Out of curiosity I just wrote a quick Python program to write to a tarfile from multiple threads. As expected, it silently generated a corrupted tarball with intermixed content. (At this point hopefully everyone knows basically to avoid threads in Python since they’re mostly useless due to the GIL, of course)
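A minimal sketch of such a program is below (the file name and member sizes are arbitrary); several threads append to one TarFile with no locking, and the interleaved writes are what corrupt the archive:

import io
import tarfile
import threading

tar = tarfile.open("/tmp/racy.tar", "w")

def add_member(i):
    data = b"x" * (256 * 1024)
    info = tarfile.TarInfo(name=f"member-{i}")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))   # unsynchronized writes from multiple threads

threads = [threading.Thread(target=add_member, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
tar.close()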

And also as expected, a lightly modified example of the code from the Go archive/tar example compiles fine, and generates corrupted output. Now this is a well known problem in Go given its heavy focus on concurrent goroutines, and to be fair go run -race does correctly find errors here. But there’s a bunch of tradeoffs involved there; the race detector is only probabilistic, you have to remember to use it in your CI tests, etc.

I’m really not saying anything here that hasn’t been said before of course. But this was my experience this week. And it’s experiences like this that remind me why I sunk so much time into learning Rust and using it for new projects.

Rocky Linux 8.3 RC1 has been released

Posted by Fedora fans on May 07, 2021 11:58 AM
rocky-linux

The Rocky Enterprise Software Foundation has announced the release of its first release candidate, Rocky Linux 8.3 RC1.

Rocky Linux is led by Gregory Kurtzer, one of the founders of the CentOS distribution. It is based on the Red Hat Enterprise Linux source code, and its first release candidate is now available for download for the x86_64 and AArch64 architectures.

For more information about Rocky Linux 8.3 RC1, read its release announcement:

https://rockylinux.org/news/rocky-linux-8-3-rc1-release/

To download Rocky Linux 8.3 RC1, you can use the link below:

https://rockylinux.org/download/

 

The post Rocky Linux 8.3 RC1 has been released first appeared on Fedora Fans.

Contribute to Fedora Kernel 5.12 Test Week

Posted by Fedora Magazine on May 07, 2021 08:00 AM

The kernel team is working on final integration for kernel 5.12. This version was recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Sunday, May 09, 2021 through Sunday, May 16, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document which provides all the steps written out.

Happy testing, and we hope to see you on test day.

Soft unbricking Bay- and Cherry-Trail tablets with broken BIOS settings

Posted by Hans de Goede on May 06, 2021 07:50 PM
As you may know I've been doing a lot of hw-enablement work on Bay- and Cherry-Trail tablets as a side-project for the last couple of years.

Some of these tablets have one interesting feature intended to "flash" Android on them. When turned on with both the volume-up and the volume-down buttons pressed at the same time, they enter something called DNX mode, which is also printed on the LCD panel; this is really just a variant of the Android fastboot protocol built into the BIOS. Quite a few models support this, although on Bay Trail it sometimes seems to be supported (it gets shown on the screen) but does not work, since many models which only shipped with Windows lack the external device/gadget-mode phy which the Bay Trail SoC needs to be able to work in device/gadget mode (on Cherry Trail the gadget phy has been integrated into the SoC).

So on to the topic of this blog-post: I recently used DNX mode to unbrick a tablet which was dead because its BIOS settings got corrupted in a way where it would not boot and it was also impossible to enter the BIOS setup. After some duckduckgo-ing I found a thread about how in DNX mode you can upload a replacement for the efilinux.efi bootloader normally used for "fastboot boot" and how you can use this to upload a binary to flash the BIOS. I did not have a BIOS image of this tablet, so that approach did not work for me. But it did point me in the direction of a different, safer (no BIOS flashing involved) solution to unbrick the tablet.

If you run the following 2 commands on a PC with a Bay- or Cherry-Trail connected in DNX mode:

fastboot flash osloader some-efi-binary.efi
fastboot boot some-android-boot.img

Then the tablet will execute the some-efi-binary.efi. At first I tried getting an EFI shell this way, but this failed because the EFI binary gets passed some arguments about where in RAM it can find the some-android-boot.img. Then I tried booting a grubx64.efi file and that resulted in a grub commandline. But I had no way to interact with it, and replacing the USB connection to the PC with an OTG / USB-host cable with a keyboard attached to it did not result in working input.

So my next step was to build a new grubx64.efi with "-c grub.cfg" added to the commandline for the final grub2-mkimage step, embedding a grub.cfg with a single line in there: "fwsetup". This will cause the tablet to reboot into its BIOS setup menu. Note on some tablets you still will not have keyboard input if you just let the tablet sit there while it is rebooting. But during the reboot there is enough time to swap the USB cable for an OTG adapter with a keyboard attached before the reboot completes and then you will have working keyboard input. At this point you can select "load setup defaults" and then "save and exit" and voila the tablet works again.

For your convenience I've uploaded a grubia32.efi and a grubx64.efi with the necessary "fwsetup" grub.cfg here. This is built from this branch at this commit (this was just a random branch which I had checked out while working on this).

Note the image used for the "fastboot boot some-android-boot.img" command does not matter much, but it must be a valid android boot.img format file otherwise fastboot will refuse to try to boot it.

Testing all the pixels

Posted by Cockpit Project on May 06, 2021 10:00 AM

The Cockpit integration tests can now contain “pixel tests”. Such a test will take a screenshot with the browser and compare it with a reference. The idea is that we can catch visual regressions much more easily this way than by hunting for them in a purely manual fashion.

Preparing a repository for pixel tests

A pixel test will take a screenshot of part of the Cockpit UI and compare it with a reference. Thus, these reference images are important and play the biggest role.

A large part of dealing with pixel tests will consequently consist of maintaining the reference images. At the same time, we don’t want to clog up our main source repository with them. While the number and size of the reference images at any one point in time should not pose a problem, we will over time accumulate a history of them that we are afraid would dominate the source repository.

Thus, the reference images are not stored in the source repository. Instead, we store them in an external repository that is linked into the source repository as a submodule. That external repository doesn’t keep any history and can be aggressively pruned.

Developers are mostly isolated from this via the new test/common/pixel-tests tool. But if you are familiar with git submodules, there should be no surprises for you here.

A source repository needs to be prepared before it can store reference images in an external storage repository. You can let the tool do it by running

$ ./test/common/pixel-tests init

and then committing the changes it has done to the source repository. (Those changes will be a new or modified .gitmodules file, and a new gitlink at test/reference.)

Adding a pixel test

To add a pixel test to a test program, call the new “assert_pixels” function of the Browser class:

    def testSomeDialog(self):
        ...
        self.browser.assert_pixels('#dialog button.apply', "button")
        ...

The first argument is a CSS selector that identifies the part of the UI that you want to compare. The screenshot will only include that element. The second argument is an arbitrary but unique key for this test point. It is used to name the files that go with it.

For each such call, there needs to be a reference image in the test/reference/ directory.

As mentioned above, the test/reference/ directory is very special indeed, and needs to be carefully managed with the test/common/pixel-tests tool (or even more carefully with git submodule et al).

First, make sure test/reference is up-to-date:

$ ./test/common/pixel-tests pull

Then you can add new reference images to it.

The easiest way to get a new reference image is to just run the test once (locally or in the CI machinery). It will fail, but produce a reference image for the current state of the UI.

When you run the test locally, the new reference image will appear in the current directory. Just move it into test/reference/. The next run of the test should then be green.

When you run the tests in the CI machinery, you need to download the new reference images from the test results directory. They show up as regular screenshots.

If there are parts of the reference image that you want to ignore, you can pass a suitable “ignore” argument to assert_pixels. It contains a list of CSS selectors, and any pixel that is within their bounding rectangles is ignored. For example, this

    def testSomeDialog(self):
        ...
        self.browser.assert_pixels('#dialog', "dialog", ignore=["#memory-available"])
        ...

would compare a full dialog to a reference, but would exclude the DOM element that shows a number that is dependent on the environment that the test runs in.

You can also change the transparency (alpha channel) of parts of the reference image itself (with the GIMP, say). Any pixel that is not fully opaque will be ignored during comparison.

When you are done adding images to test/reference, push them into the storage repository like so:

$ ./test/common/pixel-tests push

This push command will record a change to test/reference in the main repository, and you need to commit this change. This is how the main repository specifies which reference images to use exactly for each of its commits.

If you want to see what changes would be pushed without actually pushing them, you can run this:

$ ./test/common/pixel-tests status

Here is a PR that adds two pixel test points to the starter-kit, complete with reference images:

starter-kit#436

Debugging a failed pixel test

When making changes that change how the UI looks, some pixel tests will fail. The test results will contain the new pixels, and you can compare them with the reference image right in the browser when looking at the test logs.

Here is a PR that makes the two pixel tests fail that had been added in #436:

starter-kit#435

As you can see, it has “changed pixels” links in the same place as the well known “screenshot” links. Clicking on it gets you to a page where you can directly compare the previous and current UI.

As the author of the pull request, you can decide from there whether these changes are intended or not.

For an intended change, see the next section. For unintended changes, you need to fix your code and try again, of course.

Updating a pixel test

If you make a change that intentionally changes how Cockpit looks, you need to install new reference images in test/reference/.

This is very similar to adding a new pixel test point. Take the TestFoo-testBasic-pixels.png that was written by the failed test run, move it into test/reference, push it to the storage repository with ./test/common/pixel-tests push, and commit test/reference in the main repository.

A local test run has dropped the new reference image into the current directory, for a remote run it will be in the test results directory and the pixel comparison view has a link to it.

When a test writes a new TestFoo-testBasic-pixels.png file for a failed test, it will have the alpha channel of the old reference copied into it. That makes it easy to keep ignoring the same parts.

Reviewing a changed pixel test

Here is a second version of the starter-kit pull request from the previous section:

starter-kit#438

It has the same code changes, but now the reference images have been updated as well, since the change in color was of course intended.

Now this PR needs to be reviewed, and the changed visuals need to be approved. But since the reference images are not stored in the main repository, GitHub will not include them in the PR diff view.

Instead, the robots will automatically add a comment to a pull request with a link to a page that allows reviewing the changed reference images.

Contribute to Fedora Kernel 5.12 Test Week

Posted by Fedora Community Blog on May 06, 2021 08:00 AM
Fedora Linux 35 Kernel 5.12

The kernel team is working on final integration for kernel 5.12. This version was recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Sunday, May 09, 2021 through Sunday, May 16, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document which provides all the steps written out.

Happy testing, and we hope to see you on test day.

The post Contribute to Fedora Kernel 5.12 Test Week appeared first on Fedora Community Blog.

More doorbell adventures

Posted by Matthew Garrett on May 06, 2021 06:26 AM
Back in my last post on this topic, I'd got shell on my doorbell but hadn't figured out why the HTTP callbacks weren't always firing. I still haven't, but I have learned some more things.

Doorbird sell a chime, a network connected device that is signalled by the doorbell when someone pushes a button. It costs about $150, which seems excessive, but would solve my problem (ie, that if someone pushes the doorbell and I'm not paying attention to my phone, I miss it entirely). But given a shell on the doorbell, how hard could it be to figure out how to mimic the behaviour of one?

Configuration for the doorbell is all stored under /mnt/flash, and there's a bunch of files prefixed 1000eyes that contain config (1000eyes is the German company that seems to be behind Doorbird). One of these was called 1000eyes.peripherals, which seemed like a good starting point. The initial contents were {"Peripherals":[]}, so it seemed likely that it was intended to be JSON. Unfortunately, since I had no access to any of the peripherals, I had no idea what the format was. I threw the main application into Ghidra and found a function that had debug statements referencing "initPeripherals" and that read a bunch of JSON keys out of the file, so I could simply look at the keys it referenced and write out a file based on that. I did so, and it didn't work - the app stubbornly refused to believe that there were any defined peripherals. The check that was failing was pcVar4 = strstr(local_50[0],PTR_s_"type":"_0007c980);, which made no sense, since I very definitely had a type key in there. And then I read it more closely. strstr() wasn't being asked to look for "type":, it was being asked to look for "type":". I'd left a space between the : and the opening " in the value, which meant it wasn't matching. The rest of the function seems to call an actual JSON parser, so I have no idea why it doesn't just use that for this part as well, but deleting the space and restarting the service meant it now believed I had a peripheral attached.

The mobile app that's used for configuring the doorbell now showed a device in the peripherals tab, but it had a weird corrupted name. Tapping it resulted in an error telling me that the device was unavailable, and on the doorbell itself generated a log message showing it was trying to reach a device with the hostname bha-04f0212c5cca and (unsurprisingly) failing. The hostname was being generated from the MAC address field in the peripherals file and was presumably supposed to be resolved using mDNS, but for now I just threw a static entry in /etc/hosts pointing at my Home Assistant device. That was enough to show that when I opened the app the doorbell was trying to call a CGI script called peripherals.cgi on my fake chime. When that failed, it called out to the cloud API to ask it to ask the chime[1] instead. Since the cloud was completely unaware of my fake device, this didn't work either. I hacked together a simple server using Python's HTTPServer and was able to return data (another block of JSON). This got me to the point where the app would now let me get to the chime config, but would then immediately exit. adb logcat showed a traceback in the app caused by a failed assertion due to a missing key in the JSON, so I ran the app through jadx, found the assertion and from there figured out what keys I needed. Once that was done, the app opened the config page just fine.

Unfortunately, though, I couldn't edit the config. Whenever I hit "save" the app would tell me that the peripheral wasn't responding. This was strange, since the doorbell wasn't even trying to hit my fake chime. It turned out that the app was making a CGI call to the doorbell, and the thread handling that call was segfaulting just after reading the peripheral config file. This suggested that the format of my JSON was probably wrong and that the doorbell was not handling that gracefully, but trying to figure out what the format should actually be didn't seem easy and none of my attempts improved things.

So, new approach. Rather than writing the config myself, why not let the doorbell do it? I should be able to use the genuine pairing process if I could mimic the chime sufficiently well. Hitting the "add" button in the app asked me for the username and password for the chime, so I typed in something random in the expected format (six characters followed by four zeroes) and a sufficiently long password and hit ok. A few seconds later it told me it couldn't find the device, which wasn't unexpected. What was a little more unexpected was that the log on the doorbell was showing it trying to hit another bha-prefixed hostname (and, obviously, failing). The hostname contains the MAC address, but I hadn't told the doorbell the MAC address of the chime, just its username. Some more digging showed that the doorbell was calling out to the cloud API, giving it the 6 character prefix from the username and getting a MAC address back. Doing the same myself revealed that there was a straightforward mapping from the prefix to the mac address - changing the final character from "a" to "b" incremented the MAC by one. It's actually just a base 26 encoding of the MAC, with aaaaaa translating to 00408C000000.
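In code, that mapping looks something like the sketch below. The function name is made up, and the fixed 00:40:8C base is inferred from the aaaaaa example, so treat it as reverse-engineering notes rather than a documented API:

def prefix_to_mac(prefix):
    """Treat the six-character username prefix as base 26 ('a' = 0) added to 00:40:8C:00:00:00."""
    offset = 0
    for ch in prefix.lower():
        offset = offset * 26 + (ord(ch) - ord("a"))
    return "{:012x}".format(0x00408C000000 + offset)

print(prefix_to_mac("aaaaaa"))  # 00408c000000, matching the observation above
print(prefix_to_mac("aaaaab"))  # one higher: changing the final letter increments the MAC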

That explained how the hostname was being generated, and in return I was able to work backwards to figure out which username I should use to generate the hostname I was already using. Attempting to add it now resulted in the doorbell making another CGI call to my fake chime in order to query its feature set, and by mocking that up as well I was able to send back a file containing X-Intercom-Type, X-Intercom-TypeId and X-Intercom-Class fields that made the doorbell happy. I now had a valid JSON file, which cleared up a couple of mysteries. The corrupt name was because the name field isn't supposed to be ASCII - it's base64 encoded UTF16-BE. And the reason I hadn't been able to figure out the JSON format correctly was because it looked something like this:

{"Peripherals":[]{"prefix":{"type":"DoorChime","name":"AEQAbwBvAHIAYwBoAGkAbQBlACAAVABlAHMAdA==","mac":"04f0212c5cca","user":"username","password":"password"}}]}


Note that there's a total of one [ in this file, but two ]s? Awesome. Anyway, I could now modify the config in the app and hit save, and the doorbell would then call out to my fake chime to push config to it. Weirdly, the association between the chime and a specific button on the doorbell is only stored on the chime, not on the doorbell. Further, hitting the doorbell didn't result in any more HTTP traffic to my fake chime. However, it did result in some broadcast UDP traffic being generated. Searching for the port number led me to the Doorbird LAN API and a complete description of the format and encryption mechanism in use. Argon2I is used to turn the first five characters of the chime's password (which is also stored on the doorbell itself) into a 256-bit key, and this is used with ChaCha20 to decrypt the payload. The payload then contains a six character field describing the device sending the event, and then another field describing the event itself. Some more scrappy Python and I could pick up these packets and decrypt them, which showed that they were being sent whenever any event occurred on the doorbell. This explained why there was no storage of the button/chime association on the doorbell itself - the doorbell sends packets for all events, and the chime is responsible for deciding whether to act on them or not.

On closer examination, it turns out that these packets aren't just sent if there's a configured chime. One is sent for each configured user, avoiding the need for a cloud round trip if your phone is on the same network as the doorbell at the time. There was literally no need for me to mimic the chime at all, suitable events were already being sent.

Still. There's a fair amount of WTFery here, ranging from the strstr() based JSON parsing, the invalid JSON, the symmetric encryption that uses device passwords as the key (requiring the doorbell to be aware of the chime's password) and the use of only the first five characters of the password as input to the KDF. It doesn't give me a great deal of confidence in the rest of the device's security, so I'm going to keep playing.

[1] This seems to be to handle the case where the chime isn't on the same network as the doorbell


Community Platform Engineering is hiring

Posted by Fedora Community Blog on May 05, 2021 01:48 PM

The Community Platform Engineering (CPE) group is the Red Hat team combining IT and release engineering from Fedora and CentOS. Our goal is to keep core servers and services running and maintained, build releases, and other strategic tasks that need more dedicated time than volunteers can give. See our docs for more information.

We are hiring new talent to come work full time on Fedora and CentOS. The following positions are now open:

  • LATAM-based Associate / Engineer level, perfect for someone new to the industry or someone early in their career. We are looking for an infrastructure focused hire.
  • Associate / Engineer level in India. This is more a software engineering / DevOps focused role, perfect for graduates or people early in their career.
  • EMEA-based Associate Manager. We are looking for a people-focused Manager to join our team to help with the people management workload. This is perfect for aspiring managers looking to move into their first management role or for anybody early in their people management career.

Please note that due to a constraint in how the jobs system works, a single country is nominated for the advertisement. Kindly ignore that; two of the roles are available in the geographical regions outlined above.

A source of the hiring is the internal mobility of three of our CPE members. Pierre-Yves Chibon (pingou), Stephen Smoogen (smooge), and Leonardo Rossetti have moved on to another project within Red Hat. While pingou and Leonardo are still within CPE, their efforts will not be geared towards Fedora going forward. We still hope to see them in and around the Fedora community, but it will not be in a full time capacity. For transparency, we wish to share this with the community.

We are looking forward to meeting you and hopefully working with you soon!

The post Community Platform Engineering is hiring appeared first on Fedora Community Blog.

Introducing the Fedora i3 Spin

Posted by Fedora Magazine on May 05, 2021 08:00 AM

Fedora 34 features the brand new i3 Spin created by the Fedora i3 S.I.G. This new spin features the popular i3wm tiling window manager. This will appeal to both novices and advanced users who prefer not to use a mouse, touchpad, or other pointing device to interact with their environment. The Fedora i3 spin offers a complete experience with a minimalistic user interface and a lightweight environment. It is intended for the power user, as well as others.

The i3 Experience

The i3 window manager is designed and developed with power users and developers in mind. The use of keyboard shortcuts, however, will appeal to novices and advanced users who prefer not to use a mouse, touchpad, or other pointing device to interact with their environment. Our intention is to bring this experience to all users, giving them a lightweight environment, usable and extendable, where people can just work.

Design Goals

The Fedora i3 S.I.G.’s work is based on the design goals for the project. These goals determine what we decide to include and how we tune or customize the contents of the Fedora i3 Spin.

The following is a list of the packages included. Others may be added from the Fedora Linux repository as required. Keep in mind that this is a minimalist spin.

Thunar

Thunar file manager is a modern file manager for the Xfce Desktop Environment. It is designed from the ground up to be fast, easy-to-use, lightweight, and full featured.

Mousepad

Mousepad text editor aims to be an easy-to-use and fast editor. This is not a complete development environment; however, it is powerful enough to read code and highlight syntax.

Azote

Azote is a GTK+3-based picture browser and background setter. The user interface is designed with multi-display setups in mind. Azote includes several color management tools.

network-manager-applet

nm-applet is a small GTK+3-based front-end for NetworkManager. It allows you to control, configure and use your network. It will cover everything from wired to wireless connections, including VPN management.

Firefox

Firefox is the default web browser chosen by the Fedora Project to be included in the different projects we ship. While not the lightest weight browser, it is the standard for Fedora Linux.

Get Fedora i3 Spin

Fedora 34 with i3wm is available for download here. For support, you can visit the #fedora-i3 channel on Freenode IRC or use the Users Mailing List.

Booting helios4 or ClearFog from SPI

Posted by Dennis Gilmore on May 04, 2021 10:44 PM

Helios4 is a NAS device made by Kobol Innovations. It is an mvebu device based on the same SoM (System on Module) from SolidRun as is used in their ClearFog devices. While some of the early ClearFog devices do not have SPI flash, all recent ones and all Helios4 devices have an onboard SPI flash that can be used to boot from.

Recently Fedora added SPI and UART u-boot images for the Kobol helios4 and SolidRun ClearFog. Getting your device to boot from SPI is fairly straightforward. There are two things that you need to do, put u-boot on the SPI flash and set the jumpers so the system will boot from SPI.

There are two ways to initially install u-boot onto the SPI flash, the first is to boot from a sdcard, the second is to boot from uart.

The simplest way to get started is to use a sdcard; to date, the ones I have used had a vfat partition as the first partition, though ext4 should also work. On any Fedora system, install arm-image-installer and uboot-images-armv7, as these provide the necessary bits

$ sudo dnf install arm-image-installer uboot-images-armv7

With the packages installed we then need to set up the card so that we can put u-boot into the SPI flash. U-Boot in Fedora puts all of the images into /usr/share/uboot/. For the mvebu devices that have SPI flash there is a directory that contains the sdcard version of u-boot, plus one ending in -spi and another ending in -uart containing images for those boot types.

$ sudo update-uboot --target helios4 --media=/dev/XXX
$ sudo cp /usr/share/uboot/helios4-spi/u-boot-spl.kwb /mnt/point/of/sdcard/
$ sudo eject /dev/XXX

Then insert the sdcard into your ClearFog or Helios4, connect to the serial console, and power on. When u-boot shows its output it is simplest to interrupt the boot process and proceed with flashing u-boot to SPI. We use a tool in u-boot called bubt to do the hard work for us. bubt takes the file name as the first argument, the destination device type as the second, and the source device type as the third. The source device has to be one of mmc, usb, or tftp, which means that a ClearFog or Helios4 running solely from HDD will need the network or another device for upgrades.

=> help bubt
bubt - Burn a u-boot image to flash

Usage:
bubt [file-name] [destination [source]]
        -file-name     The image file name to burn. Default = flash-image.bin
        -destination   Flash to burn to [spi, nand, mmc]. Default = spi
        -source        The source to load image from [tftp, usb, mmc]. Default = tftp
Examples:
        bubt - Burn flash-image.bin from tftp to active boot device
        bubt flash-image-new.bin nand - Burn flash-image-new.bin from tftp to NAND flash
        bubt backup-flash-image.bin mmc usb - Burn backup-flash-image.bin from usb to MMC

=> bubt u-boot-spl.kwb spi mmc
Burning U-Boot image "u-boot-spl.kwb" from "mmc" to "spi"
Image checksum…OK!
SF: Detected w25q32 with page size 256 Bytes, erase size 4 KiB, total 4 MiB
Erasing 561152 bytes (137 blocks) at offset 0 …Done!
Writing 561072 bytes from 0x800000 to offset 0 …Done!

Once we have installed u-boot onto the SPI flash there is one more step: change the settings to tell the SoC where to boot from. https://developer.solid-run.com/knowledge-base/clearfog-boot-select/ lists the settings for the ClearFog devices and https://wiki.kobol.io/helios4/spi/#under-generic-linux lists the settings for the Helios4

After setting the dip switches correctly to boot from SPI you can go and install using one of the many supported options: PXE, sdcard, or USB. Without any working u-boot the SoC will fall back to booting from uart. If you want to go down this path you can use kwboot from the uboot-tools package; it is also a good recovery method.

$ sudo yum install uboot-tools
$ sudo /usr/bin/kwboot -t -b /usr/share/uboot/helios4-uart/u-boot-spl.kwb -B 115200 /dev/ttyUSBX

You will then get access to a mini terminal to run u-boot commands to flash to SPI; note that you will need a supported source for bubt to flash from. To update you will need to be attached to the serial console and follow the same process. As noted earlier, if you are using a m.2 SSD or a sata drive you will need to put the u-boot binary on a supported media, either a sdcard or a USB stick, interrupting the boot process and writing the new image. If you boot from a sdcard you can, on the running system, run a simple command to put u-boot in place and reboot

sudo cp /usr/share/uboot/helios4-spi/u-boot-spl.kwb /boot/efi/

All the examples are using the helios4, but the process works for all currently supported devices: ClearFog, helios4, and turris_omnia; for the ClearFog both the Base and Pro versions work. Please note that some early versions of the ClearFog did not have a SPI flash on the SOM, and if you have one of those you will get an error trying to initialise the SPI flash as it does not exist; in that case, you have to boot from a sdcard.

Pipewire low latency

Posted by Adam Young on May 04, 2021 10:41 PM

Just wanted to leave myself a note here. In QjackCtl, the latency is shown in the bottom right of the Parameters page. If I drop the Frames/Period to 16 (lowest) the latency drops to 1 msec. For a Jamulus server with a ping time of 22ms I get an overall delay of 44 ms.

And that is over wireless.

This is on my laptop, not my NUC, and it does not have the Scarlet Solo USB Analog-to-Digital converter on it.

But it is encouraging.

New badge: Fedora 34 CoreOS Test Day !

Posted by Fedora Badges on May 04, 2021 08:06 PM
Fedora 34 CoreOS Test Day

You helped solidify the core for Fedora 34!

Link-o-Rama: FTP is 50, stick with email, FVWM(3) …

Posted by Joe Brockmeier on May 04, 2021 01:59 PM

The unintentional theme of today’s Link-o-Rama is, apparently, tech nostalgia and why old tools are the best tools. The File Transfer Protocol is now 50...

The post Link-o-Rama: FTP is 50, stick with email, FVWM(3) … appeared first on Dissociated Press.

Rocky Linux, AlmaLinux, CentOS & syslog-ng

Posted by Peter Czanik on May 04, 2021 11:42 AM

Last year, the CentOS project announced a major shift in strategy. Until recently, CentOS Linux has been a rebuild of Red Hat Enterprise Linux (RHEL) sources; each RHEL release was quickly followed by a corresponding CentOS Linux release. While CentOS 7 keeps working this way, CentOS 8 will reach its end of life by the end of this year. The CentOS project is focusing on CentOS Stream. It is a continuous stream of bug fixes and new features.

Some of the users were not happy about the change; that is how Rocky Linux and AlmaLinux were born.

As about 80% of syslog-ng Open Source Edition (OSE) installations run on CentOS and RHEL (if we do not count Kindle devices…), support for CentOS Stream and CentOS Linux alternatives is a recurring question. From this blog, you can learn about CentOS Stream and CentOS Linux alternatives and how the situation is affecting syslog-ng OSE users.

CentOS Linux vs. CentOS Stream

CentOS Linux was a recompilation of RHEL source packages. Of course, it is not this simple; there is a re-branding as well. But from the users’ point of view, it was RHEL for free with a slight delay.

CentOS Stream does not have releases in a traditional sense. New features and bug fixes are integrated continuously. At Devconf.cz and Red Hat Summit, various Red Hat managers described CentOS Stream as a development version of RHEL where new technologies are tested, and people can try new technologies before they enter RHEL. However, CentOS and Fedora developers consider CentOS Stream a stable and production-ready operating system. As you will see, most likely both sides are right.

Talking to many syslog-ng and CentOS Linux users, the attitude towards CentOS Stream can be very different based on company size and job function. As with every generalization, it is not valid for everybody; still, I see three major trends:

  • Large companies with dedicated OS teams love CentOS Stream. They receive new features as soon as they are available, there is no need to wait for the next release. Inside their companies they can do their own operating system releases based on CentOS Stream. A good talk about it was given by Facebook at DevConf.cz, but I had similar impressions in private discussions as well.

  • Developers too like CentOS Stream: instead of larger jumps, they receive changes as they appear in much smaller doses.

  • The rest of the users, mostly smaller and mid-sized companies are not too enthusiastic about the change. Some already switched to something completely different, others are waiting for CentOS Linux alternatives. They do not have resources to tailor their infrastructure to an ever-changing CentOS Stream, traditional releases and occasional jumps suit their needs better.

CentOS Linux alternatives

Red Hat says that if you want to use traditional releases, the best alternative to CentOS Linux is to use RHEL. Of course, it costs money, but if you are a small business, you can get free RHEL licenses for up to 16 hosts.

From those who already did the jump to another distribution, many switched to Ubuntu LTS. Others changed to openSUSE Leap, which is the closest non-Red Hat OS to CentOS in many ways. But I even talked to CentOS Linux users who ended up running FreeBSD. And – while many syslog-ng users consider Oracle Linux almost as controversial as CentOS Stream – quite a few switched to this RHEL clone.

Many CentOS Linux users are still waiting. The reason is that there are two more CentOS Linux replacements under way.

  • AlmaLinux was made by the same team who created CloudLinux OS. It is in release candidate phase, and it has been available for months. Some people use it already in production.

  • Rocky Linux was created from scratch by a large community of developers led by Gregory Kurtzer, who was also part of the original CentOS Linux effort. Their first release candidate arrived on 1st May.

And what about syslog-ng support?

I maintain syslog-ng in the official EPEL repository and also maintain some unofficial syslog-ng repositories. So, no wonder that I was a bit nervous when I learned that instead of CentOS Linux and RHEL we will have RHEL, CentOS Stream, AlmaLinux, Rocky Linux, and even Oracle Linux.

The good news is that after initial testing, it does not seem to create any extra work for me. Of course, I could not test all the syslog-ng features on all platforms, but anything I tested worked fine regardless of the distribution I used. Both the EPEL packages and my unofficial packages work perfectly. I even test CentOS Stream regularly, and while I ran into minor problems and inconveniences, none of those affected syslog-ng in any way until now.

What is next?

I do not have a recommendation here for the OS. CentOS Linux 8 is going to reach its end of life at the end of the year. Make sure that you switch to CentOS Stream or one of its alternatives during the summer or early autumn. No matter what you choose, syslog-ng will work just fine on the OS of your choice. If you report a problem, I will be able to test it on RHEL, CentOS Stream, Alma Linux, Rocky Linux, and even on Oracle Linux. But based on my current experiences, I do not expect any differences between these platforms.


If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

Fedora on the Pinebook Pro

Posted by Peter Robinson on May 04, 2021 11:37 AM

First thing to note here is that this is not limited to the Pinebook Pro, I’m just using it as the example for 64 bit Rockchip devices with SPI flash on Fedora. This post is focused on devices with SPI but I’ll do a separate follow-up post for other devices including details for writing to eMMC over USB.

The story of Fedora on the Pinebook Pro, and other Rockchip devices, has been a sordid story of a lack of time, bugs, rabbit holes, more bugs and various other things. Not really all that sordid, mostly just a lack of time on my part, and nobody else stepping up to assist in a way that benefits all Fedora users; mostly people do one-time hacks to sort themselves out. Overall the support in Fedora for Rockchip devices has been quite solid for a number of releases. The problem has been with the early boot firmware, notable because without SPI flash it wants to splat itself across the first 8MB of the disk, and if there was SPI flash it generally wasn’t overly stable/straightforward.

Anyway we’re now in a place where devices with SPI flash should mostly work just fine, those devices without it will work with a little manual intervention, and while the support isn’t complete, and will need more polish, they’re all details we can polish with little interruption to users by standard package updates. By default users will have accelerated graphics and from my testing on GNOME 40 it’s by all accounts a pretty decent experience!

Setting up the firmware

First step is to get the firmware written to SPI flash. This is a two step process, the first is to write out a micro SD card from another device, the second is to boot that mSD card on the Pinebook Pro, or another device like the Rockpro64, and write the firmware to the SPI flash.

There’s some nuances to this process, and the way the early boot firmware works, if another version of U-Boot takes precedence that is likely OK as it should still be able to work, the fall back is to use the internal switch to turn off the eMMC temporarily. I also have no idea if the Pine64 shipped U-Boot has any display output, the Fedora build does, if not you’ll need to use the option to disable eMMC or use a serial console cable. Anyway on to the steps:

Set up the mSD card
Use a mSD card that has no data you wish to keep, as this process will wipe it out. You want at least U-Boot build 2021.04-3.fc34. You can adjust the umount to be more specific, and you need to substitute XXX for the media; otherwise it’s a relatively quick and straightforward process:

sudo dnf install --enablerepo=updates-testing -y arm-image-installer uboot-images-armv8
sudo umount /run/media/<username>/*
sudo spi-flashing-disk --target=pinebook-pro-rk3399 --media=/dev/XXX

Write the firmware to flash
Now remove the mSD card from your host and put it into the Pinebook Pro. Press the power button; from experience you likely need to press and momentarily hold it, and in a second or two the display will light up with text output. Interrupt the boot by pressing space. Next up we write out the flash:

Hit any key to stop autoboot:  0 
=> ls mmc 1:1
   167509   idbloader.img
   335872   idbloader.spi
   975872   u-boot.itb
  9331712   u-boot-rockchip.bin

4 file(s), 0 dir(s)

=> sf probe
SF: Detected gd25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB

=> load mmc 1:1 ${fdt_addr_r} idbloader.spi
335872 bytes read in 39 ms (8.2 MiB/s)

=> sf update ${fdt_addr_r} 0 ${filesize}
device 0 offset 0x0, size 0x52000
61440 bytes written, 274432 bytes skipped in 0.803s, speed 427777 B/s

=> load mmc 1:1 ${fdt_addr_r} u-boot.itb
975872 bytes read in 107 ms (8.7 MiB/s)

=> sf update ${fdt_addr_r} 60000 ${filesize}
device 0 offset 0x60000, size 0xee400
914432 bytes written, 61440 bytes skipped in 9.415s, speed 106127 B/s

Once the last command above has completed, eject the mSD card and type reset at the => prompt. The device should reboot and you should see output similar to before, but now running from the SPI flash!

If you had to turn off the eMMC you can now turn it back on.

Installing Fedora

The nice thing with the firmware on SPI flash is that the device should now work mostly like any other laptop, and you can use either the pre-canned desktop images (Workstation, KDE, XFCE, Sugar), the Workstation LiveCD iso or the standard everything network installer.

To run the arm Workstation image off a micro SD card or USB stick you can do the following:

arm-image-installer --media=/dev/XXX --resizefs --target=none --image=Fedora-Workstation-34-1.2.aarch64.raw.xz

Note ATM you’ll need to use the USB port on the right hand side, I need to investigate the USB/USB-C port on the left as it appears not to currently work in firmware, but works fine once Fedora is running.

Next steps and improvements

The two biggest issues remaining for the Pinebook Pro are enabling PCIe support and the lack of the brcmfmac firmware, both WiFi and bluetooth, being upstream. For the latter issue, if there’s anyone from Synaptics that can assist in resolving that problem please reach out to me! An interim WiFi firmware to use is here.

Some things at the Fedora level I’ve not really tested, and will test more and likely polish with OS updates in the coming weeks, include sound and the USB-C port (charging and display output). On the firmware level there are still some more improvements to be done: tweaks to improve the USB support, turning on the power LED as early as possible to give an indicator, improvements to the EFI framebuffer to ensure consistent early boot output, support for UEFI BGRT to enable smooth boot, etc.

For support please email the Fedora Arm mailing list or reach out on IRC via #fedora-arm on Freenode.

Community Blog monthly update: April 2021

Posted by Fedora Community Blog on May 04, 2021 08:00 AM
Community Blog update

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think.

Stats

In April, we published 19 posts. The site had 4,326 visits from 2,704 unique viewers. 984 visits came from search engines, while 525 came from Twitter and 96 from Fedora Planet.

The most read post last month was Fedora Linux 34 Upgrade Test Day.

Badges

Last month, contributors earned the following badges:

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly update: April 2021 appeared first on Fedora Community Blog.

Passing SSL configuration to Hackney

Posted by Josef Strzibny on May 04, 2021 12:00 AM

If you depend on Erlang’s Hackney library or an Elixir HTTP library built on Hackney, changes are your SSL configuration is wrong.

An investigation into a sudden SSL error revealed how one could easily create a wrong SSL configuration. Imagine using HTTPoison to make a GET request, and it returns an SSL error complaining about the certificate authority. So you might try to fiddle with the defaults, like trying to pass a specific TLS version:

options = [ssl: [{:versions, [:'tlsv1.2']}]]

HTTPoison.get! "https://...", [], options

It worked!

Well, it didn’t. It just didn’t complain to you because passing the required TLS version is overriding en entire ssl option! So now you are missing :verify, :cacertfile, :verify_fun, and customize_hostname_check SSL options.

So, always remember to pass the entire ssl list of options when working with HTTPoison or Hackney.

Here’s an example:

options = [
  ssl: [
    {:versions, [:'tlsv1.2']},
    {:verify, :verify_peer},
    {:cacertfile, :certifi.cacertfile()},
    {:verify_fun, &:ssl_verify_hostname.verify_fun/3},
    {:customize_hostname_check, [
      match_fun: :public_key.pkix_verify_hostname_match_fun(:https)]}
  ]
]

HTTPoison.get! "https://...", [], options

A sudden Hackney SSL’s unknown certificate authority error

Posted by Josef Strzibny on May 04, 2021 12:00 AM

A small report on how upgrading OTP to version 23 brought out unknown certificate authority errors when making requests from HTTP libraries based on Hackney.

One morning, we started to receive errors similar to the one below:

An error occurred while making the network request. The HTTP client returned the following reason: {:tls_alert, {:unknown_ca, 'TLS client: In state certify at ssl_handshake.erl:1895 generated CLIENT ALERT: Fatal - Unknown CA\\n'}}

We use HTTPoison, which in turn uses Erlang’s Hackney HTTP library. Suddenly all outgoing requests could no longer validate the authority of the SSL connection.

The error wasn’t extremely detailed:

CLIENT ALERT: Fatal - Unknown CA

Because we needed the service to be operational immediately, we deployed a hotfix that turns off the SSL checks. Since we use HTTPoison, it looks like this:

HTTPoison.post(
  @api_url,
  params,
  headers,
  pool: :custom_pool,
  hackney: [:insecure]
)

If you use Hackney pools, it’s fair to mention that these options are reused for connections in the pool.

After we saved the production from a collapse, I needed to find out what happened and how it could happen to us at all.

My investigation revealed that the issue happens on OTP 23, which we now run in production. I knew it was something with the new OTP and Hackney but didn’t know what precisely at first. But after a little bit of searching, I found it. It turned out to be an OTP 23 change regarding TLS hostname validation when providing a custom verify_fun function (which Hackney indeed does).

Then it wasn’t difficult to find that it’s also already fixed in the latest Hackey 1.17.0. So, upgrade people!

Finally, it’s worth mentioning how can an OTP upgrade happen out of the blue. We use a multi-stage Docker build, and we make the release first from the alpine-elixir:latest image. I wasn’t creating the Dockerfile, so I had no idea we are upgrading OTP like that.

I am not saying it’s a bad practice per se, but I don’t advise it without some additional safeguards.

The last of the Intel Mac Mini is upon us!

Posted by Jon Chiappetta on May 03, 2021 07:44 PM

So before Apple’s last event, I decided to buy a brand-new-yet-also-pre-out-dated Intel Mac Mini to use as a WiFi bridge / router / firewall in place of the Linksys WRT32X. It took me a little bit to re-figure out the BSD Packet Filter firewall again but I got some good routing speeds out of it (I had to use the NAT option in PF because without it I was only getting ~45MB/s vs the Linksys +80MB/s — I dunno why, maybe some sort of kernel level network driver bug going on?). Anyway, I chose to order the Intel version for the following reasons (as of writing this post):

  • Intel 6-Core i5 CPU
  • Optioned 16GB RAM
  • Upgraded to 10-GigE
  • 802.11ac-3×3 WiFi Radios
  • VirtualBox VMs with Debian Linux (Bridgeable Network Adapters)
  • 4x Thunderbolt-3 Ports (plus a Sonnet Solo10G Ethernet Adapter)
  • It signals the end of an x86-era which won’t exist much longer!

[Short Tip] Add a path entry to Nushell

Posted by Roland Wolters on May 03, 2021 03:48 PM

Adding a path in nushell is pretty straightforward: the configuration is done in ~/.config/nu/config.toml in the [path] section.

If you don’t have it, make sure that the default entries are listed there as well when you start bringing in your own directories. The fastest way to populate your configuration with the default entries is to ask nushell to do it: config set path $nu.path

Next, add the directories you need:

path = ["/usr/local/bin", "/usr/local/sbin", "/usr/bin", "/usr/sbin","/home/rwolters/go/bin"]

In the above example I added the default go binary directory to the list.


Configure WireGuard VPNs with NetworkManager

Posted by Fedora Magazine on May 03, 2021 08:00 AM

Virtual Private Networks (VPNs) are used extensively. Nowadays there are different solutions available which allow users access to any kind of resource while maintaining their confidentiality and privacy.

Lately, one of the most commonly used VPN protocols is WireGuard because of its simplicity, speed and the security it offers. WireGuard’s implementation started in the Linux kernel but it is currently available on other platforms such as iOS and Android, among others.

WireGuard uses UDP as its transport protocol and it bases the communication between peers upon Cryptokey Routing (CKR). Each peer, either server or client, has a pair of keys (public and private) and there is a link between public keys and allowed IPs to communicate with. For further information about WireGuard please visit its page.

This article describes how to set up WireGuard between two peers: PeerA and PeerB. Both nodes are running Fedora Linux and both are using NetworkManager for a persistent configuration.

WireGuard setup and networking configuration

You are only three steps away from having a persistent VPN connection between PeerA and PeerB:

  1. Install the required packages.
  2. Generate key pairs.
  3. Configure the WireGuard interfaces.

Installation

Install the wireguard-tools package on both peers (PeerA and PeerB):

$ sudo -i
# dnf -y install wireguard-tools

This package is available in the Fedora Linux updates repository. It creates a configuration directory at /etc/wireguard/. This is where you will create the keys and the interface configuration file.

Generate the key pairs

Next, use the wg utility to generate both public and private keys on each node:

# cd /etc/wireguard
# wg genkey | tee privatekey | wg pubkey > publickey
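The private key should remain readable by root only. As an optional extra step (not strictly required by this procedure), check the generated values and tighten the permissions:

# chmod 0600 /etc/wireguard/privatekey
# cat /etc/wireguard/publickey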

Configure the WireGuard interface on PeerA

WireGuard interfaces use the names wg0, wg1 and so on. Create the configuration for the WireGuard interface. For this, you need the following items:

  • The IP address and MASK you want to configure in the PeerA node.
  • The UDP port where this peer listens.
  • PeerA’s private key.
# cat << EOF > /etc/wireguard/wg0.conf
[Interface]
Address = 172.16.1.254/24
SaveConfig = true
ListenPort = 60001
PrivateKey = mAoO2RxlqRvCZZoHhUDiW3+zAazcZoELrYbgl+TpPEc=

[Peer]
PublicKey = IOePXA9igeRqzCSzw4dhpl4+6l/NiQvkDSAnj5LtShw=
AllowedIPs = 172.16.1.2/32
EOF

Allow UDP traffic through the port on which this peer will listen:

# firewall-cmd --add-port=60001/udp --permanent --zone=public
# firewall-cmd --reload
success

Finally, import the interface profile into NetworkManager. As a result, the WireGuard interface will persist after reboots.

# nmcli con import type wireguard file /etc/wireguard/wg0.conf
Connection 'wg0' (21d939af-9e55-4df2-bacf-a13a4a488377) successfully added.
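Importing the profile normally activates the connection as well. If it does not come up, or you want to cycle it by hand, plain nmcli commands work (generic nmcli usage, not an extra step from this procedure):

# nmcli connection up wg0
# nmcli connection down wg0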

Verify the status of device wg0:

# wg
interface: wg0
  public key: FEPcisOjLaZsJbYSxb0CI5pvbXwIB3BCjMUPxuaLrH8=
  private key: (hidden)
  listening port: 60001

peer: IOePXA9igeRqzCSzw4dhpl4+6l/NiQvkDSAnj5LtShw=
  allowed ips: 172.16.1.2/32

# nmcli -p device show wg0

===============================================================================
                             Device details (wg0)
===============================================================================
GENERAL.DEVICE:                         wg0
-------------------------------------------------------------------------------
GENERAL.TYPE:                           wireguard
-------------------------------------------------------------------------------
GENERAL.HWADDR:                         (unknown)
-------------------------------------------------------------------------------
GENERAL.MTU:                            1420
-------------------------------------------------------------------------------
GENERAL.STATE:                          100 (connected)
-------------------------------------------------------------------------------
GENERAL.CONNECTION:                     wg0
-------------------------------------------------------------------------------
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveC>
-------------------------------------------------------------------------------
IP4.ADDRESS[1]:                         172.16.1.254/24
IP4.GATEWAY:                            --
IP4.ROUTE[1]:                           dst = 172.16.1.0/24, nh = 0.0.0.0, mt =>
-------------------------------------------------------------------------------
IP6.GATEWAY:                            --
-------------------------------------------------------------------------------

The above output shows that interface wg0 is connected. It is now able to communicate with one peer whose VPN IP address is 172.16.1.2.

Configure the WireGuard interface on PeerB

It is time to create the configuration file for the wg0 interface on the second peer. Make sure you have the following:

  • The IP address and MASK to set on PeerB.
  • PeerB's private key.
  • PeerA's public key.
  • PeerA's IP address or hostname, and the UDP port on which it is listening for WireGuard traffic.
# cat << EOF > /etc/wireguard/wg0.conf
[Interface]
Address = 172.16.1.2
SaveConfig = true
PrivateKey = UBiF85o7937fBK84c2qLFQwEr6eDhLSJsb5SAq1lF3c=

[Peer]
PublicKey = FEPcisOjLaZsJbYSxb0CI5pvbXwIB3BCjMUPxuaLrH8=
AllowedIPs = 172.16.1.254/32
Endpoint = peera.example.com:60001
EOF

The last step is to import the interface profile into NetworkManager. As mentioned before, this allows the WireGuard interface to have a persistent configuration after reboots.

# nmcli con import type wireguard file /etc/wireguard/wg0.conf
Connection 'wg0' (39bdaba7-8d91-4334-bc8f-85fa978777d8) successfully added.

Verify the status of device wg0:

# wg
interface: wg0
  public key: IOePXA9igeRqzCSzw4dhpl4+6l/NiQvkDSAnj5LtShw=
  private key: (hidden)
  listening port: 47749

peer: FEPcisOjLaZsJbYSxb0CI5pvbXwIB3BCjMUPxuaLrH8=
  endpoint: 192.168.124.230:60001
  allowed ips: 172.16.1.254/32

# nmcli -p device show wg0

===============================================================================
                             Device details (wg0)
===============================================================================
GENERAL.DEVICE:                         wg0
-------------------------------------------------------------------------------
GENERAL.TYPE:                           wireguard
-------------------------------------------------------------------------------
GENERAL.HWADDR:                         (unknown)
-------------------------------------------------------------------------------
GENERAL.MTU:                            1420
-------------------------------------------------------------------------------
GENERAL.STATE:                          100 (connected)
-------------------------------------------------------------------------------
GENERAL.CONNECTION:                     wg0
-------------------------------------------------------------------------------
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveC>
-------------------------------------------------------------------------------
IP4.ADDRESS[1]:                         172.16.1.2/32
IP4.GATEWAY:                            --
-------------------------------------------------------------------------------
IP6.GATEWAY:                            --
-------------------------------------------------------------------------------

The above output shows that interface wg0 is connected. It is now able to communicate with one peer whose VPN IP address is 172.16.1.254.

Verify connectivity between peers

After executing the procedure described above, both peers can communicate with each other through the VPN connection, as demonstrated in the following ICMP test:

[root@peerb ~]# ping 172.16.1.254 -c 4
PING 172.16.1.254 (172.16.1.254) 56(84) bytes of data.
64 bytes from 172.16.1.254: icmp_seq=1 ttl=64 time=0.566 ms
64 bytes from 172.16.1.254: icmp_seq=2 ttl=64 time=1.33 ms
64 bytes from 172.16.1.254: icmp_seq=3 ttl=64 time=1.67 ms
64 bytes from 172.16.1.254: icmp_seq=4 ttl=64 time=1.47 ms

In this scenario, if you capture UDP traffic on port 60001 on PeerA, you will see the WireGuard protocol communication and the encrypted data:

<figure class="wp-block-image size-large is-style-rounded"><figcaption>Capture of UDP traffic between peers relying on WireGuard protocol</figcaption></figure>

Conclusion

Virtual Private Networks (VPNs) are very common. Among a wide variety of protocols and tools for deploying a VPN, WireGuard is a simple, lightweight and secure choice. It allows secure point-to-point connections between peers based on Cryptokey Routing, and the procedure is very straightforward. In addition, NetworkManager supports WireGuard interfaces, allowing persistent configuration across reboots.

Taking a break

Posted by Daniel Vrátil on May 03, 2021 08:00 AM

I’ve seen lots of posts like this in the past, never thought I’d be writing one myself.

I haven't been contributing to KDE very actively for the past months. It's been rather frustrating, because I felt like I had to contribute something, fix some bugs, finish some feature… but whenever I had the time to work on PIM, I just couldn't bring myself to do anything. Instead I found myself escaping to other projects or just playing games.

It took me a while to realize that the problem was that I was putting pressure on myself to contribute even though I did not feel like it. It turned from a hobby and a passion into a duty, and that's wrong.

I think the main frustration comes from the feeling that I cannot innovate - I’m bound by various restrictions - libraries and languages I can use, APIs I must preserve/conform to, legacy behavior to not break anything for existing users… This has been taking away the fun. I have enough of this in my dayjob, thank you. So….

I decided to take a break from KDE PIM for a while. I’m sure I’ll be back at some point. But right now I feel like I gave it all I could and it’s still not where I’d like it to be and it’s no longer fun for me. What makes me very happy is the number of new contributors that have appeared over the past year or so.

Instead of hacking on KDE PIM I went back to some of my older projects: I improved Passilic, the Pass password manager frontend for Sailfish OS, and revived my old Android app to sync Facebook events with the Android calendar.

I also started playing with C++20 coroutines and how they could be used with Qt. The result is the QCoro library. I'll blog about that soon and, hopefully, will talk about it in more depth in two months at Akademy (see, I'm not leaving completely 😉).

Finally, I spent the past week building a remote-controlled car using Lego, Arduino and a mobile app I wrote (with Qt, of course 😉). I’ll blog about that as well (spoiler alert: it’s fun!).

See y’all around!

/Dan

Episode 269 – Do not experiment on the Linux Kernel

Posted by Josh Bressers on May 03, 2021 12:01 AM

Josh and Kurt talk about the University of Minnesota experimenting on the Linux Kernel. There’s a lot to unpack in this one, but the TL;DR is you probably don’t want to experiment on the kernel.

Listen: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_269_Do_not_experiment_on_the_Linux_Kernel.mp3

Show Notes

A retrospective on the adoption of the new Fedora logo

Posted by Charles-Antoine Couret on May 02, 2021 10:45 AM

The Fedora design team had been working since late 2018 on a change to the Fedora logo. The team put forward two options and gathered constructive feedback, which led to the final result.

If you read English, or if you want to see all of the intermediate experiments, I recommend this excellent article on the subject. What follows summarizes the essentials.

fedora_logo.svg, Apr. 2021

History

There have already been two versions of the Fedora logo, as you can see below, so this is not an unprecedented change, even if the previous one is getting old, dating from around 2005.

First logo: Fedora_Core.png, Jan. 2019

Second and current logo: fedora-2005.png, Jan. 2019

Why the change?

A logo that is hard to work with

First of all, there is a rendering problem. The current logo uses several colors, which makes producing merchandise more complicated, or more expensive depending on the vendor. That matters for the communication work around the project.

Fedora-décomposition.png, Jan. 2019

Next, the logo is harder to see on a dark background, especially a blue one. This comes up in particular when making wallpapers or CD sleeves. For CD sleeves, it was not unusual for the design team to cheat a bit by using a blue gradient and placing the Fedora logo over the lighter part. But for a web page, it is trickier to guarantee the logo's position relative to the lightness of the blue background, which depends on the size of the visitor's screen.

Jacquette-F12.png, Jan. 2019

Because of the logo's composition (the text plus the bubble with the famous F), it is difficult to center the elements and to work out the spacing between them, whether vertically or horizontally.

Finally, the typeface chosen at the time has a flaw: the final "a" looks too much like an "o", which obviously gets in the way of communication.

Possible confusion

The Fedora bubble with its F looks too much like the Facebook logo. That may raise a smile, since the logos are different after all, but it is in fact common (I have experienced it, as have other ambassadors) for people outside the community to confuse the two. It has apparently been a recurring remark since 2009-2010, when the social network started to spread.

A matter of consistency, for freedom!

Fedora aims to be a free (libre) distribution; that is an important part of the project. Yet until now the typeface used for the text of the logo was not free: it is the 2005 version of Bryant. At the time this was justifiable, since there were few high-quality free fonts, but times have changed. Red Hat, Google, other companies and hobbyists have worked on the problem, and the choice today is much wider. To respect the project's principles, dropping a proprietary font seems obvious.

The process

The work got started after a discussion within the Council in October 2018, which led to a ticket being opened with the design team. There was quite a bit of experimentation and thinking around the logo: playing with the F, with the infinity symbol, with the perspective, or reshaping the bubble.

This is not a decision taken lightly; changing a logo has a big impact. All references to the logo will have to be updated over time, on the project's website as well as on unofficial but Fedora-related sites such as fedora-fr.org.

And because of the current logo's inertia, adopting the next one will take time, whether on news sites, on merchandise in use and being distributed, or on the various sites that mention Fedora, such as Wikipedia.

That is why the process took time. Only in early January 2019 did the design team present two proposals to the community, shown below, in order to collect feedback, improve the rendering and choose one of the two.

And finally, at the end of March 2019, the Fedora Council gave its final verdict, which allowed the adoption procedures to begin, such as checking with Red Hat's lawyers that the logo raises no issues, or starting the promotion campaign for its definitive adoption.

The result

The two illustrations below are the initial proposals; they show the different variants of the logo and give an example of use. The chosen typeface is Comfortaa by Johan Aakerlund, slightly modified for the occasion.

Here is the first proposal: Proposition1.png, Jan. 2019

And the second: Proposition2.png, Jan. 2019

The final version chosen is based on the first proposal, because it is closer to the current logo. It was reworked based on the various opinions; that polishing is explained on Máirín Duffy's blog. And here is the final rendering:

Nouveau_logo_final_de_Fedora.png, Apr. 2021

Virtual Reality

Posted by Radka Janek on May 01, 2021 12:01 PM

After the summer of 2020 ended I got entangled in VR (pun intended), as the weather got unbearably disgusting for mountain biking and covid still ruled over the world. At the time of writing this blog post, both are still true.

What gear did you get?

I've got the Valve Index with 5 base stations and an extra pair of controllers, so I can just swap in a fresh pair when one runs out of power. Oh, and 6 trackers - both Vive 2.0 and 3.0 - and I will be testing the Tundra trackers when they happen for real. If they do? We shall see. I did get powerbanks for them, but I stopped using them when I got the 2nd set of trackers; now I just swap between the two sets. Powerbanks are heavy. Very heavy. They cause wobble and ruin the immersion for my victims. I intimidate (innocent-dom?) people in VR and generally roleplay a lot. Flawless tracking matters to me.

Rhea in VR

This setup is arguably the best one could buy - even the Pimax 8K doesn't compare at all. The only advantage of the Pimax is better visual fidelity, but it has worse audio, comfort, ease of use, support, durability, … And you can't even use prescription lenses with it.

How much? Well, let's do the back-of-the-envelope math:

  • Index Kit 1300$
  • Extra controllers 370$
  • Extra base stations 3x 200$
  • Vive 2.0 trackers 3x 120$
  • Vive 3.0 trackers 3x 170$
  • Rebuff Reality TrackStrap (two sets) 2x 80$
  • Rebuff Reality powerbanks 3x 45$
  • Spare Index cable (cuz I did break one and support was slow) 190$
  • Kiwi cooler for Index 55$
  • Prescription Lenses 140$

The total sum is about $4000, rounded up for other minor expenses (cables, charger, …). Insane, isn't it? Wait for it… the 5950X and the nVidia 3090 add another $3500!

The tracking is indeed flawless, and when paired with a good pilot and avatar, the experience for the people around you (your victims) is just amazing. Worth it. No regrets.

WHY?!

I don’t know. Don’t do it.

Seriously though? It's an amazing experience. I got to roleplay with some of the legends, I got to dance with some of the legends, and I found countless friends for life. Do it. Don't hesitate. (Get a mountain bike first though. And actually use it.)

Wanna give it a try? Look up up-to-date reviews; see ThrillSeeker or some other YouTubers. At the time of writing this, the TLDR is: the Index is the best. The Quest 2 is the cheapest, and wireless. (Yes, you still need a PC with the Quest 2. It's a damn mobile phone, what do you expect?)

Valhalla

You can join our VR adventures in the Discord server Valhalla, or watch them on Twitch.

Fedora 34 on Raspberry Pi 4

Posted by Alex Callejas on April 30, 2021 09:09 PM

With the recent release of Fedora 34, it's time to try it out on the Raspberry Pi 4.

[Figure: raspi4.rootzilopochtli.lab]

Download the raw image and flash it onto the microSD card:

$ sudo arm-image-installer --image=Fedora-Server-34-1.2.aarch64.raw.xz \
  --target=rpi4 --media=/dev/mmcblk0

Once the image has been written, remove the card and insert it into the Raspberry Pi.

On first boot, the image asks you to enter some details (hostname, user creation, root password, etc.), finishing with the system initialization.

After logging in with the newly created user, I noticed I had no network, since there is no Ethernet cable connected.

A quick inspection shows that the wireless device is not coming up:

[root@raspi4 ~]# nmcli device wifi list
IN-USE  BSSID  SSID  MODE  CHAN  RATE  SIGNAL  BARS  SECURITY
[root@raspi4 ~]# nmcli device
DEVICE         TYPE      STATE         CONNECTION 
wlan0          wifi      unavailable   --
eth0           ethernet  unavailable   --         
lo             loopback  unmanaged     --         

Checking the journal:

[root@raspi4 ~]# journalctl -t NetworkManager
-- Journal begins at Mon 2021-04-05 19:00:00 CDT, ends at Fri 2021-04-30 15:34:15 CDT. --
...
Apr 05 23:00:31 fedora NetworkManager[971]: <error> [1617681631.5571] device (wlan0): Couldn't initialize supplicant interface: Failed to D-Bus activate wpa_supplicant service
Apr 05 23:00:33 fedora NetworkManager[971]: <info>  [1617681633.9649] manager: startup complete
Apr 05 23:00:42 fedora NetworkManager[971]: <warn>  [1617681642.3040] device (wlan0): re-acquiring supplicant interface (#1).
...

Checking the service:

[root@raspi4 ~]# systemctl status wpa_supplicant
Unit wpa_supplicant.service could not be found.

A bit of googling turned up a Bugzilla report of this problem going back to the Fedora 31 beta: Bug 1756488 – Missing wpa_supplicant in Fedora Server 31-34 Beta.

So we simply download the RPM from here and copy it to the Raspberry Pi on a USB drive. Install it with the rpm command:

[root@raspi4 ~]# rpm -Uvh wpa_supplicant-2.9-12.fc34.aarch64.rpm
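If you prefer not to reboot, starting the freshly installed service by hand should also work; this is a generic systemd step, not part of the original procedure:

[root@raspi4 ~]# systemctl enable --now wpa_supplicant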

After rebooting, the devices show up correctly:

[root@raspi4 ~]# nmcli device
DEVICE         TYPE      STATE         CONNECTION 
wlan0          wifi      disconnected  --         
p2p-dev-wlan0  wifi-p2p  disconnected  --         
eth0           ethernet  unavailable   --         
lo             loopback  unmanaged     --

Rescan the WiFi networks and get the list of SSIDs:

[root@raspi4 ~]# nmcli device wifi rescan
[root@raspi4 ~]# nmcli device wifi list

Select yours and connect to it:

[root@raspi4 ~]# nmcli device wifi connect SSID password SSID-password

Verify the device:

[root@raspi4 ~]# nmcli device
DEVICE         TYPE      STATE         CONNECTION 
wlan0          wifi      connected     SSID

Test the remote connection:

[acalleja@isengard ~]$ ssh dark.axl@192.168.0.20
dark.axl@192.168.0.20's password: 
Web console: https://raspi4.rootzilopochtli.lab:9090/

Last login: Tue Apr  6 00:00:49 2021
[dark.axl@raspi4 ~]$ uname -a
Linux raspi4.rootzilopochtli.lab 5.11.12-300.fc34.aarch64 #1 SMP Wed Apr 7 16:12:21 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

Update the packages:

[dark.axl@raspi4 ~]$ sudo -i
[sudo] password for dark.axl: 
[root@raspi4 ~]# dnf clean all
0 files removed
[root@raspi4 ~]# dnf update

After installing the updates and rebooting, the Raspberry Pi is ready to use.

[dark.axl@raspi4 ~]$ neofetch 
          /:-------------:\          dark.axl@raspi4.rootzilopochtli.lab 
       :-------------------::        ----------------------------------- 
     :-----------/shhOHbmp---:\      OS: Fedora 34 (Server Edition) aarch64 
   /-----------omMMMNNNMMD  ---:     Kernel: 5.11.16-300.fc34.aarch64 
  :-----------sMMMMNMNMP.    ---:    Uptime: 2 hours, 13 mins 
 :-----------:MMMdP-------    ---\   Packages: 941 (rpm) 
,------------:MMMd--------    ---:   Shell: bash 5.1.0 
:------------:MMMd-------    .---:   Resolution: 1920x1080, 1920x1080 
:----    oNMMMMMMMMMNho     .----:   WM: Mutter 
:--     .+shhhMMMmhhy++   .------/   WM Theme: Adwaita 
:-    -------:MMMd--------------:    Theme: Adwaita [GTK3] 
:-   --------/MMMd-------------;     Icons: Adwaita [GTK3] 
:-    ------/hMMMy------------:      Terminal: /dev/pts/0 
:-- :dMNdhhdNMMNo------------;       CPU: (4) @ 1.500GHz 
:---:sdNMMMMNds:------------:        Memory: 295MiB / 7836MiB 
:------:://:-------------::
:---------------------://                                    
                                                             
[dark.axl@raspi4 ~]$ 

I hope you find it useful…

The post Fedora 34 on Raspberry Pi 4 first appeared on .

rpminspect-1.5 released

Posted by David Cantrell on April 30, 2021 06:52 PM

rpminspect 1.5 is now available. There are several new features in this along with the usual round of bug fixes.

The biggest improvement in this release comes in the form of per-inspection ignore lists in the configuration file. rpminspect runs in Fedora CI for successful builds and I have been working with package maintainers on creating rpminspect.yaml files in pkg-git that help further control how rpminspect runs for your build. With per-inspection ignore lists, you can list individual files and paths using standard glob(7) syntax that you want an individual inspection to ignore. This gives more control than simply disabling the inspection or adding the file to the global ignore list. See the /usr/share/rpminspect/generic.yaml configuration file for documentation on how to use per-inspection ignore lists.

The other big change I wanted to point out is being able to set size_threshold to info in the filesize inspection. For Fedora this inspection does not really offer a lot of advantage. It is really meant more for downstream distributions like CentOS and RHEL. Still, some package maintainers want to see overall file growth changes but not have those trigger failures. By setting size_threshold to info you will get all of the findings, but all reported at the INFO level in the results.
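As a rough illustration (the inspection name and paths are made up for the example; see /usr/share/rpminspect/generic.yaml for the authoritative syntax), an rpminspect.yaml in pkg-git combining both features might look something like this:

---
# report overall file size growth, but only at the INFO level
filesize:
    size_threshold: info

# per-inspection ignore list using glob(7) patterns
elf:
    ignore:
        - /usr/lib/debug/*
        - /usr/share/doc/*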

Work on 1.6 is now underway and it has taken me a few days to put this post together. I appreciate everyone’s feedback and pull requests. Please keep it coming. Here’s a summary of this release:

General release and build process changes:

  • Generate regular changelog in utils/srpm.h
  • Skip branches without targets in submit-koji-builds.sh
  • Simplify the utils/determine-os.sh script
  • Fix $(OS) check in the Makefile
  • BuildRequires libmandoc-devel >= 1.14.5

Config file or data/ file changes:

  • Add commented out per-inspection ignore blocks
  • Note all regular expression settings use regex(7) syntax
  • Note size_threshold can be the keyword ‘info’

Changes to the GitHub Actions CI scripts and files:

  • Fedora and CentOS systems in ci need ‘diffstat’
  • opensuse-leap CI job requires ‘diffstat’
  • Fix the Debian CI jobs in GitHub Actions
  • Fix and enable the Ubuntu extra-ci job in GitHub Actions
  • Use ‘pip’ instead of ‘pip3’ for the Ubuntu command
  • Use ‘apt-get -y install’ in ubuntu’s pre.sh
  • Enable the opensuse-tumbleweed GHA job again
  • Make sure the Gentoo GHA job has ‘diffstat’
  • Get the Arch Linux GHA job working again
  • Use ubuntu:latest for the ubuntu GHA image
  • Fix the ubuntu GitHub Actions extra-ci job
  • Make sure the centos8 job has git available before cloning
  • Install cpp-coveralls using pacman on Arch Linux
  • Install cpp-coveralls using pip on Arch Linux
  • Install cpp-coveralls in pre.sh on Arch Linux
  • Install required Python modules in pre.sh on Arch Linux
  • Do not upgrade pip on Arch Linux, go back to using pip.txt
  • Do not run ‘apt-get update’ as a second time on Debians systems
  • Update the OpenSUSE Tumbleweed files, but disable it anyway
  • Manually install mandoc on centos7 for now

rpminspect(1) changes:

  • Allow any number of builds specified for fetch only mode
  • Fix fetch only mode download directory
  • Do not crash with the -c option specifies a non-existent file
  • Remove what working directories we can

Documentation changes:

  • Update license table in README.md
  • Update GitHub Action status badges in README.md
  • Update TODO list

General bug fix in the library or frontend program:

  • Use llabs() instead of labs() in the filesize inspection
  • Improve ‘has invalid execstack flags’ reporting
  • Use long unsigned int to report size changes in ‘patches’
  • Fix some errors in the changedfiles inspection
  • Check DT_SONAME in is_elf_shared_library()
  • Skip debuginfo and debugsource files in abidiff
  • Report INFO level for patches findings by default
  • Handle old or broken versions of libmagic in ‘changedfiles’
  • Use json_tokener_parse_ex() to get better error reporting
  • Fix reading of the javabytecode block in the config file
  • Catch missing/losing -fPIC correctly on .a ELF objects (#352)
  • Refactor elf_archive_tests() and its helper functions
  • Followup fix for find_no_pic, find_pic, and find_all
  • Drop DEBUG_PRINT from source generated by pic_bits.sh
  • Clean up the config file section reading code
  • Perform symbolic owner and group matching in ‘ownership’ (#364)
  • Restrict download_progress() to systems with CURLOPT_XFERINFOFUNCTION
  • Report annocheck failures correctly in librpminspect.
  • Call mparse_reset() before mparse_readfd()
  • Ensure ctxt->lastError.message is not NULL before strdup (#382)
  • Handle corrupt compressed files in ‘changedfiles’ (#382)
  • Correctly find icons for desktop files in subpackages (#367)
  • Followup to the Icon= check in the desktop inspection (#367)

librpminspect feature or significant change:

  • Change strappend() to work as a variadic function
  • Define inspection_ignores in struct rpminspect
  • Add add_ignore() to init.c
  • Stub out libcurl download progress callback function
  • Read per-inspection ignore lists from the config file.
  • Implement per-inspection path ignore support (#351)
  • Allow ‘size_threshold: info’ in the config file (#261)
  • Check ignore list in ‘files’ for path prefixes to ignore (#360)
  • Support a list of expected empty RPMs in the config file (#355)
  • Disable debugging output for the ignore lists in init.c
  • Drop debugging output in the ‘xml’ inspection

Test suite commits:

  • Update the ‘changedfiles’ test cases
  • Make sure abidiff test cases add a DT_SONAME to the test lib
  • Update the test/test_patches.py cases for patches changes
  • The lost PIC tests need to invoke gcc with -fno-PIC
  • Make sure brp-compress is disabled in test_manpage.py

See https://github.com/rpminspect/rpminspect/releases/tag/v1.5 for more information.

There is also a new rpminspect-data-fedora package available. Information on the changes there can be found at https://github.com/rpminspect/rpminspect-data-fedora/releases/tag/v1.5

Where to get these new releases?

Fedora and EPEL 8 users can get new builds from the testing updates collection. If you install from the testing update, please consider a thumbs up in Bodhi. Without that it takes a minimum of two weeks for it to appear in the stable repo.

NOTE: There is a delay getting the EPEL-7 build done because I needed to update the mandoc package there first.

Friday’s Fedora Facts: 2021-17

Posted by Fedora Community Blog on April 30, 2021 06:27 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Fedora Linux 34 was released on Tuesday. Fedora Linux 32 will reach end of life on Tuesday 25 May.

Join us tomorrow for day two of the Fedora Linux 34 Release Party

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

  • Fedora Linux 32 will reach end of life on Tuesday 25 May.

CfPs

<figure class="wp-block-table">
ConferenceLocationDateCfP
All Things Openvirtual17-19 Octcloses 30 Apr
Akademyvirtual18-25 Junecloses 2 May
openSUSE Virtual Conferencevirtual18-20 Junecloses 4 May
DevConf.USvirtual2-3 Sepcloses 31 May
Nest With Fedoravirtual5-8 Augcloses 17 Jul
</figure>

Help wanted

Prioritized Bugs

<figure class="wp-block-table">
Bug IDComponentStatus
1847627kernelNEW
</figure>

Upcoming meetings

Releases

Fedora Linux 34

Schedule

Upcoming key schedule milestones:

  • 2021-04-28 — Elections questionnaire process and nomination period begins.

Fedora Linux 35

Changes

<figure class="wp-block-table">
ProposalTypeStatus
RPM 4.17System-WideApproved
Smaller Container Base Image (remove sssd-client, util-linux, shadow-utils)Self-ContainedApproved
Erlang 24Self-ContainedApproved
Switching Cyrus Sasl from BerkeleyDB to GDBMSystem-WideApproved
Debuginfod By DefaultSelf-ContainedFESCo #2597
Package information on ELF objectsSystem-WideFESCo #2598
CompilerPolicy ChangeSystem-WideAnnounced
Node.js 16.x by defaultSystem-WideAnnounced
</figure>

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-17 appeared first on Fedora Community Blog.

Beginning with nemomobile

Posted by Jozef Mlich on April 30, 2021 05:00 PM
There has been an effort to build a friendly, mobile, open, Qt-based user interface since the days when MeeGo Harmattan and the Nokia N9 were widely used. Jolla Ltd. did a pretty nice job with Sailfish OS in the past. I am still using a Jolla 1, which has now reached its EOL. It's time to look at new options. My … Continue reading "Beginning with nemomobile"

Untitled Post

Posted by Caolán McNamara on April 30, 2021 04:07 PM

Math Selection Rendering

For 7.2, text selection in the Math edit window is now drawn the same way as selection in the main applications. This also affects similar uses of this EditView in LibreOffice, such as Writer comments in the sidebar.

Access freenode using Matrix clients

Posted by Fedora Magazine on April 30, 2021 08:00 AM

Matrix (also written [matrix]) is an open source project and a communication protocol. The protocol standard is open and it is free to use or implement. Matrix is being recognized as a modern successor to the older Internet Relay Chat (IRC) protocol. Mozilla, KDE, FOSDEM and GNOME are among several large projects that have started using chat clients and servers that operate over the Matrix protocol. Members of the Fedora project have discussed whether or not the community should switch to using the Matrix protocol.

The Matrix project has implemented an IRC bridge to enable communication between IRC networks (for example, freenode) and Matrix homeservers. This article is a guide on how to register, identify and join freenode channels from a Matrix client via the Matrix IRC bridge.

Check out Beginner’s guide to IRC for more information about IRC.

Preparation

You need to set everything up before you register a nick. A nick is a username.

Install a client

Before you use the IRC bridge, you need to install a Matrix client. This guide will use Element. Other Matrix clients are available.

First, install the Matrix client Element from Flathub on your PC. Alternatively, browse to element.io to run the Element client directly in your browser.

Next, click Create Account to register a new account on matrix.org (a homeserver hosted by the Matrix project).

Create rooms

For the IRC bridge, you need to create rooms with the required users.

First, click the ➕ (plus) button next to People on the left side in Element and type @appservice-irc:matrix.org in the field to create a new room with the user.

Second, create another new room with @freenode_NickServ:matrix.org.

Register a nick at freenode

If you have already registered a nick at freenode, skip the remainder of this section.

Registering a nickname is optional, but strongly recommended. Many freenode channels require a registered nickname to join.

First, open the room with appservice-irc and enter the following:

!nick <your_nick>

Substitute <your_nick> with the username you want to use. If the nick is already taken, NickServ will send you the following message:

This nickname is registered. Please choose a different nickname, or identify via /msg NickServ identify <password>.

If you receive the above message, use another nick.

Second, open the room with NickServ and enter the following:

REGISTER <your_password> <your_email@example.com>

You will receive a verification email from freenode. The email will contain a verification command similar to the following:

/msg NickServ VERIFY REGISTER <your_nick> <verification_code>

Ignore /msg NickServ at the start of the command. Enter the remainder of the command in the room with NickServ. Be quick! You will have 24 hours to verify before the code expires.

Identify your nick at freenode

If you just registered a new nick using the procedure in the previous section, then you should already be identified. If you are already identified, skip the remainder of this section.

First, open the room with @appservice-irc:matrix.org and enter the following:

!nick <your_nick>

Next, open the room with @freenode_NickServ:matrix.org and enter the following:

IDENTIFY <your_nick> <your_password>

Join a freenode channel

To join a freenode channel, press the ➕ (plus) button next to Rooms on the left side in Element and type #freenode_#<your_channel>:matrix.org. Substitute <your_channel> with the freenode channel you want to join. For example, to join the #fedora channel, use #freenode_#fedora:matrix.org. For a list of Fedora Project IRC channels, see Communicating_and_getting_help — IRC_for_interactive_community_support.

Further reading

PHP version 7.3.28 ,7.4.18 and 8.0.5

Posted by Remi Collet on April 30, 2021 07:36 AM

RPMs of PHP version 8.0.5 are available in remi-php80 repository for Fedora 32-34 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.4.18 are available in the remi repository for Fedora 32-34 and in the remi-php74 repository for Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.3.28 are available in remi-php73 repository for Enterprise Linux (RHEL, CentOS).

PHP version 7.2 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as module for Fedora 32-34 and EL-8.

These versions fix a few security bugs, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.0 installation (simplest):

yum-config-manager --enable remi-php80
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

And soon in the official updates:

To be noted:

  • EL-8 RPMs are built using RHEL-8.3
  • EL-7 RPMs are built using RHEL-7.9
  • EL-7 builds now use libicu65 (version 65.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • the oci8 extension now uses Oracle Client version 21.1
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php73 / php74 / php80)

Tech tip: using mutt to access mailfence.com

Posted by Harish Pillay 9v1hp on April 30, 2021 01:34 AM

tl;dr: make sure smtp_url definition starts as smtps.

I needed to set up access to a paid-for email provider, mailfence.com, via mutt. Yes, they do have web-based access, but real email users use mutt (and I've been using it since about 1998).

At least they provide standard IMAP service, which is the Right Thing, and they also support GPG signing and encryption built in. I have not checked other providers, but for an email service to offer standard GPG is a Big Win in my books.

I am writing this down more as a reminder to myself, as well as a means of documenting how I got mutt to work with mailfence.

The most important thing to have set up right is the .muttrc file. Here’s what I’ve created for accessing mailfence.com.

### Hosts
#
set hostname=<MY NEW DOMAIN>

### Paths

set mailcap_path="~/.mutt/mailcap-for-mutt:/etc/mailcap"
set mbox=+read
set record=+out
set postponed=+postponed
set signature="~/.mutt/signature"
set tmpdir="/tmp"
set alias_file=~/.mutt/aliases
set header_cache="$HOME/.mutt/.mutt_header_cache-mailfence"

### IMAP Stuff

set imap_user="<USERNAME>@mailfence.com"
set imap_idle=yes
set imap_pass="<PASSWORD>" 

#---  SSL Specific Settings

set ssl_force_tls = yes

### External commands
#
set editor="vi"
set visual="vi"
set smtp_url = "smtps://<USERNAME>@mailfence.com@smtp.mailfence.com:465"
set smtp_pass = "<PASSWORD>"
set shell="/bin/sh"
set use_from=yes
set realname="Harish Pillay"
set from=harish@<MY NEW DOMAIN>
set envelope_from=yes

### PGP stuff
# 
source ~/.mutt/pgp2.rc
set pgp_auto_decode

### Save hooks
# 
save-hook . +read 

### Aliases / Addressbook
# 
source ~/.mutt/aliases

Note carefully the “smtp_url” showing as “smtps://<USERNAME>@mailfence.com@smtp.mailfence.com:465”

It WILL NOT WORK if the URL is “smtp://…”.

The only thing still pending is how the ‘b’(ounce) command works. Hitting ‘b’ on an email does pull in the mail and send it out, but it fails at the end of sending. I use ‘b’(ounce) a lot so that the email’s headers remain intact, rather than ‘f’(orwarding) it.

Think Different!

Posted by Jon Chiappetta on April 29, 2021 09:49 PM

With lots of Apple colours to choose from:

At some point UI loses usefulness

Posted by Tomasz Torcz on April 29, 2021 06:47 PM

When it comes to configurability, modern software often hits a sweet spot. We are given a nice, usable User Interface (UI) that helps with configuration by hinting, auto-filling and validating fields. Additionally, the configuration itself is stored in text format, making it easy to back up and track changes, for example in the git version control system.

Recently I encountered at least two cases, where the above features conflict.

Argo CD

Argo CD is a wonderful tool to implement GitOps with your Kubernetes cluster.

Kubernetes is configured by plain text files in YAML format. That's a perfect form to track in git. Argo CD provides a synchronization service: what you have in the git repository is applied to Kubernetes. Synchronization can be automatic, or you can opt to sync manually. In the latter case, Argo CD provides a nice diff view, showing what is currently configured and what it should be.

Argo CD also has a nice concept of responsibility boundaries: it cares only about the YAML sections and fields present in the git repo. If you add a new section on the running cluster, it won't be touched. It may even be a single field, for example replicas:, as the sketch below illustrates.
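As a rough illustration (names and image are placeholders, not from this post), a Deployment tracked in git can simply omit spec.replicas, and whatever value exists on the live cluster is then left alone:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  # no "replicas:" here, so the live value is not Argo CD's concern
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/example-app:1.0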

The above can be used when you manage Argo CD with Argo CD itself. The install.yaml file defines configuration resources like ConfigMaps and Secrets, yet it doesn't provide the actual data: sections. When you configure the Argo CD installation (using the nice web UI, no less), data: sections are created and the configuration is stored in the k8s cluster.

Those sections are not part of what is stored in the git repository, so they will neither be touched nor rewritten.

But what happens when we want to store the Argo CD configuration in the repository, and gitops it to the Moon and back?

If we add data: sections, they will be synced. But we lose the ability to use the nice UI directly! As the UI makes changes on the running cluster, Argo CD will notice that the live configuration differs from the one in the git repository. It will overwrite our new configuration, undoing the changes.

If we want to gitops the configuration, we basically must stop using the UI and manually add all changes to the text files in the repository!

Grafana

Grafana is another cool project. It is a graphing/dashboarding/alerting solution which looks pretty and is quite powerful, yet easy to use, mainly because the user interface is a pleasure to use: all changes are visible instantly and we are free to experiment.

Behind the scenes, dashboards are just text (JSON) files. Great, text, let's store it in git! Well…

First of all, the generated JSON tends to be dynamic. If you do some manipulation in the UI, sections in the final file may move relative to each other, even if the content does not change.

Second, those documents tend to be verbose. Like, really. Editing them by hand is not recommended; it is better to use a templating language, for example grafonnet, which is a customisation of jsonnet, a templating language for JSON.

The reader probably sees where this is going. The decision to use grafonnet makes the whole nice UI almost useless, as it only spits out JSON. Again, to gain better control, history and visibility, we must forego one of the main selling points of the software.

Solutions? Are there any?

Frankly, I don't see anything perfect. We have some workarounds, but they feel cumbersome.

For Argo CD we can disable self-healing of an app, that is, disable automatic synchronisation. That way we can still use the UI to do the configuration. Argo CD will notice the out-of-sync status between git and the live cluster. It will also provide a helpful diff, showing exactly how changes made in the UI are reflected in the text configuration.

When we're happy with the changes, we have to extract them from the diff view and commit them to the git repository. Cumbersome. And we lose active countermeasures against configuration drift.
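In Application terms, that knob lives under syncPolicy. A minimal sketch (app name, repo and namespaces are placeholders) that keeps syncing from git but does not revert out-of-band changes could look like this; dropping the whole automated: block instead makes synchronisation fully manual:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/example/config.git
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: example
  syncPolicy:
    automated:
      selfHeal: false   # do not undo changes made outside git (e.g. via the UI)
      prune: false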

The Grafana problem we fight with a sandbox instance. Any (templated) dashboard can be loaded, then customised with clickety-click and exported to JSON. Now the tedious part begins: the new stuff in the JSON needs to be identified, extracted, translated back into the templating language and hand-merged into the grafonnet dashboard definition.

The improved dashboard should be imported into the sandbox again and verified. If it is all right, it can be promoted to the more important environments. Cumbersome².

I'm very interested in better solutions. If you have comments, ideas or links, please use the comments section below!

The final release of Fedora Linux 34 is out

Posted by Fedora fans on April 29, 2021 03:52 PM
Following the beta release of Fedora Linux 34, the Fedora Project development team has now announced the final release of Fedora Linux 34. Fedora 34 is one of the most popular GNU/Linux distributions, always bringing along the latest technologies and tools from the world of free software.

Like its previous releases, Fedora 34 comes with new features and changes, some of the most important of which are mentioned below:

Fedora 34 offers a variety of desktops and is developed and released for several architectures. To download the edition you want, visit the official Fedora Project website:

https://getfedora.org

To download Fedora 34 via torrent, you can use the link below:

https://torrent.fedoraproject.org

Onward to Fedora 34!

 

The post The final release of Fedora Linux 34 is out first appeared on Fedora Fans.