Fedora People

Gangstalking and victim-blaming

Posted by Daniel Pocock on February 28, 2021 07:00 PM


It is ironic that the first person to depart the Biden administration was sanctioned for threatening somebody else's career.

This week Marko Rodriguez went public with news that rogue members of the Apache Software Foundation had decided to persecute him for his commentary on social issues. The board had voted to reclassify satire as a form of prose that "borders" on hate speech. Either it is hate speech or it isn't; to suggest it "borders" on hate speech is a fudge. The sly comparison of these very different types of writing is simply a smear intended to hurt his career.

To put this in perspective, board members who disagreed with this defamation not only voted against it but also chose to resign.

Around Valentine's Day, Brittany Higgins, a former employee of Australia's Minister for Defence, went public with news about being raped on the ministerial sofa. The questions this woman raises are extraordinary. For example, if the Minister for Defence, Linda Reynolds, cannot defend the new girl in the office, how can we rely on her to defend our country?

Brittany Higgins, Australia

Higgins chose not to name the accused publicly; it appears she wishes to focus attention on the culture and the cover-up. The independent outlet True Crimes News Weekly and independent journalist Shane Olsen have identified a suspect. There is now a Twitter hashtag too. A YouTube video shows the former Attorney-General, George Brandis, praising Bruce Lehrmann and other former staff in the presence of High Court justices.

George Brandis (former Attorney General): All of us know how important staff are to us. We spend so much time together, mostly away from home. We share so many experiences that they become like a second family.

As the man departed days after the incident in 2019, the Government appears to have had plenty of time to remove his name from virtually all official websites, although there is no super-injunction (yet) to prevent discussion of his identity.

Against this backdrop, Google admitted that two female researchers subjected to high-profile sackings may have been doing legitimate research. Like Rodriguez and Higgins, both of Google's victims had been pressured to self-censor; they refused, they were shamed, and they bravely chose to put their persecution in the public domain.

All these cases inevitably remind me of others: the growing body count in the free and open source software world.

Higgins' decision to go public helps us all see how a cover-up was built from day one. Her boss, Linda Reynolds, had suggested that pursuing a criminal justice complaint would destroy Miss Higgins' career. In effect, the victim was blackmailed into staying silent. This is the thread that draws all these cases of oppression together. In December 2018, two long-standing volunteers in the free and open source software world, Dr Norbert Preining and I, revealed how we were subjected to blackmail and coercion in our respective roles. In both cases, we received the veiled threats in writing:

We are sending this email privately, leaving its disclosure as your decision (although traces in public databases are unavoidable)

In other words, they are saying that if we call out the coercive nature of their communications, they will seek to destroy us.

When you receive a threat like this from somebody with a history of publicly shaming people on a hideous scale, it really feels like they are holding a knife to your throat.

Chilling.

In my case, the community of volunteers and donors had clearly elected me as the fellowship representative, so this blackmail was an attack on all those who voted. It was my duty to inform people and call it out.

The crimes were very different, but the message seems to be the same: the organization must be protected at any cost. When those in authority do something wrong, the victims have to stay silent and grin and bear it, or some gang will impose an even bigger pain on them.

More on the former Debian Project Leader (DPL), Chris Lamb, giving negative references for volunteers

One volunteer sent me the following comments about Chris Lamb. Many people who received copies of the defamation have shown it to the survivors:

Volunteer: But I am scared that Lamb actually also hosed an application for a company in NY, a job related to Debian. If that has happened, and I can reasonably document it, I would consider a defamation law suit

When the leader of any organization, whether it is Apache, Debian or Google, uses the authority of their position to push defamation, it is like using the height of a bridge to stand above a freeway and drop bricks onto the cars underneath. Lamb may not fear consequences for his actions: his father, Robert Lamb, is a barrister who appears well qualified to stifle any volunteers seeking redress.

James Jamerson: The unsung Motown bassist that influenced Paul McCartney

Posted by Fedora Cloud Working Group on February 27, 2021 10:29 PM

James Jamerson is just one of many session players in the 50s and 60s who went virtually unknown during his lifetime. Even now, after being...

The post James Jamerson: The unsung Motown bassist that influenced Paul McCartney appeared first on Dissociated Press.

Getting started with COBOL development on Fedora Linux 33

Posted by Fedora Magazine on February 27, 2021 08:00 AM

Though its popularity has waned, COBOL still powers business-critical operations within many major organizations. As the need to update, upgrade and troubleshoot these applications grows, so may the demand for developers with COBOL knowledge.

Fedora 33 represents an excellent platform for COBOL development.
This article will detail how to install and configure tools, as well as compile and run a COBOL program.

Installing and configuring tools

GnuCOBOL is a free, modern, open source compiler maintained by volunteer developers. To install it, open a terminal and execute the following command:

# sudo dnf -y install gnucobol 

Once completed, execute this command to verify that GnuCOBOL is ready for work:

# cobc -v

You should see version information and build dates. Don’t worry if you see the error “no input files”. We will create a COBOL program file with the Vim text editor in the following steps.

Fedora ships with a minimal version of Vim, but it would be nice to have some of the extra features that the full version can offer (such as COBOL syntax highlighting). Run the command below to install Vim-enhanced, which will overwrite Vim-minimal:

# sudo dnf -y install vim-enhanced

Writing, Compiling, and Executing COBOL programs

At this point, you are ready to write a COBOL program. For this example, I am set up with username fedorauser and I will create a folder under my home directory to store my COBOL programs. I called mine cobolcode.

# mkdir /home/fedorauser/cobolcode
# cd /home/fedorauser/cobolcode

Now we can create and open a new file to enter our COBOL source program. I’ll call it helloworld.cbl.

# vim helloworld.cbl

You should now have the blank file open in Vim, ready to edit. This will be a simple program that does nothing except print out a message to our terminal.

Enable “insert” mode in vim by pressing the “i” key, and key in the text below. Vim will assist with placement of your code sections. This can be very helpful since every character space in a COBOL file has a purpose (it’s a digital representation of the physical cards that developers would complete and feed into the computer).

       IDENTIFICATION DIVISION.
       PROGRAM-ID. HELLO-WORLD.
      *simple helloworld program.
       PROCEDURE DIVISION.
           DISPLAY '##################################'.
           DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'.
           DISPLAY '#!!!!!!!!!!FEDORA RULES!!!!!!!!!!#'.
           DISPLAY '#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#'.
           DISPLAY '##################################'.
           STOP RUN.

You can now press the “ESC” key to exit insert mode, and key in “:x” to save and close the file.

Compile the program by keying in the following:

# cobc -x helloworld.cbl

It should complete quickly with return status: 0. Key in “ls” to view the contents of your current directory. You should see your original helloworld.cbl file, as well as a new file simply named helloworld.

Execute the COBOL program.

# ./helloworld

If you see your text output without errors, then you have successfully compiled and executed the program!


Now that we have covered the basics of writing, compiling, and running a COBOL program, let’s try one that does something a little more interesting.

The following program will generate the Fibonacci sequence given your input. Use Vim to create a file called fib.cbl and input the text below:

      ******************************************************************
      * Author: Bryan Flood
      * Date: 25/10/2018
      * Purpose: Compute Fibonacci Numbers
      * Tectonics: cobc
      ******************************************************************
       IDENTIFICATION DIVISION.
       PROGRAM-ID. FIB.
       DATA DIVISION.
       FILE SECTION.
       WORKING-STORAGE SECTION.
       01  N0             BINARY-C-LONG VALUE 0.
       01  N1             BINARY-C-LONG VALUE 1.
       01  SWAP           BINARY-C-LONG VALUE 1.
       01  RESULT         PIC Z(20)9.
       01  I              BINARY-C-LONG VALUE 0.
       01  I-MAX          BINARY-C-LONG VALUE 0.
       01  LARGEST-N      BINARY-C-LONG VALUE 92.
       PROCEDURE DIVISION.
      *>  THIS IS WHERE THE LABELS GET CALLED
           PERFORM MAIN
           PERFORM ENDFIB
           GOBACK.
      *>  THIS ACCEPTS INPUT AND DETERMINES THE OUTPUT USING A EVAL STMT
       MAIN.
            DISPLAY "ENTER N TO GENERATE THE FIBONACCI SEQUENCE"
            ACCEPT I-MAX.
            EVALUATE TRUE
              WHEN I-MAX > LARGEST-N
                 PERFORM INVALIDN
              WHEN I-MAX > 2
                 PERFORM CASEGREATERTHAN2
              WHEN I-MAX = 2
                 PERFORM CASE2
              WHEN I-MAX = 1
                 PERFORM CASE1
              WHEN I-MAX = 0
                 PERFORM CASE0
              WHEN OTHER
                 PERFORM INVALIDN
            END-EVALUATE.
            STOP RUN.
       *>  THE CASE FOR WHEN N = 0
       CASE0.
           MOVE N0 TO RESULT.
           DISPLAY RESULT.
      *>  THE CASE FOR WHEN N = 1
       CASE1.
           PERFORM CASE0
           MOVE N1 TO RESULT.
           DISPLAY RESULT.
      *>  THE CASE FOR WHEN N = 2
       CASE2.
           PERFORM CASE1
           MOVE N1 TO RESULT.
           DISPLAY RESULT.
      *>  THE CASE FOR WHEN N > 2
       CASEGREATERTHAN2.
           PERFORM CASE1
           PERFORM VARYING I FROM 1 BY 1 UNTIL I = I-MAX
                   ADD N0 TO N1 GIVING SWAP
                   MOVE N1 TO N0
                   MOVE SWAP TO N1
                   MOVE SWAP TO RESULT
                   DISPLAY RESULT
            END-PERFORM.
      *>  PROVIDE ERROR FOR INVALID INPUT
       INVALIDN.
           DISPLAY 'INVALID N VALUE. THE PROGRAM WILL NOW END'.
      *>  END THE PROGRAM WITH A MESSAGE
       ENDFIB.
           DISPLAY "THE PROGRAM HAS COMPLETED AND WILL NOW END".
       END PROGRAM FIB.

As before, hit the “ESC” key to exit insert mode, and key in “:x” to save and close the file.

Compile the program:

# cobc -x fib.cbl

Now execute the program:

# ./fib

The program will ask you to input a number, and will then generate Fibonacci output based upon that number.
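As a quick cross-check of the expected output, the same sequence can be produced with a small shell loop (illustrative only; this is not part of the COBOL program):

```shell
# Print the first 10 Fibonacci numbers, for comparison with the program's output
a=0 b=1
for i in 1 2 3 4 5 6 7 8 9 10; do
  echo "$a"
  t=$((a + b)); a=$b; b=$t
done
```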


Further Study

There are numerous resources available on the internet; however, vast amounts of knowledge reside only in legacy print. Keep an eye out for vintage COBOL guides when visiting used book stores and public libraries; you may find copies of endangered manuals at rock-bottom prices!

It is also worth noting that helpful documentation was installed on your system along with GnuCOBOL. You can access it with these terminal commands:

# info gnucobol
# man cobc
# cobc -h

Thunderbolt bridge connection in Fedora 33

Posted by Lukas "lzap" Zapletal on February 27, 2021 12:00 AM

Thunderbolt bridge connection in Fedora 33

My home network is extremely slow because I have CAT5e cables everywhere. I was wondering if I could use the Thunderbolt ports that I have on both the new Mac M1 and the Intel NUC running Fedora. So, not holding my breath (some Thunderbolt docks are known to brick the new Macs), I connected the two machines. And it worked automatically!

The Fedora 33 kernel automatically recognized the thunderbolt0 device, and NetworkManager created a new connection named “Wired connection 1”. There must be some autonegotiation in the spec, because the two devices created a 169.254/16 link-local network and picked IP addresses. I was not expecting that; maybe between two Linux machines, but with macOS involved I thought it was not going to work. Let’s see how fast my 100 Mbps connection is:

mac$ nc -v -l 2222 > /dev/null

linux$ dd if=/dev/zero bs=1024K count=512 | nc -v 192.168.1.55 2222
Ncat: Version 7.80 ( https://nmap.org/ncat )
Ncat: Connected to 192.168.1.5:2222.
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 45.8012 s, 11.7 MB/s
Ncat: 536870912 bytes sent, 0 bytes received in 45.86 seconds.

That’s expected on 100 Mbps ethernet. On a gigabit network, which I was considering upgrading to, we should see something like 117 MB/s in the ideal case (just a switch). But let’s see how Thunderbolt works for me:

linux$ dd if=/dev/zero bs=1024K count=512 | nc -v 169.254.145.73 2222
Ncat: Version 7.80 ( https://nmap.org/ncat )
Ncat: Connected to 169.254.145.73:2222.
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 0.788541 s, 681 MB/s
Ncat: 536870912 bytes sent, 0 bytes received in 0.79 seconds.

Holy moly! That is not gigabit-class at all: 681 MB/s works out to roughly 5.4 Gbps, which is insane. This is over a USB-C cable, the best cable I currently have (it came with my LG screen). I am dropping a proper Thunderbolt 3 cable into my basket to see how much faster this can get. In theory, anyway; I don’t have an SSD that fast in my Intel NUC server.
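As a sanity check on those dd figures, dividing bytes by seconds reproduces the reported decimal MB/s rates:

```shell
# Convert the dd byte counts and elapsed times to decimal MB/s
awk 'BEGIN {
  printf "100 Mbps ethernet: %.1f MB/s\n", 536870912 / 45.8012  / 1e6
  printf "Thunderbolt:       %.0f MB/s\n", 536870912 / 0.788541 / 1e6
}'
```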

All right, so that looks like it should be my preferred connection between my desktop and my Linux machine. Let’s rename the connection first:

nmcli con modify "Drátové připojení 1" connection.id thunderbolt0

Oh gosh, I need to switch my locale back to English from Czech. Next up: set a static IP address.

nmcli con modify thunderbolt0 ipv4.method static ipv4.address 192.168.13.4/24
nmcli con down thunderbolt0
nmcli con up thunderbolt0

And after a quick update in the macOS network dialog and an /etc/hosts change, the connection between my new desktop and my working Linux machine is 10 Gbps.

Using the Display Posts plugin with WordPress and custom CSS

Posted by Fedora Cloud Working Group on February 26, 2021 04:30 PM

In case this helps anybody else, I wanted to share how I created the Top 100 Albums page here on Dissociated Press. I wanted to be...

The post Using the Display Posts plugin with WordPress and custom CSS appeared first on Dissociated Press.

Friday’s Fedora Facts: 2021-08

Posted by Fedora Community Blog on February 26, 2021 03:00 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! The Beta freeze is underway.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
Conference | Location | Date | CfP
DevConf.US | virtual | 2-3 Sep | closes 31 May
</figure>

Help wanted

Upcoming test days

Prioritized Bugs

<figure class="wp-block-table">
Bug ID | Component | Status
1883609 | shim | ASSIGNED
</figure>

Upcoming meetings

Releases

Fedora 34

Schedule

Upcoming key schedule milestones:

  • 2021-03-16 — Beta release early target
  • 2021-03-23 — Beta release target #1

Changes

Change tracker bug status. See the ChangeSet page for details of approved changes.

<figure class="wp-block-table">
Status | Count
ASSIGNED | 1
MODIFIED | 3
ON_QA | 49
CLOSED | 3
</figure>

Blockers

<figure class="wp-block-table">
Bug ID | Component | Bug Status | Blocker Status
1929940 | dogtag-pki | NEW | Accepted(Beta)
1930977 | mesa | NEW | Accepted(Beta)
1931070 | mesa | ON_QA | Proposed(Beta)
1930978 | xorg-x11-server | NEW | Proposed(Beta)
</figure>

Fedora 35

Changes

<figure class="wp-block-table">
Proposal | Type | Status
Autoconf-2.71 | System-Wide | FESCo #2579
POWER 4k page size | System-Wide | FESCo #2581
rpmautospec – removing release and changelog fields from spec files | System-Wide | FESCo #2582
</figure>

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-08 appeared first on Fedora Community Blog.

NEXTCLOUD VIA PODMAN WITH A POD

Posted by Daniel Lara on February 26, 2021 10:34 AM


A quick tip for spinning up NextCloud with Podman.

First, let's create one directory and three subdirectories:

$ mkdir -p nextcloud/app
$ mkdir nextcloud/data
$ mkdir nextcloud/db


Now let's create our pod:

$ podman pod create --name ncloud -p8080:80

And now let's start the MariaDB container:

$ podman run -d --pod=ncloud \
     --env MYSQL_DATABASE=nextcloud \
     --env MYSQL_USER=nextcloud \
     --env MYSQL_PASSWORD=nextcloud \
     --env MYSQL_ROOT_PASSWORD=nextcloud \
      -v ~/nextcloud/db:/var/lib/mysql:Z \
     --restart always \
     --name nextcloud-db \
     docker.io/library/mariadb:10


Now let's start the NextCloud container:

$ podman run -d --pod=ncloud \
  --env MYSQL_HOST=127.0.0.1 \
  --env MYSQL_DATABASE=nextcloud \
  --env MYSQL_USER=nextcloud  \
  --env MYSQL_PASSWORD=nextcloud \
  --env NEXTCLOUD_ADMIN_USER=nextcloud \
  --env NEXTCLOUD_ADMIN_PASSWORD=nextcloud \
  -v ~/nextcloud/app:/var/www/html \
  -v ~/nextcloud/data:/var/www/html/data \
  --restart always  \
  --name nextcloud  \
  docker.io/library/nextcloud:20


Check that everything is OK:

$ podman ps



Done! Now just open your browser: http://localhost:8080


If you have Cockpit, you can view it there as well:
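If you want the pod to come back after a reboot, Podman can generate systemd units for it with podman generate systemd --files --name ncloud. The pod unit it writes looks roughly like this sketch (paths and names are illustrative; the generated file is authoritative):

```
[Unit]
Description=Podman pod ncloud
Wants=network-online.target
After=network-online.target

[Service]
Type=forking
Restart=on-failure
ExecStart=/usr/bin/podman pod start ncloud
ExecStop=/usr/bin/podman pod stop -t 10 ncloud

[Install]
WantedBy=multi-user.target
```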



Reference guide: 











Tailwind does not support pseudo-elements

Posted by Josef Strzibny on February 26, 2021 12:00 AM

This week I came across another tricky part of Tailwind: pseudo-elements. Tailwind does not support them, but what if you want to use them anyway?

What are pseudo-elements anyway? Pseudo-elements are elements that do not exist in the HTML markup at all. Such elements are not visible to the browser's assistive technology; they can only be styled visually with CSS.

It’s quite common to define the :before and :after pseudo-elements to style a non-existing element positioned relative to the element at hand. People use them for typography or drawing, to keep the markup clean and tidy. They are often used in CodePens to showcase advanced CSS.

I wanted to use these pseudo-elements to create a 3D book for a new version of the landing page of my book. It’s a project where I am trying out Tailwind for the first time.

Tailwind uses pre-defined classes, and even though it could support these pseudo-elements in theory, it doesn’t. So the only option is to use actual elements with the aria-hidden="true" attribute to hide them from the browser's assistive technology:

<div class="tailwind classes as usual" aria-hidden="true">
  <!-- :before -->
</div>

<div class="tailwind classes as usual">
  <!-- main element -->
</div>

<div class="tailwind classes as usual" aria-hidden="true">
  <!-- :after -->
</div>

The author of Tailwind has already expressed that pseudo-elements are not necessary, so I don’t think we will get them in a future Tailwind release. There is an unofficial tailwind-pseudo plugin, though, which you can try.

Contribute at the Fedora Audio, Kernel 5.11 and i18n test days

Posted by Fedora Magazine on February 25, 2021 08:00 PM

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are three upcoming test events in the next week.

  • Wednesday, March 03: testing the PipeWire audio changes in Fedora.
  • Monday, March 08 through Monday, March 15: a test week focusing on kernel 5.11.
  • Tuesday, March 09 through Monday, March 15: testing the Fedora 34 i18n features.

Come and test with us to make the upcoming Fedora 34 even better. Read more below on how to do it.

Audio test day

There is a recent proposal to replace the PulseAudio daemon with a functionally compatible implementation based on PipeWire. This means that all existing clients using the PulseAudio client library, as well as applications shipped as Flatpaks, will continue to work as before. The test day will verify that everything works as expected. It takes place on Wednesday, March 03.

Kernel test week

The kernel team is working on the final integration for kernel 5.11. This version was just recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week for Monday, March 08 through Monday, March 15. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps.

i18n test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. A lot of our users use Fedora in their preferred languages and it’s important that we test the changes. The wiki contains more details about how to participate. The test week is March 09 through March 15.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days are on the wiki pages above. If you’re available on or around the days of the events, please do some testing and report your results.

ZimbraLogHostname is not configured - error

Posted by Luis Bazan on February 25, 2021 06:50 PM

[root@mail ~]# cat /etc/centos-release

CentOS Stream release 8

[zimbra@mail ~]$ zmcontrol -v

Release 8.8.15_GA_3953.RHEL8_64_20200629025823 UNKNOWN_64 FOSS edition, Patch 8.8.15_P19.

Log in as the zimbra user

[root@mail ~]# su - zimbra

[zimbra@mail ~]$

Now run this command to set the hostname in the Logs configuration.

Remember to change the domain name to yours.

[zimbra@mail ~]$ zmprov mcf zimbraLogHostname mail.ibtechpa.com

[zimbra@mail ~]$ exit

logout

Switch to root user.

[root@mail ~]#

Update the log configuration with this command.

[root@mail ~]# /opt/zimbra/libexec/zmsyslogsetup

updateSyslogNG: Updating /etc/syslog-ng/syslog-ng.conf...done.

Last step: restart the Zimbra services, as the zimbra user:

[zimbra@mail ~]$ zmcontrol restart



With this, you can see all your statistics and logs from the administration GUI.





RUNNING WILDFLY ON PODMAN

Posted by Daniel Lara on February 25, 2021 11:06 AM

First, let's pull the image:


$ podman pull jboss/wildfly


Now let's start our container:


$ podman run -d --name wildfly -p 8080:8080 -p 9990:9990 jboss/wildfly




Now let's access our container and, if you wish, create a user for the administration console:

$ podman exec -it wildfly /bin/bash




Let's create a user:

$ /opt/jboss/wildfly/bin/add-user.sh



Just access it via browser: http://<ip or hostname>:8080



Or access the admin console: http://<ip or hostname>:9990



Reference guide: 











QElectroTech version 0.80

Posted by Remi Collet on February 25, 2021 10:12 AM

RPMs of QElectroTech version 0.80, an application to design electric diagrams, are available in the remi repository for Fedora and Enterprise Linux ≥ 8.

A bit more than a year after the version 0.70 release, the project has just released a new major version of their electric diagram editor.

Official web site : http://qelectrotech.org/ and version announcement.

Installation with YUM:

yum --enablerepo=remi install qelectrotech

RPMs (version 0.80-1) are available for Fedora ≥ 32 and Enterprise Linux 8 (RHEL, CentOS, ...).

Updates are also on the road to the official repositories.

Notice: a Copr / Qelectrotech repository also exists, which provides "development" versions (0.90-DEV for now).

A sneak peek at Fedora Zine

Posted by Fedora Community Blog on February 25, 2021 08:00 AM

So my Outreachy internship is winding to a close, as is the creation of the first-ever edition of our very own Fedora Zine!

It has been a crazy journey so far and I have thoroughly enjoyed working on this awesome project, especially getting to see and work with all of the great submissions from the community. I have learned so much: how to balance my designs visually, how to pair fonts and use other typographic effects, how to use guides for a perfectly aligned design, and that you should read your printing specs very, very carefully before getting to work on a project ☺. A huge thank you goes to my mentor, Marie Nordin, who has been incredibly helpful in guiding me through this whole process!

Now is the time for me to give you all a sneak-peek at what we have been working on for the past two and a half months.

Pages

In the gallery below you can see a small sample of the different pages of the zine:

  • <figure><figcaption class="wp-block-jetpack-slideshow_caption gallery-caption">I started off by re-working all the awesome Fedora team infographics that Smera Goel created in the previous round of Outreachy internships. I adjusted the designs to fit into the new size page/template.</figcaption></figure>
  • <figure><figcaption class="wp-block-jetpack-slideshow_caption gallery-caption">A call for contributions seemed like an obvious thing to do and I wanted to make sure it was unique and eye-catching.</figcaption></figure>
  • <figure><figcaption class="wp-block-jetpack-slideshow_caption gallery-caption">I remixed a graphic art submission to highlight the four foundations of Fedora.</figcaption></figure>
  • <figure><figcaption class="wp-block-jetpack-slideshow_caption gallery-caption">Fedora’s vision & mission statements are an important base for Fedora. I created a spread for each paired with cool photo submissions.</figcaption></figure>
  • <figure><figcaption class="wp-block-jetpack-slideshow_caption gallery-caption">I overlaid photo and traditional art submissions with other text and graphic elements.</figcaption></figure>
  • <figure><figcaption class="wp-block-jetpack-slideshow_caption gallery-caption">I overlaid photo and traditional art submissions with other text and graphic elements.</figcaption></figure>
  • <figure><figcaption class="wp-block-jetpack-slideshow_caption gallery-caption">And of course, a credits page to acknowledge all the wonderful contributions from the community in this edition of the zine!</figcaption></figure>

And a quick flip through a mock-up of the zine:

<figure class="wp-block-video"><video controls="controls" src="https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/zine-mock-up-flip-through-1.mp4"></video></figure>

Submissions for this edition of the zine are now closed, but you can still contribute! Follow the steps here to submit your art – and it may be featured in future editions of the zine!

The post A sneak peek at Fedora Zine appeared first on Fedora Community Blog.

GITLAB NO PODMAN

Posted by Daniel Lara on February 24, 2021 12:41 PM

Create the "gitlab" directory in your home directory:


$ mkdir gitlab

Now, inside "gitlab", let's create 3 directories: "logs", "data" and "config":

$ mkdir gitlab/config
$ mkdir gitlab/logs
$ mkdir gitlab/data

Now let's run GitLab on Podman:

$ sudo podman run --privileged -dit \
            --name gitlab \
            -p80:80 \
            -p443:443 \
            -p22022:22 \
            -v ${PWD}/gitlab/config:/etc/gitlab \
            -v ${PWD}/gitlab/logs/:/var/log/gitlab \
            -v ${PWD}/gitlab/data/:/var/opt/gitlab \
            gitlab/gitlab-ce:latest


Done, it is already running:

Now just access GitLab:




Syslog-ng on BSDs

Posted by Peter Czanik on February 24, 2021 11:33 AM

My FOSDEM presentation in the BSD devroom showcased what is new in sudo and syslog-ng and explained how to install or compile these software packages yourself on FreeBSD. Not only am I a long-time FreeBSD user (I started with version 1.0 in 1994), I also work on keeping the syslog-ng port in FreeBSD up to date. But soon after my presentation, I was asked what I knew about the other BSDs. And, while I knew that all BSDs have syslog-ng in their ports systems, I realized I had no idea what shape those ports were in.

For this article I installed OpenBSD, DragonFlyBSD and NetBSD to check syslog-ng on them. Admittedly, the ports are not in the best shape: they contain old versions, and some do not even start or are unable to collect local log messages.

OpenBSD

OpenBSD ports have version 3.12 of syslog-ng. Some Linux distributions ship an even earlier version of syslog-ng and it works just fine. Unfortunately, that is not the case here: logging in OpenBSD has changed, which means that local log messages cannot be collected by syslog-ng 3.12. Support for collecting local log messages was added in a later syslog-ng version: https://github.com/syslog-ng/syslog-ng/pull/1875

Installation of this ancient syslog-ng version is really easy, just use pkg_add:

openbsd68# pkg_add syslog-ng
quirks-3.441 signed on 2021-02-13T20:25:37Z
syslog-ng-3.12.1p7: ok
The following new rcscripts were installed: /etc/rc.d/syslog_ng
See rcctl(8) for details.

Collecting log messages over the network works perfectly, so as a workaround you might want to keep using syslogd from the base system while forwarding log messages to syslog-ng over the network.
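A minimal sketch of that setup (the hostname is a placeholder): the stock syslogd forwards everything with one line in /etc/syslog.conf, and the central syslog-ng accepts it with a network source.

```
# /etc/syslog.conf on the OpenBSD host: forward everything over UDP
*.*     @loghost.example.com

# syslog-ng.conf on the central host: accept BSD-syslog messages on UDP/514
source s_net { network(transport("udp") port(514)); };
destination d_openbsd { file("/var/log/openbsd.log"); };
log { source(s_net); destination(d_openbsd); };
```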

DragonFlyBSD

Once upon a time, DragonFlyBSD was forked from FreeBSD. While it took a different route from FreeBSD, it also stayed close to the original. DragonFlyBSD ports build on FreeBSD ports, even though there are some additional applications and other smaller differences. This means that syslog-ng is up to date in DragonFlyBSD ports, which in this case means version 3.29. Installation is easy, using the same command as on FreeBSD:

pkg install syslog-ng

Problems start when you actually try to start syslog-ng:

dragon# /usr/local/etc/rc.d/syslog-ng forcestart
Starting syslog_ng.
[2021-02-17T08:59:13.598727] system(): Error detecting platform, unable to define the system() source. Please send your system information to the developers!; sysname='DragonFly', release='5.8-RELEASE'
Error parsing config, syntax error, unexpected LL_ERROR, expecting '}' in /usr/local/etc/syslog-ng.conf:19:14-19:20:
14      options { chain_hostnames(off); flush_lines(0); threaded(yes); };
15      
16      #
17      # sources
18      #
19----> source src { system();
19---->              ^^^^^^
20      	     udp(); internal(); };
21      
22      #
23      # destinations
24      #


syslog-ng documentation: https://www.syslog-ng.com/technical-documents/list/syslog-ng-open-source-edition
contact: https://lists.balabit.hu/mailman/listinfo/syslog-ng

While the system() source works on FreeBSD, where this configuration was prepared, it does not work on DragonFlyBSD. You need to edit /usr/local/etc/syslog-ng.conf and replace the system() source with the following lines:

     unix-dgram("/var/run/log");
     unix-dgram("/var/run/logpriv" perm(0600));
     file("/dev/klog" follow-freq(0) program-override("kernel"));

This is based on the earlier FreeBSD configuration and seems to work. I have filed an issue at the syslog-ng GitHub repo, so in a future release it might work automatically.
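Putting this together with the shipped configuration shown in the error output above, the edited source block would look roughly like this:

```
source src {
    unix-dgram("/var/run/log");
    unix-dgram("/var/run/logpriv" perm(0600));
    file("/dev/klog" follow-freq(0) program-override("kernel"));
    udp();
    internal();
};
```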

I also tried to build syslog-ng from ports myself, but right now it is broken. The sysutils/syslog-ng port is still a metaport referring to another port, but that version has already been deleted. The syslog-ng port was reorganized recently, and it seems that not everything was followed up perfectly on the DragonFlyBSD side.

NetBSD

NetBSD also has quite an ancient version of syslog-ng: 3.17.2. Installation of the package is easy, just:

pkgin install syslog-ng

Syslog-ng works and can collect local log messages out of the box as well, with a catch. NetBSD seems to have switched to the RFC 5424 syslog format, just like FreeBSD 12.0, so local log messages collected by syslog-ng’s system() source look kind of funny:

Feb 17 12:43:07 localhost 1 2021-02-17T12:43:07.935565+01:00 localhost sshd 2160 - - Server listening on :: port 22.
Feb 17 12:43:07 localhost 1 2021-02-17T12:43:07.936064+01:00 localhost sshd 2160 - - Server listening on 0.0.0.0 port 22.

Also, the system() source seems to miss kernel logging. To fix both issues, open syslog-ng.conf in your favorite text editor, remove the system() source and add these two lines instead:

        unix-dgram("/var/run/log" flags(syslog-protocol));
        file("/dev/klog" flags(kernel) program_override("kernel"));

This makes sure that local logs are parsed correctly and that kernel messages are collected by syslog-ng as well.
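For reference, the full replacement source block might then look like the sketch below. This is an assumption on my part: the NetBSD default configuration is not shown above, so the internal() driver is included only on the guess that the configuration, like the others in this post, also collects syslog-ng's own messages.

```
source src {
    unix-dgram("/var/run/log" flags(syslog-protocol));
    file("/dev/klog" flags(kernel) program_override("kernel"));
    internal();
};
```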

What is next

In this blog I identified several problems related to syslog-ng in the various BSD ports systems. I also provided some workarounds, but of course these are not real solutions. I cannot promise anything, as I am not an active user or developer of any of these BSD systems and I am also short on time. However, I am planning to fix as many of these problems as I can, on a best-effort basis, as time allows.


If you have any questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

POWER9, ARM64 and 64k page sizes

Posted by Daniel Pocock on February 23, 2021 10:40 PM
IBM POWER9

I've recently had discussions with other developers in the Fedora world about the default 64k page size on POWER9. The vast majority of GNU/Linux users have a 4k page size. There is now a change proposal for the ppc64le page size on Fedora 35 and a related discussion on the devel mailing list.

Why and when the non-x86 architectures are relevant

With each new generation of x86 processors from Intel and AMD there is a larger quantity of opaque microcode that independent developers are unable to audit or fix.

When a vendor has such an incredible market share, it is inevitable that problems like this will arise.

Investing some time and effort on alternative architectures is good insurance: when the day comes that this microcode is compromised, some percentage of users will jump ship.

Getting started on POWER9 and ARM64 is easier than ever

For POWER9, please see my recent blog about the Talos II Quickstart.

Similar blogs have recently appeared about ARM64, for example, this Fedora Magazine article about SolidRun HoneyComb LX2K.

Choosing between POWER9 and ARM64

ARM64 may be a better choice if you are very sensitive to heat emissions and energy costs.

POWER9 may be a better choice if you need a lot of compute power.

While neither ARM64 nor POWER9 has microcode comparable to Intel or AMD products, it is still important to look at the overall system. For example, Raptor's Talos II motherboard has the FSF Respects Your Freedom (RYF) certification, but if you order the optional SATA controller, you end up with some proprietary firmware.

Why 64k page sizes are an issue

The GNU/Linux kernel for these platforms can be compiled with either 4k or 64k page size. The distribution chooses which of these options to select. The kernel created by the distribution is included in the installation disk for the distribution.

One acute consequence of this is the relationship between the Btrfs sectorsize and the kernel page size: a Btrfs filesystem can only be used on systems with the same page size as the system it was created on. The Btrfs driver is being improved to remove this restriction, but for users of Fedora 34 and older releases this is a very inconvenient issue: if you need to move Btrfs filesystems between systems with different page sizes, they simply won't work.
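A quick way to check which page size a given system is running, and therefore which Btrfs sectorsize its filesystems will default to, is the standard POSIX getconf utility:

```shell
# Print the kernel page size in bytes: 4096 on most x86 machines,
# 65536 on current Fedora ppc64le kernels.
getconf PAGESIZE
```

Running this on both the source and destination machine before moving a Btrfs filesystem tells you immediately whether you will hit this incompatibility.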

It appears that nobody tests the kernel and amdgpu drivers on these non-standard page sizes before each official release. Consequently, if there is a problem, it is only discovered by users after the upstream release. This means that users on these platforms are always a step behind users on other platforms.

Improving support for 64k page sizes

Personally, I'm quite keen to see the 64k page size succeed.

I believe that is only possible when these platforms have critical mass and when some of the upstream developers use these machines on a daily basis.

Until we get to that point, I feel it is a chicken-and-egg problem: things don't work, so people buy in more slowly, so there are fewer people to report bugs and/or fix things.

Automated CI testing of the kernel and amdgpu code may also help to catch some issues before official releases.

To summarize, I have an open mind about how to go about this. Please feel free to share your ideas and experiences in the discussion or through your blog.

Workarounds

Anybody who buys one of these machines today can still use it almost immediately.

One option is to use a distribution with a 4k page size or compile your own kernel with a 4k page size.
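If you compile your own kernel, the page size on ppc64le is a build-time choice. As a sketch (these are the relevant upstream Kconfig options; your distribution's config layout may differ), the kernel .config selects one of two mutually exclusive settings:

```
# Build a 4k-page kernel instead of the distribution's 64k default:
CONFIG_PPC_4K_PAGES=y
# CONFIG_PPC_64K_PAGES is not set
```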

Another idea is to simply avoid using Btrfs for another six months: if you use the installation system of your preferred distribution to create filesystems, check that it isn't using Btrfs. You may need to manually override it to use ext4 for the moment.

If you don't need a modern GPU, some of the previous generation, such as the Radeon RX 580, seem to be working fine on any page size. These cards are available with up to 8GB VRAM. The performance of those cards is more than adequate for many workstation users.

The page size issue should not deter anybody from buying the hardware today. The community is always here to help.

How-to: Writing a C shared library in rust

Posted by Tony Asleson on February 23, 2021 03:59 PM
The ability to write a C shared library in rust has been around for some time and there is quite a bit of information about the subject available. Some examples:

- Exposing C and Rust APIs: some thoughts from librsvg
- Creating C/C++ APIs in Rust
- Rust Out Your C by Carol (Nichols || Goulding) (youtube video)
- Exporting a GObject C API from Rust code and using it from C, Python, JavaScript and others
- Rust Once, Run Everywhere

All this information is great, but what I was looking for was a simple step-by-step example which also discussed memory handling and didn’t delve into the use of GObjects.
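As a minimal illustration of the common starting point those articles share (my addition, not from the post itself): cargo produces a C-compatible shared library when the crate type is set to cdylib in Cargo.toml.

```
# Cargo.toml fragment: build a C ABI shared library (e.g. libexample.so)
# instead of a Rust-only rlib.
[lib]
crate-type = ["cdylib"]
```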

Concrete Blonde – “Happy Birthday” (song of the day)

Posted by Fedora Cloud Working Group on February 23, 2021 01:32 PM

Concrete Blonde had a lot of standout tracks on Free, but this one is timeless. “Happy Birthday” is a great song any day of the year, but...

The post Concrete Blonde – “Happy Birthday” (song of the day) appeared first on Dissociated Press.

New “How Do You Fedora” video series to interview members of the community

Posted by Fedora Community Blog on February 23, 2021 08:00 AM

A common answer to the question “What’s your favorite part about Fedora?” is “the community”. Well, what’s so special about it?

The Fedora community shares the common values of the “Four Foundations”: Freedom, Friends, Features and First. Beyond that, although there are many great minds, not all of them think alike! Everyone contributes different approaches to problems, interesting ideas, and diverse perspectives. There is a place in Fedora for anyone who wants to help. 

That’s why we’re launching a new video series on the Fedora Youtube channel profiling some of Fedora’s various contributors and how they use Fedora. The goal is to get to know some community members better, especially in a time where in-person community events might not be practical.

But we already have “HDYF” articles!

Longtime community members may be aware of the “How Do You Fedora” series on the Fedora Magazine. This new series is intended to be a continuation of the written articles in a new format, not to replace it. Instead, this series serves as an option for interviewees who feel as if their personality might shine better in this different format, who prefer to “show, not tell”, or would simply like to participate in a video interview instead of a written one.

What Can You Expect?

Videos will be posted on the Fedora YouTube channel. Watch interviewees answer your burning questions about their Fedora habits, show off their setups, and more! Hopefully you’ll leave each video knowing something new about a fellow contributor.

Our first conversation will be with Marie Nordin, Fedora Community Action and Impact Coordinator. Next we plan on interviewing Matthew Miller, Fedora Project Leader.

I have some input!

We’d love your feedback and ideas about what you’d like to see included in the series, so send an email to Gabbie (gchang@redhat.com) if you’d like to get involved. Contact us on the feedback form to express your interest in becoming an interviewee for a written or video interview.

Happy watching! 

The post New “How Do You Fedora” video series to interview members of the community appeared first on Fedora Community Blog.

AlmaLinux 8.3 RC1 released

Posted by Fedora fans on February 23, 2021 06:30 AM
AlmaLinux

Following the beta release of AlmaLinux, the distribution's development team has now announced its first Release Candidate (RC).

AlmaLinux is a Linux distribution created and developed by the CloudLinux team together with the user community. It is an enterprise-class distribution that aims to be a replacement for CentOS.

AlmaLinux 8.3 RC1 is the first release candidate and is available for download now. To download it, use the link below and pick the mirror closest to your geographic location:

https://mirrors.almalinux.org

For more information about AlmaLinux 8.3 RC1, you can read its release notes:

https://almalinux.org/blog/almalinux-8-3-rc-release-notes

The post AlmaLinux 8.3 RC1 released first appeared on طرفداران فدورا (Fedora Fans).

Future of libsoup

Posted by Patrick Griffis on February 23, 2021 05:00 AM

The libsoup library implements HTTP for the GNOME platform and is used by a wide range of projects including any web browser using WebKitGTK. This past year we at Igalia have been working on a new release to modernize the project and I’d like to share some details and get some feedback from the community.

History

Before getting into what is changing I want to briefly give backstory to why these changes are happening.

The library has a long history, which I won’t cover in full: it was created in 2000 by Helix Code, which became Ximian, where it was used in projects such as the Evolution email client.

While it has been maintained to some degree for all of this time, it hasn’t had a lot of momentum behind it. The library has maintained its ABI for 13 years at this point, with ad-hoc feature additions and fixes often added on top. This has resulted in a library that has multiple APIs to accomplish the same task, confusing APIs that don’t follow any convention common within GNOME, and at times odd default behaviors that couldn’t be changed.

What’s Coming

We are finally breaking ABI and making a new libsoup 3.0 release. The goal is to make it a smaller, simpler, and more focused library.

Making the library smaller meant deleting a lot of duplicated and deprecated APIs, removing rarely used features, leveraging additions to GLib in the past decades, and general code cleanup. As of today the current codebase is roughly at 45,000 lines of C code compared to 57,000 lines in the last release with over 20% of the project deleted.

Along with reducing the size of the library I wanted to improve the quality of the codebase. We now have improved CI which deploys documentation that has 100% coverage, reports code coverage for tests, tests against Clang’s sanitizers, and the beginnings of automated code fuzzing.

Lastly, there is ongoing work to finally add HTTP/2 support, improving responsiveness for the whole platform.

There will be follow up blog posts going more into the technical details of each of these.

Release Schedule

The plan is to release libsoup 3.0 with the GNOME 41 release in September. However, we will be releasing a preview build with the GNOME 40 release for developers to start testing against and porting to. All feedback would be welcomed.

Igalia plans on helping the platform move to 3.0 and will port GStreamer, GVFS, and WebKitGTK, and may help with some applications, so we can have a smooth transition.

For more details on WebKitGTK’s release plans, there is a mailing-list thread about it.

The previous release, libsoup 2.72, will continue to get bug-fix releases for the foreseeable future, but no new 2.7x releases will happen.

How to Help

You can build the current head of libsoup to test now.

Installing it does not conflict with version 2.x; however, GObject-Introspection-based applications may accidentally import the wrong version (Python, for example, needs gi.require_version('Soup', '2.4') and GJS needs imports.gi.versions.Soup = "2.4"; for the old API), and you cannot load both versions in the same process.

A migration guide has been written to cover some of the common questions as well as improved documentation and guides in general.

All bug reports and API discussions are welcomed on the bug tracker.

You can also join the IRC channel for direct communication (be patient for timezones): ircs://irc.gimp.net/libsoup.

Master, main and abuse

Posted by Daniel Pocock on February 22, 2021 09:55 PM

Free and open source software communities recently spent a lot of time and effort on renaming the master branches in Git repositories to main, or some other name, due to the association of the word master with the horror of slavery.

I plan to tackle the slavery issue in a separate blog. In this blog, my target is the misappropriation of the word abuse.

If we are sincere about abandoning the word master, we also need to stop using the word abuse, except in those situations where it is legitimate to use that word.

Abuse has a clear meaning. In the last week, we've seen women speak up about rape in Australia's parliament and misogyny on the set of Buffy the Vampire Slayer.

Buffy the Vampire Slayer

These are incredibly serious accusations.

The Buffy accusations are remarkably similar to the accusations against Matthias Kirschner, President of FSFE. The free software community elected me as a community representative in that organization. In 2018, after observing the culture of threats and blackmail, I resigned in disgust. Each new revelation about FSFE only confirms that I made the right decision to distance myself from those people.

Yet the accusations from the Australian parliament are even more disturbing. Having visited there on multiple occasions, I couldn't help contemplating the possibility that I may have visited the same office where this crime took place.

In the photo below, there may even be an unintended hint of male entitlement: I'm wandering around Australia's capital in a t-shirt. It may simply be a reflection of how we live in Australia: the minimum dress code for visiting parliament doesn't set a very high bar. That particular t-shirt isn't easy to come by. The woman on the left is Senator Stott-Despoja, Australia's youngest woman in parliament and subsequently Australia's ambassador for women and girls. How shocking would it be if the crime took place in the same room where we took this photo?

The Debian Project is one of the oldest GNU/Linux distributions. In the 27 years of its existence, so-called leaders have never published a consolidated financial report. When people asked about the Google $300,000 obfuscated by $300,000 from the Handshake Foundation, leaders classified all questions as abuse. When oligarchs behave like this and use the word abuse to deflect questions about accountability, they are trivializing real victims of abuse.

The people who hid that money from the rest of us simply have no right to use the word abuse. Ever.

The situation in Australia's parliament has followed the same path as Debian: rather than resolving the most substantial issue, the employee was terminated on a minor technicality. A most serious act of abuse was trivialized by equating it with a bureaucratic misdemeanor.

Natasha Stott-Despoja, Daniel Pocock, Parliament House, Canberra, Australia

Lad culture: when I found a rat in Australia's parliament

Visiting Canberra, I would usually carry my SLR. You never know who (or what) you might meet. After hearing about the bravery of the women speaking up this week and the discussions about the culture problems there, I felt now is the right time to share these images from the billiard hall.

There is nothing political about this blog or the photos. Many Australian men feel ashamed about the way our political leaders appear to live.

Australian Parliament House, Canberra, Billiard Room, Dead Rat


The best practice method to install VirtualBox on Fedora 32/33 (and later)

Posted by Izhar Firdaus on February 22, 2021 02:55 PM

In any Linux distribution, there are multiple methods to achieve a certain goal. You might have encountered many guides on how to install VirtualBox on Fedora; however, please take note that many of them use intrusive methods which are difficult to maintain in the long run and are likely to break after a kernel update.

Every time I catch a new team member following one of those guides, I get annoyed, so I decided to write up the best-practice method, which I have used on my own Fedora installations for years now.

Installing RPMFusion Repository

Packages in the official Fedora repositories are subject to several legal and technical guidelines. This means that software which is not 100% FOSS, or which is patent-encumbered, is not available in the official repositories, because it would expose the sponsors of Fedora to legal liability under the laws of the United States.

Unfortunately, much free/open source software is patent-encumbered; while it might be illegal to distribute in the United States, it is perfectly legal to distribute in other countries.

Many such packages are available in the RPMFusion repositories. Unlike many third-party repositories out there, RPMFusion contains high-quality packages, as its contributors include people who package official RPMs in Fedora, and I generally recommend that any new Fedora user install RPMFusion, especially if they are not a US resident.

To install and enable RPMFusion on Fedora, install the release packages using the following command, which enables both the free and nonfree repositories:

sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
    https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Installing VirtualBox

Once you have RPMFusion installed, you can then install VirtualBox from it. VirtualBox in RPMFusion is very well maintained, remains functional after kernel updates thanks to the automatic kernel module builder (akmods), and generally carries the latest version of VirtualBox.

sudo dnf install VirtualBox
sudo akmods
sudo modprobe vboxdrv

You are done and can now launch VirtualBox from the application menu. Please note, however, that VirtualBox does not coexist well with libvirtd. If you have libvirtd installed, it is suggested that you disable it:

sudo systemctl stop libvirtd
sudo systemctl disable libvirtd

I hope this guide helps you install VirtualBox in a less intrusive and more maintainable way.

Increasing a virtual machine's CPU and RAM in KVM

Posted by Fedora fans on February 22, 2021 06:30 AM
CPU-RAM

There are several ways to increase or decrease the CPU and RAM of a virtual machine (VM) under the KVM hypervisor, and this post covers them.

Changing resources with Virtual Machine Manager:

Virtual Machine Manager is a graphical application for managing a KVM server, which lets you control and manage virtual machines. To increase or decrease CPU and RAM with this tool, simply shut down the VM in question, open its settings, and change the resources as desired.

Virtual_Machine_Manager

Virtual_Machine_Manager

Changing resources with virsh:

virsh is a command-line tool for managing your KVM virtual machines. To increase or decrease a VM's CPU or RAM, first list your virtual machines with the following command:

# virsh list --all


Then shut down your virtual machine:

# virsh shutdown <My VM name>

Next, open the VM's XML definition for editing:

# virsh edit <My VM name>

Now, to change the CPU count, edit the following line, replacing the number 2 with the value you want:

<vcpu placement='static'>2</vcpu>

To change the amount of RAM, edit the following lines, replacing 2097152 with the value you want (in KiB):

<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>

After editing the values, save the file and then start the virtual machine again:

# virsh start <My VM name>
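As an aside (my addition, not from the original post): recent versions of virsh can also change these values directly, without editing the XML. A sketch, assuming a domain named myvm; the VM should be shut down when changing the maximums:

```
# virsh setmaxmem myvm 4194304 --config    (new maximum RAM, in KiB)
# virsh setmem myvm 4194304 --config       (current RAM allocation)
# virsh setvcpus myvm 4 --config --maximum
# virsh setvcpus myvm 4 --config
```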


The post Increasing a virtual machine's CPU and RAM in KVM first appeared on طرفداران فدورا (Fedora Fans).

Episode 259 – What even is open source anymore?

Posted by Josh Bressers on February 22, 2021 12:01 AM

Josh and Kurt talk about the question “what is open source?”, why we think it’s broken today, and what sort of ideas should come next.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2313-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_259_What_even_is_open_source_anymore.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_259_What_even_is_open_source_anymore.mp3</audio>

Show Notes

What is waking my HDD up in Linux

Posted by Lukas "lzap" Zapletal on February 22, 2021 12:00 AM

When my disks wake up during the day, I am angry. I want silence, so I started investigating which process makes them do that. I suspected that something was browsing a Samba share, but to confirm it I created this simple SystemTap script:

# cat syscall_open.stp 
#!/usr/bin/env stap
#
# System-wide strace-like tool for catching file open syscalls.
#
# Usage: stap syscall_open filename_regexp
#
probe syscall.open* {
  if (filename =~ @1) {
    printf("%s(%d) opened %s\n", execname(), pid(), filename)
  }
}

It’s as easy as starting this up and waiting until the offending process is found. Note that it accepts a regular expression, not a glob:

# dnf install systemtap systemtap-runtime
# stap syscall_open.stp '/mnt/int/data.*'

This will work on any operating system supported by SystemTap. I tested it on Fedora, and any EL distribution should work too.

Ikebe Shakedown delivers cinematic instrumental funk

Posted by Fedora Cloud Working Group on February 21, 2021 06:28 PM

Ikebe Shakedown is another Bandcamp discovery. The band specializes in cinematic soul, an instrumental brand of soul/funk that feels like it should be straight out...

The post Ikebe Shakedown delivers cinematic instrumental funk appeared first on Dissociated Press.

Underground Chamber is a ride deep into the mind of Buckethead

Posted by Fedora Cloud Working Group on February 21, 2021 03:40 PM

Buckethead’s Underground Chamber is the fourth release in his “Pikes” series, and something like his 33rd studio release overall. Underground Chamber is too good to...

The post Underground Chamber is a ride deep into the mind of Buckethead appeared first on Dissociated Press.

Configuring an OpenWRT Switch to work with SSID VLANS on a UAP-AC-PRO

Posted by Jon Chiappetta on February 21, 2021 02:33 PM

Network setup: https://fossjon.wordpress.com/2021/02/05/home-networking-upgrade-unifi-uap-ac-pro-ufos/

On the OpenWRT Switch page, I have set LAN port 1 (along with a backup LAN port 2 but you can just use a single port) as the VLAN trunk port (tagged) to allow it to carry the traffic through to the VLAN access ports (untagged) [home = VLAN 3 && guest = VLAN 4]. This will create the sub-interfaces eth0.3 and eth0.4 which will contain the separated ethernet Layer 2 traffic from the WiFi clients (ARP, DHCP via dnsmasq, mDNS, etc).

Note: Make sure to tag the CPU along with the LAN ports and ignore the untagged VLAN 5, I’m using it as an isolated management network (firewalled off with iptables at Layer 3).

Linksys WRT32X Switch Setup:

<figure class="wp-block-image size-large"></figure>

You can then go to the Networks section in the UniFi AP Site configuration and add a VLAN-Only Network (set the ID to 3 or 4) and then on the Wireless page create an SSID which uses that Network Name in the WiFi settings.

Note: To achieve a similar setup on an OpenWRT AP, you can tag the WAN port on those same VLAN numbers and then, on the Interfaces page, create an unmanaged interface from the related VLAN sub-interface listed; this interface can then be assigned to the SSID network on the Wireless networks page.
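On the OpenWRT AP side, that unmanaged interface can be sketched in /etc/config/network roughly like this (the interface name is mine; eth0.4 follows the guest VLAN 4 from the example above):

```
config interface 'guest'
	option ifname 'eth0.4'
	option proto 'none'
```

With proto set to 'none', OpenWRT bridges the sub-interface without assigning it an address, so Layer 2 traffic passes through to the SSID untouched.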

<figure class="wp-block-image size-large"></figure>

Making hibernation work under Linux Lockdown

Posted by Matthew Garrett on February 21, 2021 08:37 AM
Linux draws a distinction between code running in kernel (kernel space) and applications running in userland (user space). This is enforced at the hardware level - in x86-speak[1], kernel space code runs in ring 0 and user space code runs in ring 3[2]. If you're running in ring 3 and you attempt to touch memory that's only accessible in ring 0, the hardware will raise a fault. No matter how privileged your ring 3 code, you don't get to touch ring 0.

Kind of. In theory. Traditionally this wasn't well enforced. At the most basic level, since root can load kernel modules, you could just build a kernel module that performed any kernel modifications you wanted and then have root load it. Technically user space code wasn't modifying kernel space code, but the distinction was more semantic than useful. But it got worse - root could also map memory ranges belonging to PCI devices[3], and if the device could perform DMA you could just ask the device to overwrite bits of the kernel[4]. Or root could modify special CPU registers ("Model Specific Registers", or MSRs) that alter CPU behaviour via the /dev/msr interface, and compromise the kernel boundary that way.

It turns out that there were a number of ways root was effectively equivalent to ring 0, and the boundary was more about reliability (ie, a process running as root that ends up misbehaving should still only be able to crash itself rather than taking down the kernel with it) than security. After all, if you were root you could just replace the on-disk kernel with a backdoored one and reboot. Going deeper, you could replace the bootloader with one that automatically injected backdoors into a legitimate kernel image. We didn't have any way to prevent this sort of thing, so attempting to harden the root/kernel boundary wasn't especially interesting.

In 2012 Microsoft started requiring vendors ship systems with UEFI Secure Boot, a firmware feature that allowed[5] systems to refuse to boot anything without an appropriate signature. This not only enabled the creation of a system that drew a strong boundary between root and kernel, it arguably required one - what's the point of restricting what the firmware will stick in ring 0 if root can just throw more code in there afterwards? What ended up as the Lockdown Linux Security Module provides the tooling for this, blocking userspace interfaces that can be used to modify the kernel and enforcing that any modules have a trusted signature.

But that comes at something of a cost. Most of the features that Lockdown blocks are fairly niche, so the direct impact of having it enabled is small. Except that it also blocks hibernation[6], and it turns out some people were using that. The obvious question is "what does hibernation have to do with keeping root out of kernel space", and the answer is a little convoluted and is tied into how Linux implements hibernation. Basically, Linux saves system state into the swap partition and modifies the header to indicate that there's a hibernation image there instead of swap. On the next boot, the kernel sees the header indicating that it's a hibernation image, copies the contents of the swap partition back into RAM, and then jumps back into the old kernel code. What ensures that the hibernation image was actually written out by the kernel? Absolutely nothing, which means a motivated attacker with root access could turn off swap, write a hibernation image to the swap partition themselves, and then reboot. The kernel would happily resume into the attacker's image, giving the attacker control over what gets copied back into kernel space.

This is annoying, because normally when we think about attacks on swap we mitigate them by requiring an encrypted swap partition. But in this case, our attacker is root, and so already has access to the plaintext version of the swap partition. Disk encryption doesn't save us here. We need some way to verify that the hibernation image was written out by the kernel, not by root. And thankfully we have some tools for that.

Trusted Platform Modules (TPMs) are cryptographic coprocessors[7] capable of doing things like generating encryption keys and then encrypting things with them. You can ask a TPM to encrypt something with a key that's tied to that specific TPM - the OS has no access to the decryption key, and nor does any other TPM. So we can have the kernel generate an encryption key, encrypt part of the hibernation image with it, and then have the TPM encrypt it. We store the encrypted copy of the key in the hibernation image as well. On resume, the kernel reads the encrypted copy of the key, passes it to the TPM, gets the decrypted copy back and is able to verify the hibernation image.
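
The wrap-and-verify flow can be sketched in a few lines of Python. This is a toy model, not real TPM cryptography: the XOR "wrapping" stands in for the TPM encrypting the key under a key that never leaves the chip, and an HMAC stands in for the integrity check over the hibernation image:

```python
import hashlib
import hmac
import secrets

# Stand-in for the TPM's internal key; in reality this never leaves the
# TPM, and the OS only ever sees wrapped (encrypted) blobs.
TPM_SECRET = secrets.token_bytes(32)

def tpm_wrap(key: bytes) -> bytes:
    # Toy "encryption": XOR against a keystream derived from the TPM secret.
    stream = hashlib.sha256(TPM_SECRET).digest()
    return bytes(a ^ b for a, b in zip(key, stream))

def tpm_unwrap(blob: bytes) -> bytes:
    return tpm_wrap(blob)  # XOR is its own inverse

# Hibernate: the kernel generates a fresh key, authenticates the image,
# and stores only the wrapped key alongside the image on disk.
key = secrets.token_bytes(32)
image = b"...hibernation image contents..."
tag = hmac.new(key, image, hashlib.sha256).digest()
wrapped = tpm_wrap(key)

# Resume: unwrap the key via the TPM and verify the image before using it.
resumed_tag = hmac.new(tpm_unwrap(wrapped), image, hashlib.sha256).digest()
assert hmac.compare_digest(resumed_tag, tag)
```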

That's great! Except root can do exactly the same thing. This tells us the hibernation image was generated on this machine, but doesn't tell us that it was done by the kernel. We need some way to be able to differentiate between keys that were generated in kernel and ones that were generated in userland. TPMs have the concept of "localities" (effectively privilege levels) that would be perfect for this. Userland is only able to access locality 0, so the kernel could simply use locality 1 to encrypt the key. Unfortunately, despite trying pretty hard, I've been unable to get localities to work. The motherboard chipset on my test machines simply doesn't forward any accesses to the TPM unless they're for locality 0. I needed another approach.

TPMs have a set of Platform Configuration Registers (PCRs), intended for keeping a record of system state. The OS isn't able to modify the PCRs directly. Instead, the OS provides a cryptographic hash of some material to the TPM. The TPM takes the existing PCR value, appends the new hash to that, and then stores the hash of the combination in the PCR - a process called "extension". This means that the new value of the PCR depends not only on the value of the new data, it depends on the previous value of the PCR - and, in turn, that previous value depended on its previous value, and so on. The only way to get to a specific PCR value is to either (a) break the hash algorithm, or (b) perform exactly the same sequence of writes. On system reset the PCRs go back to a known value, and the entire process starts again.
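
The extension operation can be modelled directly with a hash function. A minimal sketch, assuming SHA-256 PCRs that reset to all zeroes (as on TPM 2):

```python
import hashlib

def extend(pcr: bytes, data: bytes) -> bytes:
    # PCR extension: new value = H(old value || H(data))
    measurement = hashlib.sha256(data).digest()
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start from a known value on reset.
pcr = bytes(32)
pcr = extend(pcr, b"first event")
pcr = extend(pcr, b"second event")

# Replaying the same sequence reproduces the value; any other order or
# content produces something different.
same = extend(extend(bytes(32), b"first event"), b"second event")
swapped = extend(extend(bytes(32), b"second event"), b"first event")
assert pcr == same
assert pcr != swapped
```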

Some PCRs are different. PCR 23, for example, can be reset back to its original value without resetting the system. We can make use of that. The first thing we need to do is to prevent userland from being able to reset or extend PCR 23 itself. All TPM accesses go through the kernel, so this is a simple matter of parsing the write before it's sent to the TPM and returning an error if it's a sensitive command that would touch PCR 23. We now know that any change in PCR 23's state will be restricted to the kernel.

When we encrypt material with the TPM, we can ask it to record the PCR state. This is given back to us as metadata accompanying the encrypted secret. Along with the metadata is an additional signature created by the TPM, which can be used to prove that the metadata is both legitimate and associated with this specific encrypted data. In our case, that means we know what the value of PCR 23 was when we encrypted the key. That means that if we simply extend PCR 23 with a known value in-kernel before encrypting our key, we can look at the value of PCR 23 in the metadata. If it matches, the key was encrypted by the kernel - userland can create its own key, but it has no way to extend PCR 23 to the appropriate value first. We now know that the key was generated by the kernel.

But what if the attacker is able to gain access to the encrypted key? Let's say a kernel bug is hit that prevents hibernation from resuming, and you boot back up without wiping the hibernation image. Root can then read the key from the partition, ask the TPM to decrypt it, and then use that to create a new hibernation image. We probably want to prevent that as well. Fortunately, when you ask the TPM to encrypt something, you can ask that the TPM only decrypt it if the PCRs have specific values. "Sealing" material to the TPM in this way allows you to block decryption if the system isn't in the desired state. So, we define a policy that says that PCR 23 must have the same value at resume as it did on hibernation. On resume, the kernel resets PCR 23, extends it to the same value it did during hibernation, and then attempts to decrypt the key. Afterwards, it resets PCR 23 back to the initial value. Even if an attacker gains access to the encrypted copy of the key, the TPM will refuse to decrypt it.
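
The sealing policy can be modelled with a toy TPM that releases a secret only when the current PCR value matches the one recorded at seal time. This is an illustrative sketch; a real TPM encrypts the secret and enforces the policy in hardware:

```python
import hashlib
import secrets

def extend(pcr: bytes, data: bytes) -> bytes:
    # PCR extension: new value = H(old value || H(data))
    return hashlib.sha256(pcr + hashlib.sha256(data).digest()).digest()

class ToyTPM:
    def __init__(self):
        self.pcr23 = bytes(32)  # PCRs reset to a known value

    def seal(self, secret: bytes):
        # A real TPM encrypts the secret; here we just record the policy.
        return (self.pcr23, secret)

    def unseal(self, blob):
        policy, secret = blob
        if self.pcr23 != policy:
            raise PermissionError("PCR 23 does not match the sealing policy")
        return secret

tpm = ToyTPM()
# The kernel extends PCR 23 with its known value, then seals the key to it.
tpm.pcr23 = extend(tpm.pcr23, b"kernel hibernate marker")
blob = tpm.seal(secrets.token_bytes(32))

# After a reset, PCR 23 is back to zeroes, so unsealing fails...
tpm.pcr23 = bytes(32)
try:
    tpm.unseal(blob)
    denied = False
except PermissionError:
    denied = True
assert denied

# ...until the kernel redoes the same extension, restoring the policy value.
tpm.pcr23 = extend(tpm.pcr23, b"kernel hibernate marker")
assert tpm.unseal(blob) == blob[1]
```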

And that's what this patchset implements. There's one fairly significant flaw at the moment, which is simply that an attacker can just reboot into an older kernel that doesn't implement the PCR 23 blocking and set up state by hand. Fortunately, this can be avoided using another aspect of the boot process. When you boot something via UEFI Secure Boot, the signing key used to verify the booted code is measured into PCR 7 by the system firmware. In the Linux world, the Shim bootloader then measures any additional keys that are used. By either using a new key to tag kernels that have support for the PCR 23 restrictions, or by embedding some additional metadata in the kernel that indicates the presence of this feature and measuring that, we can have a PCR 7 value that verifies that the PCR 23 restrictions are present. We then seal the key to PCR 7 as well as PCR 23, and if an attacker boots into a kernel that doesn't have this feature the PCR 7 value will be different and the TPM will refuse to decrypt the secret.

While there's a whole bunch of complexity here, the process should be entirely transparent to the user. The current implementation requires a TPM 2, and I'm not certain whether TPM 1.2 provides all the features necessary to do this properly - if so, extending it shouldn't be hard, but also all systems shipped in the past few years should have a TPM 2, so that's going to depend on whether there's sufficient interest to justify the work. And we're also at the early days of review, so there's always the risk that I've missed something obvious and there are terrible holes in this. And, well, given that it took almost 8 years to get the Lockdown patchset into mainline, let's not assume that I'm good at landing security code.

[1] Other architectures use different terminology here, such as "supervisor" and "user" mode, but it's broadly equivalent
[2] In theory rings 1 and 2 would allow you to run drivers with privileges somewhere between full kernel access and userland applications, but in reality we just don't talk about them in polite company
[3] This is how graphics worked in Linux before kernel modesetting turned up. XFree86 would just map your GPU's registers into userland and poke them directly. This was not a huge win for stability
[4] IOMMUs can help you here, by restricting the memory PCI devices can DMA to or from. The kernel then gets to allocate ranges for device buffers and configure the IOMMU such that the device can't DMA to anything else. Except that region of memory may still contain sensitive material such as function pointers, and attacks like this can still cause you problems as a result.
[5] This describes why I'm using "allowed" rather than "required" here
[6] Saving the system state to disk and powering down the platform entirely - significantly slower than suspending the system while keeping state in RAM, but also resilient against the system losing power.
[7] With some handwaving around "coprocessor". TPMs can't be part of the OS or the system firmware, but they don't technically need to be an independent component. Intel have a TPM implementation that runs on the Management Engine, a separate processor built into the motherboard chipset. AMD have one that runs on the Platform Security Processor, a small ARM core built into their CPU. Various ARM implementations run a TPM in Trustzone, a special CPU mode that (in theory) is able to access resources that are entirely blocked off from anything running in the OS, kernel or otherwise.

comment count unavailable comments

Use btrfs compression in Fedora 33

Posted by Lukas "lzap" Zapletal on February 21, 2021 12:00 AM

Use btrfs compression in Fedora 33

Btrfs has been available in Fedora for quite some time, and starting from Fedora 33, new installations of the Workstation edition use it by default. Btrfs is a pretty capable file system with lots of options; let’s take a look at one aspect: transparent per-file compression.

There’s a little bit of misunderstanding about how this works, and some people recommend mounting with the compress option. This is actually not necessary, and I would strongly suggest NOT using this option. See, this option makes btrfs attempt to compress all files that are being written. If the beginning of a file cannot be effectively compressed, it’s marked as “not for compression” and compression is never attempted again (this can even be overridden with the compress-force option). This looks nice on paper.

The problem is, not all files are good candidates for compression. Compression takes time and can dramatically worsen performance; things like database files or virtual machine images should never be compressed. Performance of libvirt/KVM drops terribly, by an order of magnitude, if an inefficient backing store (qcow2) is used.
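
A quick way to see why: high-entropy data, which is what database pages and qcow2 images often resemble, does not shrink and can even grow slightly, while ordinary text compresses very well. A sketch using Python’s zlib as a stand-in for the btrfs compressors:

```python
import os
import zlib

text = b"The quick brown fox jumps over the lazy dog. " * 1000
random_data = os.urandom(len(text))  # models already-compressed content

compressed_text = zlib.compress(text)
compressed_random = zlib.compress(random_data)

assert len(compressed_text) < len(text) // 10      # text shrinks dramatically
assert len(compressed_random) >= len(random_data)  # random data does not shrink
```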

I suggest keeping the default mount options the Anaconda installer deploys; note that none of them are related to compression. Instead, use the per-file (and per-directory) feature of btrfs to mark files and directories for compression. A great candidate is /usr, which contains most of the system, including binaries and documentation.

One mount option that is actually useful, and which Anaconda does not set by default, is noatime. Writing access times on a copy-on-write file system can be very inefficient. Note this option implies nodiratime, so it’s not necessary to set both.
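
For illustration, a hypothetical /etc/fstab entry for the root subvolume with noatime added (the UUID is a placeholder; keep whatever options Anaconda wrote for your system and simply append noatime):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /  btrfs  subvol=root,noatime  0 0
```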

To enable compression for /usr, simply mark this directory for compression. There are several compression algorithms available: zlib (slowest, best ratio), zstd (decent ratio, good performance) and lzo (best performance, worst ratio). I suggest sticking with zstd, which lies in the middle. There are also compression level options; unfortunately, the utility does not allow setting those at the time of writing. Luckily, the default level (3) is reasonable.

# btrfs property set /usr compression zstd

Now, btrfs does not immediately start compressing the contents of the directory. Instead, every time a file is written, the data in those blocks is compressed. To explicitly compress all existing files recursively, do this:

# btrfs filesystem defragment -r -v -czstd /usr

Let’s find out how much space we have saved:

# compsize /usr
Processed 55341 files, 38902 regular extents (40421 refs), 26932 inline.
Type       Perc     Disk Usage   Uncompressed Referenced  
TOTAL       50%      1.1G         2.2G         2.3G       
none       100%      337M         337M         338M       
zstd        42%      844M         1.9G         1.9G  

Exactly half of the space is saved on a standard installation of Fedora 33 Server when /usr is compressed using the zstd algorithm with the default level. Note that some files are not compressed; these are files so small that compression makes no sense (a block would be used anyway). Not bad.

To disable compression perform the following command:

# btrfs property set /usr compression ""

Unfortunately, at the time of writing it is not possible to force decompression of a directory (or files); there is no defragment command to do this. If you really need to, create a script which reads and rewrites all the files, but be careful.

Keep in mind the btrfs property command will force all files to be compressed, even if they do not compress well. This works pretty well for /usr; just make sure there is no third-party software installed there that writes files. There is also a way to mark files for compression so that btrfs can give up on them if they don’t compress well: setting chattr +c on files or directories. Unfortunately, you can’t choose the compression algorithm that way - btrfs will default to the slower zlib.

Remember: do not compress everything; specifically, the /var directory should definitely not be compressed. If you accidentally marked files within /var for compression, you can fix this with:

# find /var -exec btrfs property set {} compression "" \;

Again, this only marks them not to be compressed; it’s currently not possible to explicitly decompress them. Use the compsize utility to find out how much of the data is still compressed.

That’s all for today; I will probably be sharing some more btrfs posts.

Remove rsyslog and use journald in Fedora

Posted by Lukas "lzap" Zapletal on February 21, 2021 12:00 AM

Remove rsyslog and use journald in Fedora

I am reinstalling my home server from scratch, and I want to start using btrfs, which seems like a great fit for what I am doing (NAS, backups). Installation was smooth, no problems; however, I noticed that Fedora Server 33 installed both journald and rsyslog, and journald was configured to do persistent logging.

You know, this is weird. On Red Hat Enterprise Linux 7 and 8, journald is configured in volatile mode and set to forward all logs to syslog. On Fedora 33, it looks like both rsyslog and journald are logging (to /var/log/messages and /var/log/journal respectively) and no forwarding is going on. I am going to file a BZ for folks to investigate.

However, I prefer to use only journald these days; here is how to do it. First, stop journald:

# systemctl stop systemd-journald

Configure journald. If you want persistent logging, there is actually nothing to configure; just make sure the directory exists:

# mkdir /var/log/journal
# systemd-tmpfiles --create --prefix /var/log/journal

If you want volatile logging (in memory only), configure it as follows (feel free to modify the maximum memory; I just feel that a few megabytes is okay):

# cat /etc/systemd/journald.conf
[Journal]
Storage=volatile
RuntimeMaxUse=5M

Optionally, delete the existing logs if you plan on using volatile logging:

# journalctl --rotate
# journalctl --vacuum-size=0

Finally, start up the service:

# systemctl start systemd-journald

You may uninstall rsyslog too:

# systemctl disable --now rsyslog
# dnf remove rsyslog

Done!

Installing Unifi Controller on Fedora 33

Posted by Lukas "lzap" Zapletal on February 21, 2021 12:00 AM

Installing Unifi Controller on Fedora 33

Installing the Unifi Controller on Fedora 33 is easy. Step one: install MongoDB from the official site, since it is no longer available in Fedora for licensing reasons. Use the EL8 version, which appears to work fine:

# dnf install ./mongodb-org-server-4.4.4-1.el8.x86_64.rpm

If you haven’t enabled the RPM Fusion repositories yet, do so:

# dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Install the controller:

# dnf install unifi

For some reason, unifi has a hardcoded path to the Java alternatives symlink (/usr/lib/jvm/jre-1.8.0/bin/java) which did not work on the initial start; reinstalling Java helped:

# dnf reinstall java-1.8.0-openjdk-headless

And enable the service:

# systemctl enable --now unifi

Beware: the mongod service does not need to be started, as the unifi service spawns its own mongod process:

# systemctl disable --now mongod

You’re done! Visit https://nuc.home.lan:8443 to manage your site.

Caturday is for sunbeams

Posted by Fedora Cloud Working Group on February 20, 2021 02:40 PM

Starting Caturday right with Willow and Bubby.

The post Caturday is for sunbeams appeared first on Dissociated Press.

Friday’s Fedora Facts: 2021-07

Posted by Fedora Community Blog on February 19, 2021 10:18 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! Fedora 34 Changes should be 100% code complete on Tuesday. The Beta freeze begins Tuesday.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
Conference | Location | Date     | CfP
LISA       | virtual  | 1-3 June | closes 23 Feb
</figure>

Help wanted

Prioritized Bugs

<figure class="wp-block-table">
Bug ID  | Component | Status
1883609 | shim      | ASSIGNED
</figure>

Upcoming meetings

Releases

Fedora 34

Schedule

Upcoming key schedule milestones:

  • 2021-02-23 — Change completion deadline (100% code complete)
  • 2021-02-23 — Beta freeze begins
  • 2021-03-16 — Beta release early target
  • 2021-03-23 — Beta release target #1

Changes

Change tracker bug status. See the ChangeSet page for details of approved changes.

<figure class="wp-block-table">
Status   | Count
ASSIGNED | 5
MODIFIED | 19
POST     | 1
ON_QA    | 30
CLOSED   | 3
</figure>

Blockers

<figure class="wp-block-table">
Bug ID  | Component            | Bug Status | Blocker Status
1929940 | dogtag-pki           | NEW        | Proposed(Beta)
1916094 | pipewire             | ON_QA      | Proposed(Beta)
1928542 | xorg-x11-drv-nouveau | NEW        | Proposed(Beta)
</figure>

Fedora 35

Changes

<figure class="wp-block-table">
Proposal | Type | Status
Autoconf-2.71 | System-Wide | FESCo #2579
POWER 4k page size | System-Wide | Announced
rpmautospec – removing release and changelog fields from spec files | System-Wide | Announced
</figure>

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-07 appeared first on Fedora Community Blog.

One from the vaults: World Destruction by Time Zone

Posted by Fedora Cloud Working Group on February 19, 2021 02:27 PM

Needed a bit of adrenaline on top of my caffeine today, pulled this one out of the vaults for a quick boost. “World Destruction” is...

The post One from the vaults: World Destruction by Time Zone appeared first on Dissociated Press.

DevConf2021.cz - Presentation and Demo

Posted by Linux System Roles on February 19, 2021 12:00 PM

There was a presentation entitled “Managing Standard Operating Envs with Ansible” given at DevConf2021.cz. Demo files and links to videos can be found at DevConf2021.cz

PHP version 7.4.16RC1 and 8.0.3RC1

Posted by Remi Collet on February 19, 2021 06:02 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, the perfect solution for such tests, and also as base packages.

RPM of PHP version 8.0.3RC1 are available as SCL in remi-test repository and as base packages in the remi-php80-test repository for Fedora 32-34 and Enterprise Linux.

RPM of PHP version 7.4.16RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 32-34 or remi-php74-test repository for Enterprise Linux.

PHP version 7.3 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Parallel installation of version 7.4 as Software Collection:

yum --enablerepo=remi-test install php74

Update of system version 8.0:

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.4:

yum --enablerepo=remi-php74,remi-php74-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf --enablerepo=remi-modular-test update php\*

x86_64 builds now use Oracle Client version 19.9 (version 21.1 will be used soon).

EL-8 packages are built using RHEL-8.3.

EL-7 packages are built using RHEL-7.9.

The RC version is usually the same as the final version (no changes are accepted after RC, except for security fixes).

Version 8.0.0RC4 is also available as Software Collections.

Software Collections ( php74, php80)

Base packages (php)

Helper script for easy cherry picks with git

Posted by Lukas "lzap" Zapletal on February 19, 2021 12:00 AM

Helper script for easy cherry picks with git

After many, many manual cherry picks, I’ve decided to put together a short script. It’s fully interactive and hopefully self-explanatory.

[lzap@box foreman]$ git xcp
Looks like you want to cherry-pick, huh?!
Branch you want to pick INTO: 2.4-stable
Branch you want to pick FROM: develop
Allright, allright, here is the menu:
bb1a931b6 Fixes #31064 - sequence helper macro
089e232c0 Fixes #31830 - support children in SkeletonLoader component
ed70741fa Fixes #31882 - set ENC var hostname to shortname
6c3794cf1 Refs #31720 - Apply @ezr-ondrej suggestions from code review
d43eba522 Refs #31720 - address comments and add links
809e6eec5 Refs #31720 - Apply tbrisker suggestions
0cc6abbb4 Fixes #31720 - add first draft doc
648f72365 Fixes #31873 - Expose edit permissions in api index layout
9ad6b1a25 Refs #30215 - mark string for translation
24d2df594 Fixes #31855 - add ellipsis with tooltip for long setting values
Commits separated by space or enter to give up: 0cc6abbb4 809e6eec5 d43eba522
Hit ENTER to push, Ctrl-C to interrupt. See ya!

The script remembers the INTO and FROM selections (it stores them in the .git directory) and uses git stash so it works even when there are uncommitted changes.

Influential women

Posted by Daniel Pocock on February 18, 2021 10:50 PM

When people ask me about success engaging women in some of the mentoring programs for free, open source software, I never feel comfortable taking credit for that. I feel that it comes down to one simple thing: collaborating with a number of successful and influential women in a variety of different places. Today is the tenth anniversary of the passing of Sally Shaw. Sally had made monumental contributions to the success of the Yarra Yarra Rowing Club (YYRC), even while fighting cancer, raising a family and managing projects for IBM.

Around the same time I met Sally, I had also taken on one of my first web hosting clients, a newly elected politician in the opposition party, Lynne Kosky. Both the YYRC web site and Lynne's web site were among the first projects in my new content management system (CMS), hosted in a GNU/Linux environment. Compared to Sally, I had far fewer opportunities to meet Lynne, her party was elected into government and she became incredibly busy.

If Lynne were alive today, looking at the way her career progressed, there is every chance she would be the state premier, leading the state's response to the pandemic. Moreover, I suspect that if you swapped these two women, they could blaze a trail in each other's workplace just as easily as the career they had chosen. Sadly, both of their lives were cut short for similar reasons.

The 2010/2011 YYRC Annual Report has been used to recognize Sally's contributions:

Sally Shaw sadly lost her battle with Ovarian Cancer in --- 2011. Sally was a significant contributor to Yarra most notably through her involvement with the negotiations with Carey and the subsequent project management of the current Club House. Sally welcomed new Members to the Club through her involvement with novice coaching and later moved into a role on the Committee serving as Secretary and later as Vice President. Sally was a successful oarswoman, winning the Stokes Salver women's trophy in 2002 during the Winter Sculling Series in addition to regatta wins over a decade of rowing. Sally was a Life Member of Yarra and has a racing shell named in her honour.

Sally was committed to living life fully, despite the challenges of regular hospital visits over the last four years and this provided inspiration for her many rowing and other friends. Sally is survived by her partner Bruce Ricketts, former President of Yarra Yarra, and their two children Grace and Felix, to whom we extend our love and support.

It is interesting to note that Sally is commemorated on the same page as Hubert Frederico, former president of Rowing Victoria, an organization that has produced numerous Olympic and World champions.

In fact, as I went looking through my archive, it wasn't long before I found Sally in a crew with three-time World Champion and Olympian Jane Robinson.

Sally Shaw, Jane Robinson, mixed eight, Yarra Yarra Rowing Club Sally Shaw, Jane Robinson, mixed eight, Yarra Yarra Rowing Club

The same annual report includes a tremendous list of projects ticked off from the Club's Long Term Plan, most notably, the recently completed Club House. It is incredible to see Sally's impact in so many areas of this document. It is even more incredible to think that she was ticking these things off while fighting cancer.

Melbourne's rowing precinct is at a point where the park meets the city center, on the opposite side of a bridge from the main railway station. With major events taking place in Melbourne throughout the year, there are an incredible number of stakeholders and influences on any construction project in this region. Projects like this test the team's skills in every way from compliance to diplomacy.

YYRC, old club house, boathouse drive YYRC, new club house

A video was made in the old Club House before it was demolished. Sally tells us about her most memorable moment; in the world of free software, it sounds a lot like a Code of Conduct violation.

<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="315" src="https://www.youtube.com/embed/godeqN4qjUs?start=1000" width="560"></iframe>

It looks like my camera captured that too:

YYRC, Sally Shaw, swimming YYRC, Sally Shaw, winning medals

Coxing Head of the Yarra

Sally competed both as a rower and a cox. The Head of the Yarra is a gruelling race where approximately one hundred crews row 8.6km upstream:

Head of the Yarra course

It was originally held in late January, the peak of the Australian summer, but they now hold it in November to reduce the number of deaths.

It is particularly challenging for the cox because it is not a straight course, in fact, it is every cox's nightmare. This photo captures Sally skillfully steering a combined Yarra/Richmond crew through the treacherous "Big bend", crushing the hopes of another crew as they run into the bank:

YYRC, Head of the Yarra

and then passing a junior crew as they come out of the corner:

YYRC, Head of the Yarra

Meanwhile, Archive.org has captured the original web site we created for Lynne Kosky. Lynne resigned from public office shortly before Sally passed away, and Lynne's battle with cancer only became public in 2011. Like Sally, Lynne had ticked off a huge list of projects: Minister for Finance, Minister for Education and the poisoned chalice, Minister for Public Transport.

Lynne Kosky's first web site

Both of these women contributed greatly to the community and to projects that are prominently visible in the city of Melbourne today. Yet their greatest legacy may be the impact they have had on people around them and how we see the potential of women in Australia.

One of the mentors I've worked with called me one day to ask about a female intern spending too much time on political pursuits. I assured him I've seen this before. Despite distractions, or maybe because of them, the intern completed more work than many other interns, male or female.

On the tenth anniversary of Sally's passing, there are news reports about the status of women in other countries, such as Japan, where women have been granted permission to watch men making decisions for them. When I hear about people who claim to represent free software mistreating female employees on sick leave, I imagine them inflicting pain on women like Sally or Lynne. Having a point of reference like this makes it easier to empathize with the victims in those cases.

If you want to commemorate these women or any other victims of cancer, please do not throw IBM employees into the river. A good idea is to simply ask some of the women you work with for suggestions. If you are in Melbourne, you can hire the YYRC Club House for an event or join a Learn to Row program.

Felix, Grace, Sally Shaw, YYRC

User Experience (UX) + Free Software = ❤

Posted by Máirín Duffy on February 18, 2021 08:07 PM

Today I gave a talk at DevConf.cz, which I previously gave as a keynote this past November at SeaGL, about UX and Free Software using the ChRIS project as an example. This is a blog-formatted version of that talk, although you can view a video of it here from SeaGL if you’d rather a video.

Let’s talk about a topic that is increasingly critical as time goes on, that is really important for those of us who work on free software and care really deeply about making a positive impact in the world. Let’s talk about user experience (UX) and free software, using the ChRIS Project as a case study.

What is the ChRIS Project?

The ChRIS project is an open source, free software platform developed at Boston Children’s Hospital in partnership with other organizations – Red Hat (my employer), Boston University, the Massachusetts Open Cloud, and others.

The overarching goal of the ChRIS project is to make all of the amazing free software in the medical space more accessible and usable to researchers and practitioners in the field.

This is just a quick peek under the hood – because I’m saying ChRIS is a “platform,” it can be unclear what exactly that means. ChRIS’ core is the backend, which we call “CUBE” – various UIs are attached to it, which we’ll cover in a bit. The backend currently connects to OpenStack and OpenShift running on the Massachusetts Open Cloud (MOC). It’s a container-based system, so the backend – which is also connected to data storage – pulls data from a medical institution and pushes it into a series of containers that it chains together to construct a full pipeline.

Each container is a free software medical tool that performs some kind of analysis. All of them follow an input/output model – you push data into the container, it does the compute, you pull the data out of it, and pass that output on to the next container in the chain or back up to the user via a number of front ends.
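That input/output chaining can be sketched in a simplified, hypothetical form (the function names here are invented for illustration; in the real platform each tool is a container image and ChRIS moves the data between them):

```python
from pathlib import Path

def run_tool(tool, in_dir: Path, out_dir: Path) -> Path:
    """Run one containerized-style tool: read from in_dir, write to out_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    tool(in_dir, out_dir)
    return out_dir

def run_pipeline(tools, data_dir: Path, work_dir: Path) -> Path:
    """Chain tools so each one's output directory is the next one's input."""
    current = data_dir
    for i, tool in enumerate(tools):
        current = run_tool(tool, current, work_dir / f"node-{i}")
    return current

# Two toy "tools": one uppercases a report, one counts its lines.
def uppercase(in_dir, out_dir):
    for f in in_dir.iterdir():
        (out_dir / f.name).write_text(f.read_text().upper())

def count_lines(in_dir, out_dir):
    for f in in_dir.iterdir():
        n = len(f.read_text().splitlines())
        (out_dir / f"{f.name}.count").write_text(str(n))
```

The point is only the shape of the contract: every tool reads from one directory and writes to another, which is what makes arbitrary chaining possible.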

This is just a quick overview so you understand what ChRIS is. We’ll go into a little more detail later.

Who am I?

So who am I and what do I know about any of this anyway?

I’m a UX practitioner and I have been working at Red Hat for 16 years now. I specialize in working with upstream free software communities, typically embedded in those communities. ChRIS is one of the upstream projects I work in.

I’m also a long-term free software user myself. I’ve been using it since I was in high school and discovered Linux – I fell in love with it and how customizable it is and how it provides an opportunity to co-create my own computing environment.

Principles of UX and Software Freedom

From that experience and background, having worked on the UX on many free software projects over the years, I’ve come up with two simple principles of UX and software freedom.

The first one is that software freedom is critical to good UX.

The second, which I want to focus particularly here, is that good UX is critical to software freedom.

(If you are curious about the first principle, there is a talk I gave at DevConf.us some time ago that you can watch.)

3 Questions to Ask Yourself

So when we start thinking about how good UX is critical to software freedom, I want you to ask yourself these three questions:

  1. If a tree falls in the forest and no one is around to hear it… does it make a sound? (You’ve probably heard some form of this before.)
  2. If your software has features and no one can use those features… does it really have those features?
  3. If your software is free software, but only a few people can use it… is it really providing software freedom?

Lots of potential…

Here, in 2021, we have a wealth of free & open source technology available to us. Innovation does not require starting from scratch! For example:

  1. In seconds you can get containerized apps running on any system. (https://podman.io/getting-started/)
  2. In minutes you can deploy a Kubernetes cluster on your laptop. (https://www.redhat.com/sysadmin/kubernetes-cluster-laptop)
  3. In minutes you can deploy a deep learning model on Kubernetes. (https://opensource.com/article/20/9/deep-learning-model-kubernetes)

This is amazing – in free software, we’ve made so much progress in the past decade or so. You can work in any domain and start up a new software project, and all of the underlying infrastructure and plumbing you need is already available, off-the-shelf, free software licensed, for you to use.

You can focus on the bits, the new innovation, that you really care about, and avoid reinventing the wheel on the foundation stuff you need.

… and too much complication to easily realize that potential.

The problem – well, take a look at this. This is the cloud native landscape put out by the Cloud Native Computing Foundation.

I don’t mean to pick on cloud technology at all – you’ll see this level of complication in any technical domain, I think. It’s…. a little complicated, right? There’s just so much. So many platforms, tools, standards, ways of doing things.

Technologists themselves have a hard time keeping up with this.

How do we expect medical experts and clinicians to keep up with that, when even software developers have a difficult time keeping up?

The thing is, there’s a lot of potential here – there really are so many free software tools in the medical space. Some of them have been around for years.

By default, they tend to be developed and released as free software, because many are created by researchers and academic labs that want to collaborate and share.

But you know, as a medical practitioner – how do you actually make use of them? There are a few reasons they end up being complicated to use:

  • They’re often built by researchers who don’t typically have a software development background.
  • They’re usually built for use in a specific study or under a specific lab environment without wider deployment in mind.
  • There tends to be a lack of standardization in the tools and lack of integration between them.
  • Depending on the computation involved, they may require a more sophisticated operating environment than most clinical practitioners have access to.
  • There’s a high barrier to entry.

Even though these tools are free software and publicly available, these aren’t tools your typical medical practitioner could pick up and start using in their practice.

Free software and hacking the Gibson

We have to remember that these are very smart people in the medical field. Neuroscientists and brain surgeons, for example. They’re smart, but they can’t “hack the Gibson.”

A good UX does not require your users to be hackers.

Unfortunately, traditionally and historically, free software has kind of required users to be hackers in order to make the best use of it.

Bridging the gap between free software and frontline usage

So how do we bridge this gap between all of this amazing free software and clinical practice, so this free software can make a positive difference in the world, so it could feasibly positively impact medical outcomes?

Good UX bridges the gap. This is why good UX is so critical to software freedom.

If your software is free software, but only a few people can use it, are you really providing software freedom?

I’m telling you no, not really. You need a good UX to be able to do that – to allow more than a few people to be able to use it, and to be able to provide software freedom.

What are these tools, anyway?

What are all of these amazing free software tools in the medical space?

This is a very quick map I made, based on a 5-10 minute survey of research papers and conference proceedings in various technology-related medical groups. These are all free software tools.

This barely scratches the surface of what is available.

I want to talk about two of them in particular today: COVID-Net and FreeSurfer. They are both tools now available for use on the ChRIS platform.

FreeSurfer

FreeSurfer is an open source suite of tools that focuses on processing brain MRI images.

It’s been around for a long time but it’s not really in clinical use.

This is a free software tool that has a ton of potential to impact medicine. This is a screenshot of a 3D animation created in FreeSurfer running on the ChRIS platform. The workflow here involved taking 2D images from an MRI, taken in slices across the brain. Running on ChRIS, FreeSurfer constructed a 3D volume out of those flat 2D images, and then segmented the brain into all its different structures, color-coding them so you can tell them apart from one another.
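Conceptually, the first step (building a volume from slices) is just stacking 2D arrays along a new axis. A toy NumPy sketch, not FreeSurfer's actual code, and with a trivial threshold standing in for FreeSurfer's far more sophisticated atlas-based segmentation:

```python
import numpy as np

def slices_to_volume(slices):
    """Stack same-shaped 2D MRI slices into a 3D volume."""
    return np.stack(slices, axis=0)

def toy_segmentation(volume, threshold):
    """Stand-in 'segmentation': label voxels above an intensity threshold.
    Real FreeSurfer segmentation uses atlas-based models, not a threshold."""
    return (volume > threshold).astype(np.uint8)
```

The color-coded structures in the screenshot correspond to per-voxel labels like the ones this stand-in produces, just computed by a vastly more capable model.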

How might this be used clinically? You could have a clinician who’s not sure what’s wrong with a patient. Instead of just reviewing the 2D slices, she may pan around this color coded 3D structure and notice one of the structures in the brain is larger than is typical. That might be a clue that gets the patient to a quicker diagnosis and treatment.

This is just a hypothetical example so you can see some of the potential of this free software tool.

Another example of a great free software tool in this space is COVID-Net.

This is a free software project developed by a company called DarwinAI in partnership with the University of Waterloo. It uses a neural network to analyze CT and X-ray chest scans and provide a probability of whether a patient has healthy lungs, COVID, or pneumonia.

It’s open source and available to the general public.

The potential here is to provide an alternative way of triaging patients when COVID test results are backed up or too slow, especially during a surge in COVID cases.

These are just two projects we’ve worked with in the ChRIS project.

How do we get these tools to the frontline, though?

How do we get amazing tools like these in the frontlines, in medical institutions? How do we provide the necessary UX to bridge the gap?

Dr. Ellen Grant, who is Director of the Fetal-Neonatal Neuroimaging and Developmental Science Center at Boston Children’s Hospital, came up with a list of three basic user experience requirements these tools need for clinicians to be able to use them:

  1. They have to be reproducible.
  2. They have to be rapid.
  3. They have to be easy.

Requirement #1: Reproducible

First, let’s talk about reproducibility. In the medical space, you’re interacting with scientists and medical researchers trying to find evidence that supports the effectiveness of new technology or methods. So if a new method comes out – let’s say it’s a new machine learning model – and you’re reading a study showing support for its effectiveness.

If you want the best possible shot at the technique achieving a similar level of effectiveness with your own data, you’ve got to use the same version of the code, in a running environment as similar as possible to the one used in the study. You want to eliminate any variables – like the operating environment – that might skew the output.

Here’s a screenshot of setting up FreeSurfer. This is just not something we can expect medical practitioners to go through in order to reproduce an environment.

How do we make free software tools more reproducible for clinicians?

I’ll use the COVID-Net tool as an example. We worked with the COVID-Net team and they packaged it into a ChRIS plugin container. The ChRIS plugin container contains the actual code and includes a small wrapper on top with metadata and whatnot. (Here is the template for that, we call it the ChRIS cookiecutter.)

Once a tool has been containerized as a ChRIS plugin, it can run on the ChRIS platform, which gives you a number of UX benefits including reproducibility. A clinician can just pick that tool from a list of tools from within the ChRIS UI, push their data to it, and get the results, and ChRIS manages the rest.
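In shape, a plugin is a containerized program that declares its parameters and reads from an input directory and writes to an output directory. A rough, hypothetical sketch of that contract (the real template is the ChRIS cookiecutter mentioned above; all names here are invented for illustration):

```python
import argparse
from pathlib import Path

def main():
    # Plugin-style contract: consume files from inputdir, produce files in outputdir.
    parser = argparse.ArgumentParser(description="toy ChRIS-style plugin")
    parser.add_argument("inputdir")
    parser.add_argument("outputdir")
    parser.add_argument("--prefix", default="out-")
    args = parser.parse_args()

    in_dir, out_dir = Path(args.inputdir), Path(args.outputdir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for f in in_dir.iterdir():
        # A real plugin would run its analysis here; this one just copies.
        (out_dir / (args.prefix + f.name)).write_text(f.read_text())

if __name__ == "__main__":
    main()
```

Because every plugin follows the same directory-in, directory-out shape, the platform can schedule any of them interchangeably and wire their outputs together.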

Taking a few steps back – we have a broader vision for reproducibility via ChRIS here.

This is a screenshot of a prototype of what we call the ChRIS Store.

We envision making all of these amazing free software medical tools as easy to install and run on top of ChRIS as it is to install and run apps on a phone from an app store. So this is an example of a tool containerized for ChRIS – you’d be able to take a look at the tool in the ChRIS Store, deploy it to your ChRIS server, and use it in your analysis.

Even if a tool is a little hard to install, run, and reproduce in the same exact way on its own, for the small cost of packaging and pushing it into the ChRIS plugin ecosystem, it becomes much easier to share, deploy, and reproduce that tool across different ChRIS servers.

Instead of requiring medical researchers and practitioners to use the Linux terminal, to compile code, and to set up environments to exact specifications, we envision them being able to browse through these tools, as in an app store, and easily run them on their ChRIS server. That would mean they would get much more reproducibility out of these tools.

Requirement #2: Rapid

The second requirement from Dr. Grant is rapidness. These tools need to be quick. Why?

Well, for example, we’re still in a pandemic right now. As COVID cases surge, hospitals run out of capacity and need to turn over beds quickly. Computations that take hours or days to run will just not be used by clinicians, who do not have that kind of time. So the tools need to be fast.

Or for a non-pandemic case… you might have a patient who needs to travel far for specialized care – if results could come back in minutes, it could save a sick patient from having to stay in a hotel away from home and wait days for results and to move forward in their treatment.

Some of these computations take a long time, so couldn’t we throw some computing power at them to get the results back quicker?

ChRIS lays the foundation that will enable you to do that. ChRIS can run or orchestrate workloads on a single system, HPC, or a cloud, or across those combined. You can get really rapid results, and ChRIS gives you all the basic infrastructure to do it, so individual organizations don’t have to figure out how to set this up on their own from scratch.

For example – this is a screenshot of the ChRIS UI – it shows how you build these pipelines or analyses in ChRIS. The full pipeline is represented by the graph on the left, and each of the circles or “nodes” on the graph is a container running on ChRIS. Each of these containers is running a free software tool that was containerized for ChRIS.

The blue highlighted container in the graph is running FreeSurfer. In this particular pipeline, ChRIS has spun up multiple copies of the same chain of containers to run in parallel on different pieces of the data output by that blue FreeSurfer node.

You can get this kind of orchestration and computing power just based on the infrastructure you get from ChRIS.

This is a diagram to show another view of it.

You have the ChRIS store at the top with a plugin (P) getting loaded into the ChRIS Backend.

You have the data source – typically a hospital PACS server with medical imaging data, and image data (I).

ChRIS orchestrates the movement of the data and deployment of these containers into different computing environments – maybe one of these here is an external cloud, for example. ChRIS retrieves the data from the data source and pushes it into the containers, and retrieves the container pipeline’s output and stores it, presenting it to the end user in the UI. Again, each one of those containers represents a node on the pipeline graph we saw in the previous slide, and the same pipeline can consist of nodes running in different computing environments.

One of those compute environments that ChRIS utilizes today is the Massachusetts Open Cloud.

This is Dr. Orran Krieger, he is the principal investigator for the Massachusetts Open Cloud at Boston University.

The MOC is a publicly-owned, non-commercial cloud. They collaborate with the ChRIS project, and we have a test deployment of ChRIS that we are using for COVID-Net user testing right now that runs on top of some of the powerpc hardware in the MOC.

The MOC partnership is another way we are looking to make rapid compute in a large cloud deployment accessible for medical institutions – a publicly-owned cloud like the MOC means institutions will not have to sign over their rights to a commercial, proprietary cloud who might not have their best interests at heart.

Requirement #3: Easy

Finally, the last UX requirement we have from Dr. Grant is “easy.”

What we’ve done in the ChRIS project is to create and assemble all of the infrastructure and plumbing needed to connect to powerful computing infrastructures for rapid compute. And we’ve created a container-based structure and are working on creating an ecosystem where all of these great free software tools are easily deployable and reproducible, and you can get the exact same version and environment as studied by researchers showing evidence of effectiveness.

One of the many visions we have for this: a medical researcher could attend a conference, learn about a new tool, and while sitting in the audience (perhaps by scanning a QR code provided by the presenters) access the same tool being presented in the ChRIS store. They could potentially deploy to their own ChRIS on their own data to try it out, same day.

This all needs to be reproducible, and it needs to be easy. I’m going to show you some screenshots of the ChRIS and COVID-Net UIs we’ve built to make running and working with these tools easier.

This is an example of the ChRIS feed list in the Core ChRIS UI. Each of these feeds (what we call custom pipelines) is running on ChRIS. Each pipeline is essentially a composition of various containerized free software tools chained together in an end to end workflow, kicked off with a specific set of data that is pushed through and transformed along the way.

This UI is not geared at clinicians, but is more aimed at researchers with some knowledge of the types of transformations the tools create in the data – for example, brain segmentation – who want to create compositions of different tools to explore the data. They would compose these pipelines in this interface, experiment with them, and once they have created one they have tested and believe is effective, they can save it and reuse it over and over on different data sets.

While you are creating this pipeline, or if you are looking to add on to a pre-existing workflow, you can add additional “nodes” – which are containers running a particular free software tool inside – using this interface. You can see the list of available tools in the dialog there.

As you add nodes to your pipeline, they run right away. This is a view of a specific pipeline, and you can see the node container highlighted in blue here has a status display on the bottom showing that it is currently still computing. When the output is ready, it appears down there as well, per-node, and it syncs the data out and passes it on to the next node to start working on.

Again, this is an interface geared towards a researcher with familiarity analyzing radiological images – but not necessarily the skill set to compile and run them from scratch on the command line. This allows them to select the tools and bring them into a larger integrated analysis pipeline, to experiment with the types of output they get and try the same analysis out on different data sets to test it. They are more likely looking at broad data sets to see trends across them.

A practicing clinician needs vastly simplified interfaces compared to this. They aren’t inventing these pipelines – they are consuming them for a very specific patient image, to see if a specific patient has COVID, for example.

As we collaborate with the COVID-Net team, we are focused on creating a single-purpose UI that uses just one specific pipeline – the COVID-Net analysis pipeline – and allows a clinician to simply select the patient image, click and go, and get the predictive analysis results.

The first step in our collaboration was containerizing the COVID-Net tool as a ChRIS plugin. That took just a few days.

Then together over this past summer, in maybe 2-3 months, we built this very streamlined UI aimed at just this specific case of a clinician running the COVID-Net prediction on a patient lung scan and getting the results back. Underneath this UI is a pipeline, just like the one we just looked at in the core UI – but clinicians will never see that pipeline underneath – it’ll just be working silently in the background for them.

The user simply types in a patient MRN – medical record number – to look up the scans for that patient at the top of the screen, selects the scans they want to submit, and hits analyze. Underneath that data gets pushed into a new COVID-Net pipeline.

They’ll get the analysis results back after just a minute or two, and it looks like this. These are predictive analyses – so here the COVID-Net model believes this patient has about a 75% chance of having normal, healthy lungs and around a 25% or so chance of having COVID.

If they would like to explore this a little further, maybe confirm on the scan themselves to double check the model, they can click on the view button and pull up a full radiology viewer.

Using this viewer, you can take a closer look at the scan, pan, zoom, etc. – all the basic functionality a radiology viewer has.

This is an example of the model we see for ChRIS to provide simplified, easy ways of accessing the rapid compute and reproducible tool workflows we talked about: Standing up streamlined, focused interfaces on top of the ChRIS backend – which provides the platform, plumbing, tooling to quickly stand up a new UI – so clinicians don’t have to develop their own workflows, they can consume tested and vetted workflows created by experts in the medical data analysis field.

To sum it all up –

This is how we are working to meet these three core UX requirements for frontline medical use.

We’re looking to make these free software tools reproducible using the ChRIS container model, rapid by providing access to better computing power, and easy by enabling the development of custom streamlined interfaces to access the tools in a more consumable way.

In other words, the main requirement for these free software tools to get into the hands of front line medical workers is a great user experience.

Generally, for free software to matter, for us to make a difference in the world, for users to be able to enjoy software freedom – we have to provide a great user experience so they can access it.

So in review – the two principles of software freedom and UX:

  1. Software freedom is critical to good UX.
  2. Good UX is critical to software freedom.

I got HoneyComb

Posted by Marcin 'hrw' Juszkiewicz on February 18, 2021 06:09 PM

A few years ago SolidRun released the MACCHIATObin board. Nice fast CPU, PCI Express slot, several network ports. I did not buy it because it supported only 16 GB of memory and I wanted to be able to run OpenStack.

Time has passed, and the HoneyComb LX2 system appeared on the AArch64 market. More cores, more memory. Again I did not buy it — my Ryzen 5 upgrade cost less than the HoneyComb’s price.

And when someone asked me to suggest a serious AArch64 system to buy, I suggested the HoneyComb.

Let us look at hardware

So what do we have here?

  • 16 Cortex-A72 cores
  • 2 SO-DIMM slots (up to 64 GB RAM in total)
  • USB 2.0 and 3.0 ports (as ports and/or headers)
  • standard ATX power socket (no 12V AUX needed)
  • 3 fan connectors (one with PWM, two with 12V)
  • front panel connectors like on x86-64 motherboards
  • M.2 slot for NVME (pcie x4)
  • PCI Express slot (open x8 one so x16 card fits)
  • MicroSD slot (for firmware)
  • 4 SFP+ ports for 10GbE networking
  • 1 GbE port
  • 4 SATA ports
  • serial console via microUSB port
  • power/reset buttons
<figure id="__yafg-figure-1"> HoneyComb board layout <figcaption>HoneyComb board layout</figcaption> </figure>

A lot of networking, and there is even a version with a 100GbE port added: the ClearFog CX LX2.

So how did I get it?

I wrote that I did not buy it, right? Jon Nettleton (from SolidRun) contacted me recently and asked:

Morning. do you have any interest in a HoneyComb? I have some old stock boards available to the community. I figured it may help you out with your UEFI Qemu work.

We discussed SBSA/SBBR stuff and I sent him an email with address information and shipping notes.

Some days passed and the board arrived. I added a spare NVME and two sticks of Kingston HyperX 2933 CL17 memory and it was ready to go (the microSD card holds the firmware):

<figure id="__yafg-figure-2"> HoneyComb board ready to go <figcaption>HoneyComb board ready to go</figcaption> </figure>

Let’s run something

Debian ‘bullseye’ booted right away. Again I used a pendrive from my EBBR-compliant RockPro64. It started without problems.

Network ports issue

Ok, there was one problem — on-board Ethernet ports do not work yet with mainline or distribution kernels, so I had to dig out my old USB network card.

There are patches for the Linux kernel to get all ports running. They may get merged into the 5.13 kernel if things go nicely.

Plans?

I plan a few things for the HoneyComb:

  • check how several distributions handle AArch64 systems
  • improve SBSA ACS code as HoneyComb is almost SBSA level 3 compliant (there are some places where error/warning messages break output)
  • build, deploy and test OpenStack
  • test software
  • check how it works as AArch64 desktop (like I did with APM Mustang 6 years ago)

New badge: DevConf.cz 2021 Attendee !

Posted by Fedora Badges on February 18, 2021 05:52 PM
DevConf.cz 2021 Attendee
You attended the 2021 iteration of DevConf.cz, a yearly open source conference in Czechia!

Creating XDG custom url scheme handler

Posted by Izhar Firdaus on February 18, 2021 02:49 PM

If you develop system tools or desktop software on Linux that also have an accompanying web application, you might want a way for the web application to launch the tool with some parameters specified through a web-based link. For example, a link with dnf://inkscape as its URL might be used to launch GNOME Software and display the description of Inkscape, so that the user may choose whether to install it.

In Linux, registering a custom URL handler can be done with an XDG desktop file configured to open the x-scheme-handler MimeType.

To achieve this, you can simply create a .desktop file in ~/.local/share/applications/ or /usr/local/share/applications/, and configure it with MimeType=x-scheme-handler/<your-custom-proto>. If nothing went wrong with the setup, you should be able to open links with the custom URL protocol afterwards.

For example, if you have a script dnfurl which takes dnf://<package-name> as its first parameter and launches GNOME Software with the package name, you can create a .desktop file with this content:

[Desktop Entry]
Version=1.0
Type=Application
Name=dnfurl
Exec=dnfurl %U
Terminal=false
NoDisplay=true
MimeType=x-scheme-handler/dnf

After installing the .desktop file, do run update-desktop-database to update the related indexes/cache. If you installed the .desktop file in ~/.local/share/applications/, you will have to run update-desktop-database ~/.local/share/applications.
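A handler script like dnfurl only needs to parse the URL and hand the package name to a launcher. A hypothetical Python sketch of that idea (the real dnfurl lives in the repository mentioned below; gnome-software's --search flag is one way to surface a package):

```python
import subprocess
import sys
from urllib.parse import urlparse

def package_from_url(url: str) -> str:
    """Extract the package name from a dnf:// URL, e.g. dnf://inkscape."""
    parsed = urlparse(url)
    if parsed.scheme != "dnf":
        raise ValueError(f"not a dnf:// URL: {url}")
    # For dnf://inkscape the name parses as the netloc; fall back to the path.
    return parsed.netloc or parsed.path.lstrip("/")

def main():
    package = package_from_url(sys.argv[1])
    # Hand off to GNOME Software to show/search for the package.
    subprocess.run(["gnome-software", f"--search={package}"])

if __name__ == "__main__":
    main()
```

The %U in the Exec= line above is what passes the clicked URL to the script as its first argument.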

If you are interested in the example dnfurl program above, you can check it out at this git repository: github.com/kagesenshi/dnfurl. Or if you are on Fedora, you can install it from Copr by running:

dnf copr enable izhar/dnfurl
dnf install dnfurl

Have fun~.

Free Software and Open Source: Get involved

Posted by Ingvar Hagelund on February 18, 2021 01:21 PM

Contributing to Free Software using Open Source methods may look like intimidating, deep expert work. But it doesn’t have to be. Most Free Software communities are friendly to newcomers, and welcome all kinds of contributions.

Reporting bugs

Hitting a bug is an opportunity, not a nasty problem. When you hit a bug, it should be reported, and with a bit of luck, it may even be fixed. Reporting the bug in an open forum also helps other users find it and give it attention, and they may in turn be able to help work around or fix it. Reporting bugs is the most basic, but still one of the most valuable, contributions you can make. Finding bugs means finding real problems. Reporting bugs helps fix them, for you and for other users. Don’t complain to your coworkers about a bug unless it has been reported upstream.

While reporting bugs, remember to collect as much information as possible on the issue, including logs, runtime environment, hardware, operating system version, etc. While collecting this information, make sure you don’t send any traceable private information that may be used by rogue parties, like IP addresses, hostnames, passwords, customer details, database names, etc.
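For example, a quick way to scrub IPv4 addresses from a log excerpt before attaching it to a report (a simple sed sketch; extend the patterns to whatever is sensitive in your environment, such as internal hostnames):

```shell
# Replace anything that looks like an IPv4 address with REDACTED
redact_ips() {
    sed -E 's/([0-9]{1,3}\.){3}[0-9]{1,3}/REDACTED/g'
}

# Pull the interesting lines from a log and scrub them before sharing
grep -i error /var/log/nginx/error.log 2>/dev/null | redact_ips > report-excerpt.txt
echo "upstream 192.168.1.5 timed out" | redact_ips   # prints: upstream REDACTED timed out
```

Always eyeball the scrubbed excerpt before attaching it; automated patterns will not catch every kind of private data.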

Bugs in operating system packages

Bugs in components delivered by a Linux distribution (Ubuntu, Debian, Fedora, Red Hat, SuSE, etc), should be reported through their bug reporting interface. Remember to search for the bug before posting yet another duplicate bug. Perhaps a workaround already exists.

So the next time something strange happens to your haproxy, nginx, varnish, or your firefox browser crashes or has unexpected behaviour, collect data from your logs, and open a bug report.

  • Red Hat / EPEL / Fedora users should report bugs through https://bugzilla.redhat.com/
  • Similarly, OpenSuSE users may search for and report bugs at https://bugzilla.opensuse.org
  • Ubuntu users may have luck looking at https://help.ubuntu.com/community/ReportingBugs
  • As Ubuntu’s upstream is Debian, you may search for bugs, fixes and workarounds using their tools at https://www.debian.org/Bugs/Reporting

These tools have detailed guidelines on how to search, report, and follow up on bugs.

For an example of an end user bug report with an impressive follow-up from a dedicated package maintainer, have a look at https://bugzilla.redhat.com/show_bug.cgi?id=1914917

Reporting upstream bugs

Using software directly from the upstream project is growing more usual, especially as container technology has matured, enabling developers to use software components without interfering with the underlying operating system. Reporting and following up on bugs becomes even more important, as such components may not be filtered and quality assured by operating system security teams.

Find your component’s upstream home page or project development page, usually on GitHub, Savannah, GitLab, or a similar code repo service. These services have specialised issue trackers made for reporting and following up on bugs and other issues. Some projects only have good old mailing lists. They may require you to subscribe to the list before you are allowed to report anything.

Following up on the report, you may be asked for test cases and debugging. You will learn a lot in the process. Do not be shy to ask for help, or to admit that you don’t understand or need guidance. Everybody started somewhere. Even you may learn to use the GNU debugger (gdb) in time.

Non code commits

Similarly to reporting bugs, non code commits may be low-hanging fruit to you, but may be crucial to a project’s success. If you can write technical documentation, howtos, or do translations to your native language, such contributions to Free Software are extremely welcome. Even trivial stuff like fixing typos in a translated piece of software should be reported. No fix is too small. I once did a single word commit to GPG: a single word typo fix in their Norwegian translation. Also, write blog posts. Don’t have a blog yet? Get one. Free blog platforms are a dime a dozen.

Use source code tools

Admit it: you already use git in your day job. Using it for documentation or translation should be trivial. If you have not done so already, learn how to clone a project on GitHub (just google it), grep through the source for what you would like to fix or add, make a branch with your contribution, and ask for a pull request (again, just google it). If your changes are not merged at once, be patient, ask for the maintainer’s advice, and listen to their guidelines. Be proud of your contribution, but humble in your request.
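That flow, condensed into commands (here a throwaway local repository stands in for the project; normally you would git clone it and git push the branch to your fork):

```shell
# Set up a repo to work in (normally: git clone <project-url> && cd <project>)
mkdir project && cd project && git init -q
git -c user.name=Me -c user.email=me@example.com commit -q --allow-empty -m "initial"

# Create a topic branch for your contribution and commit the fix
git checkout -q -b fix-doc-typo
echo "Fixed wording." > README.md
git add README.md
git -c user.name=Me -c user.email=me@example.com commit -q -m "docs: fix wording"

git branch --show-current   # prints: fix-doc-typo
# Then: git push <your-fork> fix-doc-typo and open a pull request on the web UI.
```

Keeping each contribution on its own small branch makes it easy for maintainers to review and merge.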

Feature requests

The usage of a piece of software is not given from the start. Perhaps you have ideas about how a piece of code may be used in some other way, or there is some piece missing that is obvious to you, though not reported in the project’s future roadmap. Don’t be shy to ask. Report a feature request. Usually this is done the same way as reporting a bug. The worst you can get is that they are not interested, or a request for you to produce the missing code. Which you may do.

Join a project

If your work requires it, and/or your interests and the free time you have to spend allow for it, join a Free Software project.

Distribution work

Distributions like Fedora, Debian, and openSUSE (not to mention Arch and Gentoo) are always looking for volunteers, and have sub-projects for packagers, documentation, translation, and even marketing. As long-time players in the field, they have great documentation for getting started. Remember to be patient, ask for advice, and follow guidelines. Be proud of your contributions, but humble in your requests.

Upstream projects

If you want to join a project, show your interest. Join the project’s social and technical forums. Subscribe to their development mailing lists. Join their IRC channels. Lurk for a while, absorbing the project’s social codes. Some projects are technocracies, and may seem hostile to newbie suggestions without code to back them up. Others are welcoming and supportive. Do some small work showing what you are capable of. Fix things in their wiki documentation. Create pull requests for simple fixes. Join in their discussions. Grow your fame. Stay humble. Listen to the long-time players.

    Release your own

    Made a cool script at work? A build recipe for some special case? An Ansible playbook automating some often-repeated task? A Puppet module? Ask your manager for permission to release it as Free Software. Put GPLv3 or some other OSS license on it, and put it on GitHub. Write a blog post about it. Share it on social media. Congratulations, you are now an open source project maintainer. Also, Google will find it, and so will other users.
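    A minimal sketch of that release recipe. The names here ("mytool", the placeholder license and script contents) are all made up; you would substitute your own script, the full GPLv3 text from gnu.org, and your real GitHub remote.

    ```shell
    set -e
    mkdir -p mytool
    # Stand-ins for your actual script and the full license text:
    printf '#!/bin/sh\necho "often-repeated task, automated"\n' > mytool/mytool.sh
    chmod +x mytool/mytool.sh
    printf 'GPL-3.0-or-later (replace with the full license text)\n' > mytool/LICENSE
    printf '# mytool\n\nA small helper released as Free Software.\n' > mytool/README.md
    git -C mytool init -q
    git -C mytool add .
    git -C mytool -c user.email=you@example.com -c user.name=You \
        commit -qm "Initial release of mytool under GPLv3"
    # Finally, push it to a public host (hypothetical remote URL):
    # git -C mytool remote add origin git@github.com:you/mytool.git
    # git -C mytool push -u origin HEAD
    ```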

  • Different OpenPGP DNS entries for the same email

    Posted by Miroslav Suchý on February 18, 2021 10:06 AM

    In a previous blog post, I wrote about how to generate an OpenPGP record for DNS (TYPE61). You may wonder what to do when you have several different GPG keys for the same email address. E.g., the EPEL GPG keys are:

    pub   rsa4096 2019-06-05 [SCE]
          94E279EB8D8F25B21810ADF121EA45AB2F86D6A1
    uid           Fedora EPEL (8) <epel@fedoraproject.org>
    pub   rsa4096 2013-12-16 [SCE]
          91E97D7C4A5E96F17F3E888F6A2FAEA2352C64E5
    uid           Fedora EPEL (7) <epel@fedoraproject.org>
    pub   rsa4096 2010-04-23 [SCE]
          8C3BE96AF2309184DA5C0DAE3B49DF2A0608B895
    uid           EPEL (6) <epel@fedoraproject.org>
    

    Three different GPG keys with the same email address. How should we put them in DNS?

    Multiple records for the same name are actually nothing unusual in DNS. E.g.:

    ;; ANSWER SECTION:
    seznam.cz.        274    IN    A    77.75.74.172
    seznam.cz.        274    IN    A    77.75.74.176
    seznam.cz.        274    IN    A    77.75.75.172
    seznam.cz.        274    IN    A    77.75.75.176 
    

    When you run the command suggested in the previous blog post, you will get:

    $ gpg2  --export-options export-dane --export epel@fedoraproject.org
    $ORIGIN _openpgpkey.fedoraproject.org.
    ; 94E279EB8D8F25B21810ADF121EA45AB2F86D6A1
    ; Fedora EPEL (8) <epel>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 1141 (
            99020d045cf7cefb011000c93882169651ae7719e9bc99e4c50cf60ada1623b8
            287559e8725add97cde4563a92429fb6760c6e1b99948800d47d81da450cf12b
            0f1e7ee427c31cd4f6467bd27802c6d99b4161a65267d24e189aa4ecf4d34d7c
            f9ea3930569b776bdd886a35cbee759b6b110e937ca9d09aa97928eb973232e2
            d7ae88c91c8baf440ff1ee2a8ead17f26bcc773b10b83c4825e698039cff2954
            ad252d89dc0c440237be83f6e6e16505a121217fcc923e7bcd3a57bd61cdfc8b
            fccb779909bf962fa544a536e54b24f5d59a7f8347ff06473083c0915278b83e
            07fee4b8f70a969f28936064fb8546e279c17b72b84b0a6951cef251f269113f
            aff84ff177b4d0cf5997833440d5154913147354ebf876f8edfdf0c358fdf3a1
            c8a68d2b79e713483a409d5d387df59571c0465a453ad5addd599953426758b7
            9876f6a9e047dff6a4d6649848ec2ab45a6f0380b5295e0365926aaebc4ecfd9
            6e4402bd84e40a8a4280db7f0fc6896751bc758a78d18c44b9c623742ea17dd4
            a409570fbbf5dd5759c4dc9dbe97b2a5b7ff01f472ac744a864523ddc535a589
            0cdc335913bec2e4951eebfd27c5bb7811351905380b8182463fc73b3db9159c
            ee88fca80bd63c3eecc5ba033f0ec7e45764bc8631599b5bf4cc73c85a25f28b
            328528dd3564f273e2521a0e0b4fe423b8fd8a516b071054e5d52d1782130167
            3cc671ba8c9b3f22cbd1a50011010001b4284665646f7261204550454c202838
            29203c6570656c406665646f726170726f6a6563742e6f72673e890238041301
            02002205025cf7cefb021b0f060b090807030206150802090a0b041602030102
            1e01021780000a091021ea45ab2f86d6a166a00ffe319cb5b0b37e0607254342
            48e9ae4d1f5fb2328699bf53b06c2072aca472cbf98e3abc000663ff6f32f744
            d1f72bf44936669ef31354569920b3cf35204c6e811684c6d47e05868ffb67c0
            572bf8f26e8c174997ea5e74ce7e14856a5c0377095226f7d8355f4c6f2bdbc4
            df2a651535cfb3c4599ac8cc50c34e815193447bd731e99cf4cd209e7f6174f0
            36a53d3b1208f213108f8b9f175d74e34009419edbaf7af37ab97204e07e25dd
            7348f56b00fa5e332269a5614748300858aeb1f9a9b5dd52089d7b6a07f0b3e0
            60472c52b9776fc862ca55c38ed2a258fc8e2144b6702aebed40ab9587de2078
            6c4d9eeb3d340d217dddfaf03b258ded698f22873642c35126181f242fa2b064
            00c30abcc406c71017aec8c1bc1998f3c68860d07e1b39212e8d52d38c7d716c
            01b80e78563bfd555cb5fe2d710bc9c3bc6504dc8e3ff80102aea38c2f30faef
            f40d7af681131be57042535def85413c60af867cf019735b4cafca1f5f96447d
            1849bfb8d5f941dcd2a131253a746234486b48c4348785554d9b1dbfe97893f5
            4ff39eb66c1b92b0f2721b6c5cfef4c19451680932c775665f51f20cfb57d7e9
            9fb2fbcb92e127c1ce08cb18fd9effec752cfadfc0d4c23883b43983952e4997
            608b4cf1ba4896b8290b7808803f8cbbcca2b83a327ae383437160876b7eb39f
            0566933de2c4bf10d1ab91d793aaab606347f7f2a2
            )
    
    $ORIGIN _openpgpkey.fedoraproject.org.
    ; 91E97D7C4A5E96F17F3E888F6A2FAEA2352C64E5
    ; Fedora EPEL (7) <epel>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 1141 (
            99020d0452ae6884011000b5529857c0ca8201aacf507fd9b0e16c95a6de4d53
            b6a439396273bde9ffab81907bc40ac139279093b07dd22a9227ce7f73bd8e02
            7e0e5d8bd3eb781f09e5e926ce4cede99790fb0d4165928eef7d956f80a92366
            8d85a199194b44697438eee02308fbefa7485ef70c34597348f8f4d0ddb102a8
            cc6e39675769f669b004e60aba569a8fbc55d5c9fad56bfb9a9688035667fa87
            6b7845da627eaf2a4c7b07154df1a42cfe4fafdd196286438d9941f4da2e70cc
            8d3a00b266340327af9086d7ea2655b731af6a76293dc596e17d110cf729a9f1
            4d664eb8df123896c67e63612bf58bb94bff31d25cbeb66988a24684d30c1b75
            4bbbe3461366309eb2ba185a2460e73b90db17cac49a15e44f8487eb58c060c9
            9fa1df5a14609ee751470bee278e73b5856d2ae94ac3c410a5dd924d6adc4100
            c96915a69cca285ff3c04d38c2044c41f5d933dfc0ebbaf93b2241ccb6c22a96
            400e40c76c5f57774e8bfa044d970e3206d331712ddb8919b57073feab21cf79
            4f4e798f7e84cf41eb8f17694c638e1f146a30ae66f5f9456dac71b439d2ef0e
            fb5aacd6ec78c5ac3d15793c76be78e31aa7211c44297c3a453b36fc2d316181
            889dc913547e5418df958b3b32cd57ea55dde437260d75505c1e95234ba9f41f
            b8544673ecdcc243631e1e723ef9bda1d4750487ea27ab0a18f19bb4f357a3b1
            11311ca95b2473d338b4b70011010001b4284665646f7261204550454c202837
            29203c6570656c406665646f726170726f6a6563742e6f72673e890238041301
            020022050252ae6884021b0f060b090807030206150802090a0b041602030102
            1e01021780000a09106a2faea2352c64e5c7c60ffe2ca6aa6c4e3b4333baa9ca
            d28b1caaee66dbee5a2aade517ef4fdc30f4651414c678659197e27517838f6a
            8b10cb6b2591f1d746fece64c8e8b70b4b97ffa8a7cda632460f8187bd1baea2
            942b5e05d41aeb3e2dbd79a29f1be1ae305b577411b66bb3ad3e6c37cae2a6ab
            d129bfa07498fe84e4a49dbd2452a895ad19c03ac114bfc03cd7244347a79ade
            f48b26dc2632769cb418b440e70a387cf079b8b75484b1140ef87628afff7cc8
            e36b7a342fe48dcfc3746836729f9031094f5107710771379c160a32e6e9918f
            df4166b79b47534efb8801ae2bc87ddbc3e1f24d7cd0475f08521fd00218236f
            1e75d58c7405f621d60292a44487a1f05af69f976d7c9105ed2117f066d7ec56
            ec5a8dbd91732b56036a1cdc839df0015683ee6e041db4964991564091a1eb32
            ef00cadc13eb643878daa6248d5a59b650348e63598a9ce912ae08aebf17df6a
            b61153f6024ed7bfb246e2fa52fdbfbe0cd547544677054b2b09548c63c08d87
            e5a4243058c0004b18aff6e3f260f1dc12b4ab6d11b4c2321f011defbacd4e48
            ec77c591d9a5715f9904ec0cfef355138bcd5f7c477270140c49e3fc6a78e378
            bc23c553a3ac4f795643cc3b8a5e68858f79c26ba37ebea593cf6413325d393c
            dc9fb0c16d4aeedf7090306299dc4bed463b02ec50db2f4d7bab14be65503f1c
            69327e2bb8729815f155276ea97978067ed4b97a0f
            )
    
    $ORIGIN _openpgpkey.fedoraproject.org.
    ; 8C3BE96AF2309184DA5C0DAE3B49DF2A0608B895
    ; EPEL (6) <epel>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 1132 (
            99020d044bd22942011000cb1a7523db8655296ee588537f240e4282dce53672
            aceec060edf55b356af2884dd445b9f5257beb7701ed90ac98f7afe27d9d4b77
            7944da0385eb56c6676096c0935e7bbe92a7d67e8ac3fc4505db1b98f08ffe01
            33d1caee9864b6a15527c55b6368df4e371fc51bc633c601c1717c871d020d95
            11023bdfd71452409ee2028e7ca9c1e75edc4e02f42601b47dcfa43f87f0fe63
            f3e1a08269d4e57854d1c26c2b3d33b1d4541a600b9dcf0dc4d442ec1c81e63a
            a50828f5e4f578b352655ac9e4172d8af97551c0fa3fa0881a491e4f680d86a8
            a77513e78145c65d6ed0ce879f7be7d542a2529bb4eadaaf68a95678e89f7ae3
            0682448b51c92a6e5b977164edf858ceb0b30d813f63250026c7e25de5baab69
            5e8dc1bdf9e4f870051f71dc32ec1d6ae971bfbcd829709f36849f82ba447e8d
            84c78226fc8dd676dc0c13f19b9f076add5273800c4b54585ccc31f03648e621
            d3c094d27dfe2c518e2a7876066d5bc55c042acda0bf8b843ac87e3be72dd46e
            bc11f2f4fdec085ec567271a1e53163fa4622ed1e710c19c6d9f10a501e2d9d4
            5e535c32afd42082e194fa3a925d86abd7211cb1d4466aca6934867cf906b06e
            6fdae1da13d744c4b4f02b852a079706efa930d88ed7d5f76942ff68b080eea0
            6201bd5eb3b857c755dfd15f9bba0b15fbd36e0e66bf843f13a016a3f8248e6b
            7191cba56f202f8c5b58010011010001b4214550454c20283629203c6570656c
            406665646f726170726f6a6563742e6f72673e89023604130102002005024bd2
            2942021b0f060b090807030204150208030416020301021e01021780000a0910
            3b49df2a0608b8951fc60ffc0b18fbfda8edfd7b26fd365af07ca754128d6d1f
            129dae1373f9762b3b8c950d305944f4aebcbdb26a879222140e2a134f7c4813
            f6676c6ec81c7e6a07c66195727fa56e1796b12bdc82b5eecf480bb7c4c618a1
            8644e0282d7a6e52c6f51cdb4a10ec0f438cf5b90da73b3e612d1c83395d08d8
            bc1857c0631888c1294305114c454ec2fb5ce664ac083f59b4c7d8f5b786b7d2
            5ad71103b118a2723707cd1ddfd7dfc2ced29b229b4b93e6663a8e3ee4ef5761
            74b4c84a2219dc070a288d664f1317ed1e92a1cca561f1f7433bfdcfd72d4593
            70a29fc51b08ef4c58c14d476edf57308036510f963b0703edc4cf817bb9ba05
            a7438e128bf86328b80446e0a999ed8531b8b9bf67ca2ba9219ddc90f1ebde75
            5c0d419c2fa9500ff9e498d54878b60fc75b873ce4a559fba88c0620cd11fa2c
            bec8760e051c7040d2da3b156f1d4171483b236224500e4f68fc3f9fad55dabc
            c01d0f0d21fcc91a67e07749f5162ab9abb83a48e5bb13967f4b91692fff4947
            02661fb5fca0443a672f047611640e4997f9da90283119bda96903b3aaaa4e70
            72af9722eeccee6ec9b3176a2a4b1314c62570655eb6db0127d76bfc86612006
            895ac6cfd084ebaefa74c966f05facdd75c4d77419ad79a396873a8f03a58e77
            c09234c65e3881ece50b11279df7a3115ed7cb9345b901bf6977bb4a0d1e901c
            8eeb8075df0a8a519d9d9555
            )
    

    I.e., three separate records for the same owner name 1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec.
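    As an aside, that owner name is not random: per RFC 7929 it is the SHA2-256 hash of the local-part of the email address ("epel" here), truncated to 28 octets and written as 56 hex characters. Assuming standard coreutils, you can reproduce it:

    ```shell
    # RFC 7929 owner name: first 28 octets (56 hex chars) of SHA2-256(local-part)
    printf '%s' epel | sha256sum | cut -c1-56
    ```

    The output should match the 1a355c3f... label on the records above.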

    RFC 7929 does not explicitly say how to handle this situation. The only relevant part is Appendix A on page 18. If you follow it, you may get a slightly different result:

    $ gpg2  --export-options export-minimal,no-export-attributes --export 'epel@fedoraproject.org' | wc -c
    3414
    #^^^ this is the value at the end of the first line (number of octets)
    
    $ gpg2  --export-options export-minimal,no-export-attributes --export 'epel@fedoraproject.org' \
        |hexdump -e '"\t" /1 "%.2x"' -e '/32 "\n"' 
    # this will give you the value of the key
    

    When you concatenate the two previous results, you get:

    ; 8C3BE96AF2309184DA5C0DAE3B49DF2A0608B895
    ; EPEL (6) <epel@fedoraproject.org>
    ; 91E97D7C4A5E96F17F3E888F6A2FAEA2352C64E5
    ; Fedora EPEL (7) <epel@fedoraproject.org>
    ; 94E279EB8D8F25B21810ADF121EA45AB2F86D6A1
    ; Fedora EPEL (8) <epel@fedoraproject.org>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 3414 (
            99020d045cf7cefb011000c93882169651ae7719e9bc99e4c50cf60ada1623b8
            287559e8725add97cde4563a92429fb6760c6e1b99948800d47d81da450cf12b
            0f1e7ee427c31cd4f6467bd27802c6d99b4161a65267d24e189aa4ecf4d34d7c
            f9ea3930569b776bdd886a35cbee759b6b110e937ca9d09aa97928eb973232e2
            d7ae88c91c8baf440ff1ee2a8ead17f26bcc773b10b83c4825e698039cff2954
            ad252d89dc0c440237be83f6e6e16505a121217fcc923e7bcd3a57bd61cdfc8b
            fccb779909bf962fa544a536e54b24f5d59a7f8347ff06473083c0915278b83e
            07fee4b8f70a969f28936064fb8546e279c17b72b84b0a6951cef251f269113f
            aff84ff177b4d0cf5997833440d5154913147354ebf876f8edfdf0c358fdf3a1
            c8a68d2b79e713483a409d5d387df59571c0465a453ad5addd599953426758b7
            9876f6a9e047dff6a4d6649848ec2ab45a6f0380b5295e0365926aaebc4ecfd9
            6e4402bd84e40a8a4280db7f0fc6896751bc758a78d18c44b9c623742ea17dd4
            a409570fbbf5dd5759c4dc9dbe97b2a5b7ff01f472ac744a864523ddc535a589
            0cdc335913bec2e4951eebfd27c5bb7811351905380b8182463fc73b3db9159c
            ee88fca80bd63c3eecc5ba033f0ec7e45764bc8631599b5bf4cc73c85a25f28b
            328528dd3564f273e2521a0e0b4fe423b8fd8a516b071054e5d52d1782130167
            3cc671ba8c9b3f22cbd1a50011010001b4284665646f7261204550454c202838
            29203c6570656c406665646f726170726f6a6563742e6f72673e890238041301
            02002205025cf7cefb021b0f060b090807030206150802090a0b041602030102
            1e01021780000a091021ea45ab2f86d6a166a00ffe319cb5b0b37e0607254342
            48e9ae4d1f5fb2328699bf53b06c2072aca472cbf98e3abc000663ff6f32f744
            d1f72bf44936669ef31354569920b3cf35204c6e811684c6d47e05868ffb67c0
            572bf8f26e8c174997ea5e74ce7e14856a5c0377095226f7d8355f4c6f2bdbc4
            df2a651535cfb3c4599ac8cc50c34e815193447bd731e99cf4cd209e7f6174f0
            36a53d3b1208f213108f8b9f175d74e34009419edbaf7af37ab97204e07e25dd
            7348f56b00fa5e332269a5614748300858aeb1f9a9b5dd52089d7b6a07f0b3e0
            60472c52b9776fc862ca55c38ed2a258fc8e2144b6702aebed40ab9587de2078
            6c4d9eeb3d340d217dddfaf03b258ded698f22873642c35126181f242fa2b064
            00c30abcc406c71017aec8c1bc1998f3c68860d07e1b39212e8d52d38c7d716c
            01b80e78563bfd555cb5fe2d710bc9c3bc6504dc8e3ff80102aea38c2f30faef
            f40d7af681131be57042535def85413c60af867cf019735b4cafca1f5f96447d
            1849bfb8d5f941dcd2a131253a746234486b48c4348785554d9b1dbfe97893f5
            4ff39eb66c1b92b0f2721b6c5cfef4c19451680932c775665f51f20cfb57d7e9
            9fb2fbcb92e127c1ce08cb18fd9effec752cfadfc0d4c23883b43983952e4997
            608b4cf1ba4896b8290b7808803f8cbbcca2b83a327ae383437160876b7eb39f
            0566933de2c4bf10d1ab91d793aaab606347f7f2a299020d0452ae6884011000
            b5529857c0ca8201aacf507fd9b0e16c95a6de4d53b6a439396273bde9ffab81
            907bc40ac139279093b07dd22a9227ce7f73bd8e027e0e5d8bd3eb781f09e5e9
            26ce4cede99790fb0d4165928eef7d956f80a923668d85a199194b44697438ee
            e02308fbefa7485ef70c34597348f8f4d0ddb102a8cc6e39675769f669b004e6
            0aba569a8fbc55d5c9fad56bfb9a9688035667fa876b7845da627eaf2a4c7b07
            154df1a42cfe4fafdd196286438d9941f4da2e70cc8d3a00b266340327af9086
            d7ea2655b731af6a76293dc596e17d110cf729a9f14d664eb8df123896c67e63
            612bf58bb94bff31d25cbeb66988a24684d30c1b754bbbe3461366309eb2ba18
            5a2460e73b90db17cac49a15e44f8487eb58c060c99fa1df5a14609ee751470b
            ee278e73b5856d2ae94ac3c410a5dd924d6adc4100c96915a69cca285ff3c04d
            38c2044c41f5d933dfc0ebbaf93b2241ccb6c22a96400e40c76c5f57774e8bfa
            044d970e3206d331712ddb8919b57073feab21cf794f4e798f7e84cf41eb8f17
            694c638e1f146a30ae66f5f9456dac71b439d2ef0efb5aacd6ec78c5ac3d1579
            3c76be78e31aa7211c44297c3a453b36fc2d316181889dc913547e5418df958b
            3b32cd57ea55dde437260d75505c1e95234ba9f41fb8544673ecdcc243631e1e
            723ef9bda1d4750487ea27ab0a18f19bb4f357a3b111311ca95b2473d338b4b7
            0011010001b4284665646f7261204550454c20283729203c6570656c40666564
            6f726170726f6a6563742e6f72673e890238041301020022050252ae6884021b
            0f060b090807030206150802090a0b0416020301021e01021780000a09106a2f
            aea2352c64e5c7c60ffe2ca6aa6c4e3b4333baa9cad28b1caaee66dbee5a2aad
            e517ef4fdc30f4651414c678659197e27517838f6a8b10cb6b2591f1d746fece
            64c8e8b70b4b97ffa8a7cda632460f8187bd1baea2942b5e05d41aeb3e2dbd79
            a29f1be1ae305b577411b66bb3ad3e6c37cae2a6abd129bfa07498fe84e4a49d
            bd2452a895ad19c03ac114bfc03cd7244347a79adef48b26dc2632769cb418b4
            40e70a387cf079b8b75484b1140ef87628afff7cc8e36b7a342fe48dcfc37468
            36729f9031094f5107710771379c160a32e6e9918fdf4166b79b47534efb8801
            ae2bc87ddbc3e1f24d7cd0475f08521fd00218236f1e75d58c7405f621d60292
            a44487a1f05af69f976d7c9105ed2117f066d7ec56ec5a8dbd91732b56036a1c
            dc839df0015683ee6e041db4964991564091a1eb32ef00cadc13eb643878daa6
            248d5a59b650348e63598a9ce912ae08aebf17df6ab61153f6024ed7bfb246e2
            fa52fdbfbe0cd547544677054b2b09548c63c08d87e5a4243058c0004b18aff6
            e3f260f1dc12b4ab6d11b4c2321f011defbacd4e48ec77c591d9a5715f9904ec
            0cfef355138bcd5f7c477270140c49e3fc6a78e378bc23c553a3ac4f795643cc
            3b8a5e68858f79c26ba37ebea593cf6413325d393cdc9fb0c16d4aeedf709030
            6299dc4bed463b02ec50db2f4d7bab14be65503f1c69327e2bb8729815f15527
            6ea97978067ed4b97a0f99020d044bd22942011000cb1a7523db8655296ee588
            537f240e4282dce53672aceec060edf55b356af2884dd445b9f5257beb7701ed
            90ac98f7afe27d9d4b777944da0385eb56c6676096c0935e7bbe92a7d67e8ac3
            fc4505db1b98f08ffe0133d1caee9864b6a15527c55b6368df4e371fc51bc633
            c601c1717c871d020d9511023bdfd71452409ee2028e7ca9c1e75edc4e02f426
            01b47dcfa43f87f0fe63f3e1a08269d4e57854d1c26c2b3d33b1d4541a600b9d
            cf0dc4d442ec1c81e63aa50828f5e4f578b352655ac9e4172d8af97551c0fa3f
            a0881a491e4f680d86a8a77513e78145c65d6ed0ce879f7be7d542a2529bb4ea
            daaf68a95678e89f7ae30682448b51c92a6e5b977164edf858ceb0b30d813f63
            250026c7e25de5baab695e8dc1bdf9e4f870051f71dc32ec1d6ae971bfbcd829
            709f36849f82ba447e8d84c78226fc8dd676dc0c13f19b9f076add5273800c4b
            54585ccc31f03648e621d3c094d27dfe2c518e2a7876066d5bc55c042acda0bf
            8b843ac87e3be72dd46ebc11f2f4fdec085ec567271a1e53163fa4622ed1e710
            c19c6d9f10a501e2d9d45e535c32afd42082e194fa3a925d86abd7211cb1d446
            6aca6934867cf906b06e6fdae1da13d744c4b4f02b852a079706efa930d88ed7
            d5f76942ff68b080eea06201bd5eb3b857c755dfd15f9bba0b15fbd36e0e66bf
            843f13a016a3f8248e6b7191cba56f202f8c5b58010011010001b4214550454c
            20283629203c6570656c406665646f726170726f6a6563742e6f72673e890236
            04130102002005024bd22942021b0f060b090807030204150208030416020301
            021e01021780000a09103b49df2a0608b8951fc60ffc0b18fbfda8edfd7b26fd
            365af07ca754128d6d1f129dae1373f9762b3b8c950d305944f4aebcbdb26a87
            9222140e2a134f7c4813f6676c6ec81c7e6a07c66195727fa56e1796b12bdc82
            b5eecf480bb7c4c618a18644e0282d7a6e52c6f51cdb4a10ec0f438cf5b90da7
            3b3e612d1c83395d08d8bc1857c0631888c1294305114c454ec2fb5ce664ac08
            3f59b4c7d8f5b786b7d25ad71103b118a2723707cd1ddfd7dfc2ced29b229b4b
            93e6663a8e3ee4ef576174b4c84a2219dc070a288d664f1317ed1e92a1cca561
            f1f7433bfdcfd72d459370a29fc51b08ef4c58c14d476edf57308036510f963b
            0703edc4cf817bb9ba05a7438e128bf86328b80446e0a999ed8531b8b9bf67ca
            2ba9219ddc90f1ebde755c0d419c2fa9500ff9e498d54878b60fc75b873ce4a5
            59fba88c0620cd11fa2cbec8760e051c7040d2da3b156f1d4171483b23622450
            0e4f68fc3f9fad55dabcc01d0f0d21fcc91a67e07749f5162ab9abb83a48e5bb
            13967f4b91692fff494702661fb5fca0443a672f047611640e4997f9da902831
            19bda96903b3aaaa4e7072af9722eeccee6ec9b3176a2a4b1314c62570655eb6
            db0127d76bfc86612006895ac6cfd084ebaefa74c966f05facdd75c4d77419ad
            79a396873a8f03a58e77c09234c65e3881ece50b11279df7a3115ed7cb9345b9
            01bf6977bb4a0d1e901c8eeb8075df0a8a519d9d9555
            )
    

    This is also a valid result: a single record whose RDATA is the concatenation of all three keys. I contacted the author of RFC 7929, Paul Wouters, to clarify this, and he suggested that the first option — multiple separate records — is preferred. But when working on an implementation, you should keep in mind that the second form is possible as well.