Fedora People

ZimbraLogHostname is not configured - error

Posted by Luis Bazan on February 25, 2021 06:50 PM

[root@mail ~]# cat /etc/centos-release

CentOS Stream release 8

[zimbra@mail ~]$ zmcontrol -v

Release 8.8.15_GA_3953.RHEL8_64_20200629025823 UNKNOWN_64 FOSS edition, Patch 8.8.15_P19.

Log in as the zimbra user

[root@mail ~]# su - zimbra

[zimbra@mail ~]$

Now run this command to set the hostname in the log configuration.

Remember to replace the hostname with your own.

[zimbra@mail ~]$ zmprov mcf zimbraLogHostname mail.ibtechpa.com

[zimbra@mail ~]$ exit

logout

Switch back to the root user.

[root@mail ~]#

Update the log configuration with this command.

[root@mail ~]# /opt/zimbra/libexec/zmsyslogsetup

updateSyslogNG: Updating /etc/syslog-ng/syslog-ng.conf...done.

Last step: restart the Zimbra services.
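A typical way to do this is with the standard zmcontrol tool, again as the zimbra user:

[root@mail ~]# su - zimbra

[zimbra@mail ~]$ zmcontrol restart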



With this done, you can see all your statistics and logs in the administrator GUI.





RUNNING WILDFLY ON PODMAN

Posted by Daniel Lara on February 25, 2021 11:06 AM

First, let's pull the image:


$ podman pull jboss/wildfly


Now let's start our container:


$ podman run -d --name wildfly -p 8080:8080 -p 9990:9990 jboss/wildfly




Now let's access our container and, if you want, create a user for the administration console:

$ podman exec -it wildfly /bin/bash




Let's create a user:

$ /opt/jboss/wildfly/bin/add-user.sh



Just access it in a browser: http://<ip or hostname>:8080



Or access the admin console: http://<ip or hostname>:9990
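To deploy an application, one simple option (not covered in this post; the WAR name below is just an example, and the path is the default deployment directory in the jboss/wildfly image) is to copy it into the container's deployment scanner directory, where WildFly picks it up automatically:

$ podman cp myapp.war wildfly:/opt/jboss/wildfly/standalone/deployments/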



Reference guide:











QElectroTech version 0.80

Posted by Remi Collet on February 25, 2021 10:12 AM

RPMs of QElectroTech version 0.80, an application to design electric diagrams, are available in the remi repository for Fedora and Enterprise Linux ≥ 8.

A bit more than 1 year after the version 0.70 release, the project has just released a new major version of its electric diagram editor.

Official web site: http://qelectrotech.org/ and version announcement.

Installation by YUM:

yum --enablerepo=remi install qelectrotech

RPMs (version 0.80-1) are available for Fedora ≥ 32 and Enterprise Linux 8 (RHEL, CentOS, ...).

Updates are also on their way to the official repositories.

Notice: a Copr / Qelectrotech repository also exists, which provides "development" versions (0.90-DEV for now).

A sneak peek at Fedora Zine

Posted by Fedora Community Blog on February 25, 2021 08:00 AM

So my Outreachy internship is winding to a close, as is the creation of the first-ever edition of our very own Fedora Zine!

It has been a crazy journey so far and I have thoroughly enjoyed working on this awesome project, especially getting to see and work with all of these great submissions from the community. I have learned so much: how to balance my design visually, how to pair fonts and use other typographic effects, how to use guides for a perfectly aligned design, and also that you should read your printing specs very, very carefully before getting to work on a project☺. A huge thank you goes to my mentor, Marie Nordin, who has been incredibly helpful in guiding me through this whole process!

Now is the time for me to give you all a sneak-peek at what we have been working on for the past two and a half months.

Pages

In the gallery below you can see a small sample of the different pages of the zine:

  • I started off by re-working all the awesome Fedora team infographics that Smera Goel created in the previous round of Outreachy internships. I adjusted the designs to fit the new page size/template.
  • A call for contributions seemed like an obvious thing to do and I wanted to make sure it was unique and eye-catching.
  • I remixed a graphic art submission to highlight the four foundations of Fedora.
  • Fedora’s vision & mission statements are an important base for Fedora. I created a spread for each, paired with cool photo submissions.
  • I overlaid photo and traditional art submissions with other text and graphic elements.
  • And of course, a credits page to acknowledge all the wonderful contributions from the community in this edition of the zine!

And a quick flip through a mock-up of the zine:

<figure class="wp-block-video"><video controls="controls" src="https://communityblog.fedoraproject.org/wp-content/uploads/2021/02/zine-mock-up-flip-through-1.mp4"></video></figure>

Submissions for this edition of the zine are now closed, but you can still contribute! Follow the steps here to submit your art – and it may be featured in future editions of the zine!

The post A sneak peek at Fedora Zine appeared first on Fedora Community Blog.

GITLAB ON PODMAN

Posted by Daniel Lara on February 24, 2021 12:41 PM

Create the "gitlab" directory in your home:


$ mkdir gitlab

Now, inside "gitlab", let's create 3 directories: "logs", "data" and "config":

$ mkdir gitlab/config
$ mkdir gitlab/logs
$ mkdir gitlab/data

Now let's run GitLab on Podman:

$ sudo podman run --privileged -dit \
            --name gitlab \
            -p80:80 \
            -p443:443 \
            -p22022:22 \
            -v ${PWD}/gitlab/config:/etc/gitlab \
            -v ${PWD}/gitlab/logs/:/var/log/gitlab \
            -v ${PWD}/gitlab/data/:/var/opt/gitlab \
            gitlab/gitlab-ce:latest


Done, it's already running:
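GitLab takes a few minutes to initialize on first start; you can check the container and follow its progress with (commands not in the original post):

$ sudo podman ps
$ sudo podman logs -f gitlab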

Now just access GitLab:




Syslog-ng on BSDs

Posted by Peter Czanik on February 24, 2021 11:33 AM

My FOSDEM presentation in the BSD devroom showcased what is new in sudo and syslog-ng and explained how to install or compile this software yourself on FreeBSD. Not only am I a long-time FreeBSD user (I started with version 1.0 in 1994), I also work on keeping the syslog-ng port in FreeBSD up to date. But soon after my presentation, I was asked what I knew about other BSDs. And, while I knew that all BSDs have syslog-ng in their ports systems, I realized I had no idea about the shape of those ports.

For this article I installed OpenBSD, DragonFlyBSD and NetBSD to check syslog-ng on them. Admittedly, the ports are not in the best shape: they contain old versions, and some do not even start or are unable to collect local log messages.

OpenBSD

OpenBSD ports have version 3.12 of syslog-ng. Some Linux distributions ship an even earlier version of syslog-ng and they work just fine. Unfortunately, that is not the case here: logging in OpenBSD changed, which means that local log messages cannot be collected by syslog-ng 3.12. Support for collecting local log messages was added in a later syslog-ng version: https://github.com/syslog-ng/syslog-ng/pull/1875

Installation of this ancient syslog-ng version is really easy, just use pkg_add:

openbsd68# pkg_add syslog-ng
quirks-3.441 signed on 2021-02-13T20:25:37Z
syslog-ng-3.12.1p7: ok
The following new rcscripts were installed: /etc/rc.d/syslog_ng
See rcctl(8) for details.

Collecting log messages over the network works perfectly, so as a workaround, you might want to keep using syslogd from the base system as well while forwarding log messages to syslog-ng using the network.
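A minimal sketch of that workaround (the hostname is a placeholder): have the base syslogd forward everything in /etc/syslog.conf on the OpenBSD host, and define a matching network source on the syslog-ng collector:

*.*     @loghost

source s_net { network(transport("udp") port(514)); };
destination d_net { file("/var/log/fromnet"); };
log { source(s_net); destination(d_net); };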

DragonFlyBSD

Once upon a time, DragonFlyBSD was forked from FreeBSD. While they took a different route from FreeBSD, they also stayed close to the original. DragonFlyBSD ports build on FreeBSD ports, even though there are some additional applications and other smaller differences. This means that syslog-ng is up to date in DragonFlyBSD ports, which in this case means version 3.29. Installation is easy, using the same command as on FreeBSD:

pkg install syslog-ng

Problems start when you actually try to start syslog-ng:

dragon# /usr/local/etc/rc.d/syslog-ng forcestart
Starting syslog_ng.
[2021-02-17T08:59:13.598727] system(): Error detecting platform, unable to define the system() source. Please send your system information to the developers!; sysname='DragonFly', release='5.8-RELEASE'
Error parsing config, syntax error, unexpected LL_ERROR, expecting '}' in /usr/local/etc/syslog-ng.conf:19:14-19:20:
14      options { chain_hostnames(off); flush_lines(0); threaded(yes); };
15      
16      #
17      # sources
18      #
19----> source src { system();
19---->              ^^^^^^
20      	     udp(); internal(); };
21      
22      #
23      # destinations
24      #


syslog-ng documentation: https://www.syslog-ng.com/technical-documents/list/syslog-ng-open-source-edition
contact: https://lists.balabit.hu/mailman/listinfo/syslog-ng

While system() source works on FreeBSD, where this configuration was prepared, it does not work on DragonFlyBSD. You need to edit /usr/local/etc/syslog-ng.conf and replace system() source with the following lines:

     unix-dgram("/var/run/log");
     unix-dgram("/var/run/logpriv" perm(0600));
     file("/dev/klog" follow-freq(0) program-override("kernel"));

This is based on the earlier FreeBSD configuration and seems to work. I have filed an issue at the syslog-ng GitHub repo, so in a future release it might work automatically.

I also tried to build syslog-ng from ports myself, but right now it is broken. The sysutils/syslog-ng port is still a metaport referring to another port, but that version has already been deleted. The syslog-ng port was reorganized recently, and it seems like not everything was followed up on the DragonFlyBSD side perfectly.

NetBSD

NetBSD also has a quite ancient version of syslog-ng: 3.17.2. Installation of the package is easy, just:

pkgin install syslog-ng

Syslog-ng works and can collect local log messages out of the box as well, with a catch. NetBSD seems to have switched to the RFC5424 syslog format, just as FreeBSD 12.0 did, so local log messages collected by syslog-ng’s system() source look kind of funny:

Feb 17 12:43:07 localhost 1 2021-02-17T12:43:07.935565+01:00 localhost sshd 2160 - - Server listening on :: port 22.
Feb 17 12:43:07 localhost 1 2021-02-17T12:43:07.936064+01:00 localhost sshd 2160 - - Server listening on 0.0.0.0 port 22.

Also, the system() source seems to miss kernel logging. To fix this, open syslog-ng.conf in your favorite text editor, remove the system() source and add these two lines instead:

        unix-dgram("/var/run/log" flags(syslog-protocol));
        file("/dev/klog" flags(kernel) program_override("kernel"));

This makes sure that local logs are parsed correctly and that kernel messages are collected by syslog-ng as well.

What is next

In this blog I identified many problems related to syslog-ng in various BSD ports systems. I also provided some workarounds, but of course these are not real solutions. I cannot promise anything, as I am not an active user or developer of any of these BSD systems and I am also short on time. However, I’m planning to fix as many of these problems as I can, on a best-effort basis, as time allows.

 

If you have any questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

POWER9, ARM64 and 64k page sizes

Posted by Daniel Pocock on February 23, 2021 10:40 PM
IBM POWER9

I've recently had discussions with other developers in the Fedora world about the default 64k page size on POWER9. The vast majority of GNU/Linux users have a 4k page size. There is now a change proposal for the ppc64le page size on Fedora 35 and a related discussion on the devel mailing list.

Why and when the non-x86 architectures are relevant

With each new generation of x86 processors from Intel and AMD there is a larger quantity of opaque microcode that independent developers are unable to audit or fix.

When a vendor has such an incredible market share, it is inevitable that problems like this will arise.

Investing some time and effort on alternative architectures is good insurance: when the day comes that this microcode is compromised, some percentage of users will jump ship.

Getting started on POWER9 and ARM64 is easier than ever

For POWER9, please see my recent blog about the Talos II Quickstart.

Similar blogs have recently appeared about ARM64, for example, this Fedora Magazine article about SolidRun HoneyComb LX2K.

Choosing between POWER9 and ARM64

ARM64 may be a better choice if you are very sensitive to the heat emissions and energy consumption costs.

POWER9 may be a better choice if you need a lot of compute power.

While neither ARM64 nor POWER9 have microcode comparable to Intel or AMD products, it is still important to look at the overall system. For example, Raptor's Talos II motherboard has the FSF Respects Your Freedom (RYF) certification but if you order the optional SATA controller, you end up with some proprietary firmware.

Why 64k page sizes are an issue

The Linux kernel for these platforms can be compiled with either a 4k or 64k page size. The distribution chooses which of these options to select, and the kernel it builds is included on the distribution's installation disk.

One acute consequence of this is the relationship between the Btrfs sectorsize and the kernel page size. A Btrfs filesystem can currently only be mounted on a system whose page size matches the sectorsize the filesystem was created with. The Btrfs driver is being improved to remove this restriction, but for users of Fedora 34 and older systems, this is a very inconvenient issue: if you need to move Btrfs filesystems between systems with different page sizes, they simply won't work.
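If you want to check what you have, the kernel's page size and an existing filesystem's sectorsize are both easy to query (the device name below is just an example):

$ getconf PAGESIZE
$ sudo btrfs inspect-internal dump-super /dev/sda2 | grep sectorsize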

It appears that nobody tests the kernel and amdgpu drivers on these non-standard page sizes before each official release. Consequently, if there is a problem, it is only discovered by users after the upstream release. This means that users on these platforms are always a step behind users on other platforms.

Improving support for 64k page sizes

Personally, I'm quite keen to see the 64k page size succeed.

I believe that is only possible when these platforms have critical mass and when some of the upstream developers use these machines on a daily basis.

Until we get to that point, I feel it is a chicken-and-egg problem: things don't work, so people buy in more slowly, so there are fewer people to report bugs and/or fix things.

Automated CI testing of the kernel and amdgpu code may also help to catch some issues before official releases.

To summarize, I have an open mind about how to go about this. Please feel free to share your ideas and experiences in the discussion or through your blog.

Workarounds

Anybody who buys one of these machines today can still use it almost immediately.

One option is to use a distribution with a 4k page size or compile your own kernel with a 4k page size.

Another idea is to simply avoid using Btrfs for another six months: if you use the installation system of your preferred distribution to create filesystems, check that it isn't using Btrfs. You may need to manually override it to use ext4 for the moment.

If you don't need a modern GPU, some of the previous generation, such as the Radeon RX 580, seem to be working fine on any page size. These cards are available with up to 8GB VRAM. The performance of those cards is more than adequate for many workstation users.

The page size issue should not deter anybody from buying the hardware today. The community is always here to help.

How-to: Writing a C shared library in rust

Posted by Tony Asleson on February 23, 2021 03:59 PM
The ability to write a C shared library in rust has been around for some time and there is quite a bit of information about the subject available. Some examples:

  • Exposing C and Rust APIs: some thoughts from librsvg
  • Creating C/C++ APIs in Rust
  • Rust Out Your C by Carol (Nichols || Goulding) (youtube video)
  • Exporting a GObject C API from Rust code and using it from C, Python, JavaScript and others
  • Rust Once, Run Everywhere

All this information is great, but what I was looking for was a simple step-by-step example which also discussed memory handling and didn’t delve into the use of GObjects.

Concrete Blonde – “Happy Birthday” (song of the day)

Posted by Fedora Cloud Working Group on February 23, 2021 01:32 PM

Concrete Blonde had a lot of standout tracks on Free, but this one is timeless. “Happy Birthday” is a great song any day of the year, but...

The post Concrete Blonde – “Happy Birthday” (song of the day) appeared first on Dissociated Press.

New “How Do You Fedora” video series to interview members of the community

Posted by Fedora Community Blog on February 23, 2021 08:00 AM

A common answer to the question “What’s your favorite part about Fedora?” is often “the community”. Well, what’s so special about it?

The Fedora community shares the common values of the “Four Foundations”: Freedom, Friends, Features and First. Beyond that, although there are many great minds, not all of them think alike! Everyone contributes different approaches to problems, interesting ideas, and diverse perspectives. There is a place in Fedora for anyone who wants to help. 

That’s why we’re launching a new video series on the Fedora YouTube channel profiling some of Fedora’s various contributors and how they use Fedora. The goal is to get to know some community members better, especially at a time when in-person community events might not be practical.

But we already have “HDYF” articles!

Longtime community members may be aware of the “How Do You Fedora” series on Fedora Magazine. This new series is intended to be a continuation of the written articles in a new format, not a replacement. Instead, it serves as an option for interviewees who feel their personality might shine better in this format, who prefer to “show, not tell”, or who would simply like to participate in a video interview instead of a written one.

What Can You Expect?

Videos will be posted on the Fedora YouTube channel. Watch interviewees answer your burning questions about their Fedora habits, show off their setups, and more! Hopefully you’ll leave each video knowing something new about a fellow contributor.

Our first conversation will be with Marie Nordin, Fedora Community Action and Impact Coordinator. Next we plan on interviewing Matthew Miller, Fedora Project Leader.

I have some input!

We’d love your feedback and ideas about what you’d like to see included in the series, so send an email to Gabbie (gchang@redhat.com) if you’d like to get involved. Contact us on the feedback form to express your interest in becoming an interviewee for a written or video interview.

Happy watching! 

The post New “How Do You Fedora” video series to interview members of the community appeared first on Fedora Community Blog.

AlmaLinux 8.3 RC1 released

Posted by Fedora fans on February 23, 2021 06:30 AM

Following the beta release of AlmaLinux, the distribution's development team has now announced its first Release Candidate (RC).

AlmaLinux is a Linux distribution created and developed by the CloudLinux team together with the user community. AlmaLinux is an enterprise-class distribution and aims to be a replacement for CentOS.

AlmaLinux 8.3 RC1 is the first release candidate and is now available for download. You can use the link below and, based on your geographic location, pick the closest mirror to download from:

https://mirrors.almalinux.org

For more information about AlmaLinux 8.3 RC1, you can read its release notes:

https://almalinux.org/blog/almalinux-8-3-rc-release-notes

The post AlmaLinux 8.3 RC1 released first appeared on Fedora Fans.

Future of libsoup

Posted by Patrick Griffis on February 23, 2021 05:00 AM

The libsoup library implements HTTP for the GNOME platform and is used by a wide range of projects including any web browser using WebKitGTK. This past year we at Igalia have been working on a new release to modernize the project and I’d like to share some details and get some feedback from the community.

History

Before getting into what is changing I want to briefly give backstory to why these changes are happening.

The library has a long history, which I won’t cover in full: it was created in 2000 by Helix Code, which became Ximian, where it was used in projects such as the Evolution email client.

While it has been maintained to some degree for all of this time, it hasn’t had a lot of momentum behind it. The library has maintained its ABI for 13 years at this point, with ad-hoc feature additions and fixes often added on top. This has resulted in a library that has multiple APIs to accomplish the same task, confusing APIs that don’t follow any convention common within GNOME, and at times odd default behaviors that couldn’t be changed.

What’s Coming

We are finally breaking ABI and making a new libsoup 3.0 release. The goal is to make it a smaller, simpler, and more focused library.

Making the library smaller meant deleting a lot of duplicated and deprecated APIs, removing rarely used features, leveraging additions made to GLib over the past decades, and general code cleanup. As of today, the current codebase is roughly 45,000 lines of C code, compared to 57,000 lines in the last release, with over 20% of the project deleted.

Along with reducing the size of the library I wanted to improve the quality of the codebase. We now have improved CI which deploys documentation that has 100% coverage, reports code coverage for tests, tests against Clang’s sanitizers, and the beginnings of automated code fuzzing.

Lastly, there is ongoing work to finally add HTTP/2 support, improving responsiveness for the whole platform.

There will be follow up blog posts going more into the technical details of each of these.

Release Schedule

The plan is to release libsoup 3.0 with the GNOME 41 release in September. However, we will be releasing a preview build with the GNOME 40 release for developers to start testing against and porting to. All feedback is welcome.

Igalia plans on helping the platform move to 3.0 and will port GStreamer, GVFS, and WebKitGTK, and may help with some applications so we can have a smooth transition.

For more details on WebKitGTK’s release plans, there is a mailing list thread about it.

The previous release, libsoup 2.72, will continue to get bug fix releases for the foreseeable future, but no new 2.7x releases will happen.

How to Help

You can build the current head of libsoup to test now. Installing it does not conflict with version 2.x; however, GObject-Introspection-based applications may accidentally import the wrong version (Python, for example, needs gi.require_version('Soup', '2.4') for the old API), and you cannot load both versions in the same process.
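For reference, a minimal sketch of building the development head (assuming a Meson toolchain and the usual GNOME build dependencies are installed):

$ git clone https://gitlab.gnome.org/GNOME/libsoup.git
$ cd libsoup
$ meson setup _build
$ ninja -C _build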

A migration guide has been written to cover some of the common questions as well as improved documentation and guides in general.

All bug reports and API discussions are welcomed on the bug tracker.

You can also join the IRC channel for direct communication (be patient for timezones): ircs://irc.gimp.net/libsoup.

Master, main and abuse

Posted by Daniel Pocock on February 22, 2021 09:55 PM

Free and open source software communities recently spent a lot of time and effort on renaming the master branches in Git repositories to main, or some other name, due to the association of the word master with the horror of slavery.

I plan to tackle the slavery issue in a separate blog. In this blog, my target is the misappropriation of the word abuse.

If we are sincere about abandoning the word master, we also need to stop using the word abuse, except in those situations where it is legitimate to use that word.

Abuse has a clear meaning. In the last week, we've seen women speak up about rape in Australia's parliament and bullying on the set of Buffy the Vampire Slayer.

Buffy the Vampire Slayer

These are incredibly serious accusations.

The Buffy accusations are remarkably similar to the accusations against Matthias Kirschner, President of FSFE. The free software community elected me as a community representative in that organization. In 2018, after observing the culture of threats and blackmail, I resigned in disgust. Each new revelation about FSFE only confirms that I made the right decision to jump ship.

Yet the accusations from the Australian parliament are even more disturbing. Having visited there on multiple occasions, I couldn't help contemplating the possibility that I may have visited the same office where this crime took place.

In the photo below, the sense of male entitlement is obvious: I'm wandering around Australia's capitol in a t-shirt. It isn't any ordinary t-shirt, I rowed that race three times and on every occasion, the cox was a woman (although it wasn't Sally). The woman on the left is Senator Stott-Despoja, Australia's youngest woman in parliament and subsequently Australia's ambassador for women and girls. How shocking would it be if the crime took place in the same room where we took this photo?

The Debian Project is one of the oldest GNU/Linux distributions. In the 27 years of its existence, so-called leaders have never published a consolidated financial report. When people asked about the Google $300,000 hidden underneath $300,000 from the Handshake Foundation, leaders whined about abuse. When oligarchs behave like this and use the word abuse to deflect questions about corruption, they are trivializing real victims of abuse.

The people who hid that money from the rest of us simply have no right to use the word abuse. Ever.

Natasha Stott-Despoja, Daniel Pocock, Parliament House, Canberra, Australia

Lad culture on film: when I found a rat in Australia's parliament

During my undergraduate days, I was fortunate to roam the parliament building with my SLR. Journalists would die to get into these places but the only bodies I ever found there were rats. After hearing about the bravery of the women speaking up this week, I felt now is the right time to share this.

There is nothing political about this blog or the photos. It simply leaves me feeling ashamed to be a man.

Australian Parliament House, Canberra, Billiard Room, Dead Rat

The best practice method to install VirtualBox on Fedora 32/33 (and later)

Posted by Izhar Firdaus on February 22, 2021 02:55 PM

In any linux distribution, there will be multiple methods to achieve a certain goal. You might have encountered many guides out there on how to install VirtualBox on Fedora; however, please take note that many of them use intrusive methods which can be difficult to maintain in the long run and are likely to break after a kernel update.

Every time I catch a new team member following those guides, I tend to get annoyed, so I decided to write up the best practice method of doing this, which I have practiced on my Fedora installations for years now.

Installing RPMFusion Repository

Packages in the official Fedora repositories are subject to several legal and technical guidelines. This means that much software which is not 100% FOSS, or is patent-encumbered, is not available in the official repositories, because it would open the sponsors of Fedora to legal liability under the laws of the United States.

Unfortunately, a lot of free/open source software is patent-encumbered; while it might be illegal to distribute in the United States, it is perfectly legal to distribute in other countries.

Many such packages are available in the RPMFusion repositories. Unlike many 3rd-party repositories out there, RPMFusion contains the highest quality packages, as its contributors include people who package official RPMs in Fedora, and I generally recommend that any new user of Fedora install RPMFusion, especially if they are not a US resident.

To install and enable RPMFusion on Fedora, install the release packages using the following command, which enables both the free and nonfree repositories:

sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
    https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Installing VirtualBox

Once you have RPMFusion installed, you can install VirtualBox from it. VirtualBox in RPMFusion is very well maintained, remains functional after kernel updates thanks to an automatic kernel module builder (akmods), and generally carries the latest version of VirtualBox.

sudo dnf install VirtualBox
sudo akmods
sudo modprobe vboxdrv
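To confirm the kernel module was actually built and loaded, a quick check (an extra step, not strictly required):

lsmod | grep vboxdrv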

You are done and can now launch VirtualBox from the application menu. Please note, however, that VirtualBox does not play well with libvirtd. If you have libvirtd installed, it is suggested that you disable it using:

sudo systemctl stop libvirtd
sudo systemctl disable libvirtd

I hope this guide helps you install VirtualBox in a less intrusive and more maintainable way.

Increasing the CPU and RAM of a virtual machine in KVM

Posted by Fedora fans on February 22, 2021 06:30 AM

There are various ways to increase or decrease the CPU and RAM of a virtual machine (VM) under the KVM hypervisor, and this post will try to cover them.

Changing resources using Virtual Machine Manager:

Virtual Machine Manager is a graphical application for managing a KVM server, with which you can control and manage virtual machines. To increase or decrease CPU and RAM with this tool, simply shut down the VM in question, then go to its settings and change the resources you want.

Virtual_Machine_Manager

Changing resources using virsh:

virsh is a command-line tool with which you can manage your virtual machines in KVM. To increase or decrease a VM's CPU or RAM, first get the list of your virtual machines with the following command:

# virsh list --all

 

 

Then shut down your virtual machine with the following command:

# virsh shutdown <My VM name>

Then open the XML file of the virtual machine in question for editing with the following command:

# virsh edit <My VM name>

Now, to change the CPU, edit the line below, entering your desired number in place of 2:

<vcpu placement='static'>2</vcpu>

To change the amount of RAM, edit the lines below, entering your desired amount in place of 2097152:

<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>

After editing the desired values, save the file and then start the virtual machine with the following command:

# virsh start <My VM name>
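As an aside (these commands are not from the original post), virsh can also change both values directly, without editing the XML by hand; for example, to set 4 vCPUs and 4 GiB of RAM in the persistent configuration (the new vCPU count must not exceed the VM's configured maximum):

# virsh setvcpus <My VM name> 4 --config
# virsh setmem <My VM name> 4194304 --config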

 

The post Increasing the CPU and RAM of a virtual machine in KVM first appeared on Fedora Fans.

Episode 259 – What even is open source anymore?

Posted by Josh Bressers on February 22, 2021 12:01 AM

Josh and Kurt talk about the question “what is open source?” Why do we think it’s broken today, and what sorts of ideas might come next?

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_259_What_even_is_open_source_anymore.mp3

Show Notes

What is waking my HDD up in Linux

Posted by Lukas "lzap" Zapletal on February 22, 2021 12:00 AM

What is waking my HDD up in Linux

When my disks wake up during the day, I get angry. I want silence, so I started investigating which process makes them do that. I suspected that something was browsing a Samba share, but to confirm it I created this simple SystemTap script:

# cat syscall_open.stp 
#!/usr/bin/env stap
#
# System-wide strace-like tool for catching file open syscalls.
#
# Usage: stap syscall_open filename_regexp
#
probe syscall.open* {
  if (filename =~ @1) {
    printf("%s(%d) opened %s\n", execname(), pid(), filename)
  }
}

It’s as easy as starting this up and waiting until the process is found. It accepts a regular expression, not a glob:

# dnf install systemtap systemtap-runtime
# stap syscall_open.stp '/mnt/int/data.*'

This will work on any operating system supported by SystemTap; I tested it on Fedora, and any EL distribution should work too.

Ikebe Shakedown delivers cinematic instrumental funk

Posted by Fedora Cloud Working Group on February 21, 2021 06:28 PM

Ikebe Shakedown is another Bandcamp discovery. The band specializes in cinematic soul, an instrumental brand of soul/funk that feels like it should be straight out...

The post Ikebe Shakedown delivers cinematic instrumental funk appeared first on Dissociated Press.

Underground Chamber is a ride deep into the mind of Buckethead

Posted by Fedora Cloud Working Group on February 21, 2021 03:40 PM

Buckethead’s Underground Chamber is the fourth release in his “Pikes” series, and something like his 33rd studio release overall. Underground Chamber is too good to...

The post Underground Chamber is a ride deep into the mind of Buckethead appeared first on Dissociated Press.

Configuring an OpenWRT Switch to work with SSID VLANS on a UAP-AC-PRO

Posted by Jon Chiappetta on February 21, 2021 02:33 PM

On the OpenWRT Switch page, I have set LAN port 1 (along with a backup LAN port 2 but you can just use a single port) as the VLAN trunk port (tagged) to allow it to carry the traffic through to the VLAN access ports (untagged) [home = VLAN 3 && guest = VLAN 4]. This will create the sub-interfaces eth0.3 and eth0.4 which will contain the separated ethernet Layer 2 traffic from the WiFi clients (ARP, DHCP via dnsmasq, mDNS, etc).

Note: Make sure to tag the CPU along with the LAN ports and ignore the untagged VLAN 5, I’m using it as an isolated management network (firewalled off with iptables at Layer 3).
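For reference, a rough sketch of how the same trunk/VLAN split looks in /etc/config/network on OpenWRT (the switch name and port numbers are assumptions for illustration; adjust them to your hardware):

config switch_vlan
        option device 'switch0'
        option vlan '3'
        # CPU (6) and trunk port (0) tagged, port 1 as untagged access
        option ports '6t 0t 1'

config interface 'home'
        option ifname 'eth0.3'
        option proto 'none'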

Linksys WRT32X Switch Setup:

<figure class="wp-block-image size-large"></figure>

You can then go to the Networks section in the UniFi AP Site configuration and add a VLAN-Only Network (set the ID to 3 or 4) and then on the Wireless page create an SSID which uses that Network Name in the WiFi settings.

Note: To achieve a similar setup on an OpenWRT AP, you can use the WAN port tagged with those same VLAN numbers, and then on the Interfaces page create an unmanaged interface from the related VLAN sub-interface listed – this interface can then be assigned to the SSID network on the Wireless networks page.

<figure class="wp-block-image size-large"></figure>

Making hibernation work under Linux Lockdown

Posted by Matthew Garrett on February 21, 2021 08:37 AM
Linux draws a distinction between code running in kernel (kernel space) and applications running in userland (user space). This is enforced at the hardware level - in x86-speak[1], kernel space code runs in ring 0 and user space code runs in ring 3[2]. If you're running in ring 3 and you attempt to touch memory that's only accessible in ring 0, the hardware will raise a fault. No matter how privileged your ring 3 code, you don't get to touch ring 0.

Kind of. In theory. Traditionally this wasn't well enforced. At the most basic level, since root can load kernel modules, you could just build a kernel module that performed any kernel modifications you wanted and then have root load it. Technically user space code wasn't modifying kernel space code, but the difference was pretty semantic rather than useful. But it got worse - root could also map memory ranges belonging to PCI devices[3], and if the device could perform DMA you could just ask the device to overwrite bits of the kernel[4]. Or root could modify special CPU registers ("Model Specific Registers", or MSRs) that alter CPU behaviour via the /dev/msr interface, and compromise the kernel boundary that way.

It turns out that there were a number of ways root was effectively equivalent to ring 0, and the boundary was more about reliability (ie, a process running as root that ends up misbehaving should still only be able to crash itself rather than taking down the kernel with it) than security. After all, if you were root you could just replace the on-disk kernel with a backdoored one and reboot. Going deeper, you could replace the bootloader with one that automatically injected backdoors into a legitimate kernel image. We didn't have any way to prevent this sort of thing, so attempting to harden the root/kernel boundary wasn't especially interesting.

In 2012 Microsoft started requiring vendors ship systems with UEFI Secure Boot, a firmware feature that allowed[5] systems to refuse to boot anything without an appropriate signature. This not only enabled the creation of a system that drew a strong boundary between root and kernel, it arguably required one - what's the point of restricting what the firmware will stick in ring 0 if root can just throw more code in there afterwards? What ended up as the Lockdown Linux Security Module provides the tooling for this, blocking userspace interfaces that can be used to modify the kernel and enforcing that any modules have a trusted signature.

But that comes at something of a cost. Most of the features that Lockdown blocks are fairly niche, so the direct impact of having it enabled is small. Except that it also blocks hibernation[6], and it turns out some people were using that. The obvious question is "what does hibernation have to do with keeping root out of kernel space", and the answer is a little convoluted and is tied into how Linux implements hibernation. Basically, Linux saves system state into the swap partition and modifies the header to indicate that there's a hibernation image there instead of swap. On the next boot, the kernel sees the header indicating that it's a hibernation image, copies the contents of the swap partition back into RAM, and then jumps back into the old kernel code. What ensures that the hibernation image was actually written out by the kernel? Absolutely nothing, which means a motivated attacker with root access could turn off swap, write a hibernation image to the swap partition themselves, and then reboot. The kernel would happily resume into the attacker's image, giving the attacker control over what gets copied back into kernel space.

This is annoying, because normally when we think about attacks on swap we mitigate it by requiring an encrypted swap partition. But in this case, our attacker is root, and so already has access to the plaintext version of the swap partition. Disk encryption doesn't save us here. We need some way to verify that the hibernation image was written out by the kernel, not by root. And thankfully we have some tools for that.

Trusted Platform Modules (TPMs) are cryptographic coprocessors[7] capable of doing things like generating encryption keys and then encrypting things with them. You can ask a TPM to encrypt something with a key that's tied to that specific TPM - the OS has no access to the decryption key, and nor does any other TPM. So we can have the kernel generate an encryption key, encrypt part of the hibernation image with it, and then have the TPM encrypt it. We store the encrypted copy of the key in the hibernation image as well. On resume, the kernel reads the encrypted copy of the key, passes it to the TPM, gets the decrypted copy back and is able to verify the hibernation image.

That's great! Except root can do exactly the same thing. This tells us the hibernation image was generated on this machine, but doesn't tell us that it was done by the kernel. We need some way to be able to differentiate between keys that were generated in kernel and ones that were generated in userland. TPMs have the concept of "localities" (effectively privilege levels) that would be perfect for this. Userland is only able to access locality 0, so the kernel could simply use locality 1 to encrypt the key. Unfortunately, despite trying pretty hard, I've been unable to get localities to work. The motherboard chipset on my test machines simply doesn't forward any accesses to the TPM unless they're for locality 0. I needed another approach.

TPMs have a set of Platform Configuration Registers (PCRs), intended for keeping a record of system state. The OS isn't able to modify the PCRs directly. Instead, the OS provides a cryptographic hash of some material to the TPM. The TPM takes the existing PCR value, appends the new hash to that, and then stores the hash of the combination in the PCR - a process called "extension". This means that the new value of the PCR depends not only on the value of the new data, it depends on the previous value of the PCR - and, in turn, that previous value depended on its previous value, and so on. The only way to get to a specific PCR value is to either (a) break the hash algorithm, or (b) perform exactly the same sequence of writes. On system reset the PCRs go back to a known value, and the entire process starts again.

Some PCRs are different. PCR 23, for example, can be reset back to its original value without resetting the system. We can make use of that. The first thing we need to do is to prevent userland from being able to reset or extend PCR 23 itself. All TPM accesses go through the kernel, so this is a simple matter of parsing the write before it's sent to the TPM and returning an error if it's a sensitive command that would touch PCR 23. We now know that any change in PCR 23's state will be restricted to the kernel.
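As a userspace illustration of this behaviour (not part of the patchset; this just uses the tpm2-tools CLI), you can watch PCR 23 being extended and reset yourself:

$ tpm2_pcrread sha256:23
$ tpm2_pcrextend 23:sha256=$(echo -n demo | sha256sum | cut -d' ' -f1)
$ tpm2_pcrread sha256:23
$ tpm2_pcrreset 23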

When we encrypt material with the TPM, we can ask it to record the PCR state. This is given back to us as metadata accompanying the encrypted secret. Along with the metadata is an additional signature created by the TPM, which can be used to prove that the metadata is both legitimate and associated with this specific encrypted data. In our case, that means we know what the value of PCR 23 was when we encrypted the key. That means that if we simply extend PCR 23 with a known value in-kernel before encrypting our key, we can look at the value of PCR 23 in the metadata. If it matches, the key was encrypted by the kernel - userland can create its own key, but it has no way to extend PCR 23 to the appropriate value first. We now know that the key was generated by the kernel.

But what if the attacker is able to gain access to the encrypted key? Let's say a kernel bug is hit that prevents hibernation from resuming, and you boot back up without wiping the hibernation image. Root can then read the key from the partition, ask the TPM to decrypt it, and then use that to create a new hibernation image. We probably want to prevent that as well. Fortunately, when you ask the TPM to encrypt something, you can ask that the TPM only decrypt it if the PCRs have specific values. "Sealing" material to the TPM in this way allows you to block decryption if the system isn't in the desired state. So, we define a policy that says that PCR 23 must have the same value at resume as it did on hibernation. On resume, the kernel resets PCR 23, extends it to the same value it did during hibernation, and then attempts to decrypt the key. Afterwards, it resets PCR 23 back to the initial value. Even if an attacker gains access to the encrypted copy of the key, the TPM will refuse to decrypt it.

And that's what this patchset implements. There's one fairly significant flaw at the moment, which is simply that an attacker can just reboot into an older kernel that doesn't implement the PCR 23 blocking and set up state by hand. Fortunately, this can be avoided using another aspect of the boot process. When you boot something via UEFI Secure Boot, the signing key used to verify the booted code is measured into PCR 7 by the system firmware. In the Linux world, the Shim bootloader then measures any additional keys that are used. By either using a new key to tag kernels that have support for the PCR 23 restrictions, or by embedding some additional metadata in the kernel that indicates the presence of this feature and measuring that, we can have a PCR 7 value that verifies that the PCR 23 restrictions are present. We then seal the key to PCR 7 as well as PCR 23, and if an attacker boots into a kernel that doesn't have this feature the PCR 7 value will be different and the TPM will refuse to decrypt the secret.

While there's a whole bunch of complexity here, the process should be entirely transparent to the user. The current implementation requires a TPM 2, and I'm not certain whether TPM 1.2 provides all the features necessary to do this properly - if so, extending it shouldn't be hard, but also all systems shipped in the past few years should have a TPM 2, so that's going to depend on whether there's sufficient interest to justify the work. And we're also at the early days of review, so there's always the risk that I've missed something obvious and there are terrible holes in this. And, well, given that it took almost 8 years to get the Lockdown patchset into mainline, let's not assume that I'm good at landing security code.

[1] Other architectures use different terminology here, such as "supervisor" and "user" mode, but it's broadly equivalent
[2] In theory rings 1 and 2 would allow you to run drivers with privileges somewhere between full kernel access and userland applications, but in reality we just don't talk about them in polite company
[3] This is how graphics worked in Linux before kernel modesetting turned up. XFree86 would just map your GPU's registers into userland and poke them directly. This was not a huge win for stability
[4] IOMMUs can help you here, by restricting the memory PCI devices can DMA to or from. The kernel then gets to allocate ranges for device buffers and configure the IOMMU such that the device can't DMA to anything else. Except that region of memory may still contain sensitive material such as function pointers, and attacks like this can still cause you problems as a result.
[5] This describes why I'm using "allowed" rather than "required" here
[6] Saving the system state to disk and powering down the platform entirely - significantly slower than suspending the system while keeping state in RAM, but also resilient against the system losing power.
[7] With some handwaving around "coprocessor". TPMs can't be part of the OS or the system firmware, but they don't technically need to be an independent component. Intel have a TPM implementation that runs on the Management Engine, a separate processor built into the motherboard chipset. AMD have one that runs on the Platform Security Processor, a small ARM core built into their CPU. Various ARM implementations run a TPM in Trustzone, a special CPU mode that (in theory) is able to access resources that are entirely blocked off from anything running in the OS, kernel or otherwise.


Use btrfs compression in Fedora 33

Posted by Lukas "lzap" Zapletal on February 21, 2021 12:00 AM

Use btrfs compression in Fedora 33

Btrfs has been available in Fedora for quite some time, and starting with Fedora 33, new installations of the Workstation edition use it by default. Btrfs is a pretty capable filesystem with lots of options; let’s take a look at one aspect: transparent per-file compression.

There’s a little bit of misunderstanding about how this works, and some people recommend mounting with the compress option. This is actually not necessary, and I would strongly suggest NOT using this option. See, this option makes btrfs attempt to compress all files that are being written. If the beginning of a file cannot be effectively compressed, it’s marked as “not for compression” and compression is never attempted again (this can even be forced via a different option). This looks nice on paper.

The problem is, not all files are good candidates for compression. Compression takes time and can dramatically worsen performance; things like database files or virtual machine images should never be compressed. The performance of libvirt/KVM goes down terribly, by an order of magnitude, if an inefficient backing store is used (qcow2).

I suggest keeping the default mount options the Anaconda installer deploys; note that none of them relate to compression. Instead, use the per-file (per-directory) feature of btrfs to mark files and directories for compression. A great candidate is /usr, which contains most of the system, including binaries and documentation.

One mount option that Anaconda does not set by default is actually useful: noatime. Writing access times on a copy-on-write filesystem can be very inefficient. Note this option implies nodiratime, so it’s not necessary to set both.
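For illustration, a root entry in /etc/fstab with noatime added might look like this (the UUID and subvolume name are placeholders; keep whatever Anaconda generated and just append noatime):

UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /  btrfs  subvol=root,noatime  0 0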

To enable compression for /usr, simply mark this directory to be compressed. There are several compression algorithms available: zlib (slowest, best ratio), zstd (decent ratio, good performance) and lzo (best performance, worst ratio). I suggest sticking with zstd, which lies in the middle. There are also compression level options; unfortunately, the utility does not allow setting those at the time of writing. Luckily, the default level (3) is reasonable.

# btrfs property set /usr compression zstd

Now, btrfs does not immediately start compressing the contents of the directory. Instead, every time a file is written, the data in those blocks is compressed. To explicitly compress all files recursively, do this:

# btrfs filesystem defragment -r -v -czstd /usr

Let’s find out how much space we have saved:

# compsize /usr
Processed 55341 files, 38902 regular extents (40421 refs), 26932 inline.
Type       Perc     Disk Usage   Uncompressed Referenced  
TOTAL       50%      1.1G         2.2G         2.3G       
none       100%      337M         337M         338M       
zstd        42%      844M         1.9G         1.9G  

Exactly half of the space is saved on a standard installation of Fedora 33 Server when /usr is compressed using the zstd algorithm at the default level. Note some files are not compressed: these are files so small that compression makes no sense (a block would be used anyway). Not bad.

To disable compression perform the following command:

# btrfs property set /usr compression ""

Unfortunately, at the time of writing it is not possible to force decompression of a directory (or files); there is no defragment command to do this. If you really need to, create a script which reads and writes all files, but be careful.

Keep in mind the btrfs property command will force all files to be compressed, even if they do not compress well. This will work pretty well for /usr; just make sure no 3rd-party software is installed there writing files. There is also a way to mark files for compression where btrfs gives up on files that don’t compress well: set chattr +c on the files or directories. Unfortunately, you can’t set the compression algorithm that way; btrfs will default to the slower zlib.
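For example, to mark a directory this way (the path is just an illustration):

# chattr +c /srv/documents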

Remember: do not compress everything; specifically, the /var directory should definitely not be compressed. If you happened to accidentally mark files within /var for compression, you can fix this with:

# find /var -exec btrfs property set {} compression "" \;

Again, this will only mark them not to be compressed; it’s currently not possible to explicitly decompress them. Use the compsize utility to find out how much of the data is still compressed.

That’s all for today; I will probably be sharing some more btrfs posts.

Remove rsyslog and use journald in Fedora

Posted by Lukas "lzap" Zapletal on February 21, 2021 12:00 AM

Remove rsyslog and use journald in Fedora

I am reinstalling my home server from scratch; I want to start using Btrfs, which seems like a great fit for what I am doing (NAS, backups). Installation was smooth, no problems; however, I noticed that Fedora Server 33 installed both journald and rsyslogd, and the journal was configured to do persistent logging.

You know, this is weird. On Red Hat Enterprise Linux 7 and 8, journald is configured in volatile mode and is set to forward all logs to syslog. On Fedora 33, it looks like both rsyslog and journald are logging (/var/log/messages and /var/log/journal respectively) and no forwarding is going on. This is weird; I am going to file a BZ for folks to investigate.

However, I like to use only journald these days; here is how to do it. Stop journald:

# systemctl stop systemd-journald

Configure journald. If you want persistent logging, there is actually nothing to configure; just make sure the directory exists:

# mkdir /var/log/journal
# systemd-tmpfiles --create --prefix /var/log/journal

If you want volatile logging (only in memory), configure it as follows (feel free to modify the maximum memory; a few megabytes feels okay to me):

# cat /etc/systemd/journald.conf
[Journal]
Storage=volatile
RuntimeMaxUse=5M

Optionally, delete existing logs if you plan to use volatile logging:

# journalctl --rotate
# journalctl --vacuum-size=0

Finally, start up the service:

# systemctl start systemd-journald
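To verify the result, journalctl can report how much space the journal currently occupies (an extra check, not in the original post):

# journalctl --disk-usage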

You may uninstall rsyslog too:

# systemctl disable --now rsyslog
# dnf remove rsyslog

Done!

Installing Unifi Controller on Fedora 33

Posted by Lukas "lzap" Zapletal on February 21, 2021 12:00 AM

Installing Unifi Controller on Fedora 33

Installing the Unifi Controller on Fedora 33 is easy. Step one: install MongoDB from the official site, since it is no longer available in Fedora due to licensing reasons. Use the EL8 version, which appears to work fine:

# dnf install ./mongodb-org-server-4.4.4-1.el8.x86_64.rpm

If you haven’t enabled the RPMFusion repository yet, do it:

# dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Install the controller:

# dnf install unifi

For some reason, unifi has a hardcoded path to the Java alternatives symlink, which did not work on the initial start (/usr/lib/jvm/jre-1.8.0/bin/java); reinstalling Java helped:

# dnf reinstall java-1.8.0-openjdk-headless

And enable the service:

# systemctl enable --now unifi
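If the machine runs firewalld, you may also need to open the controller’s web port (an extra step, not in the original post):

# firewall-cmd --add-port=8443/tcp --permanent
# firewall-cmd --reload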

Beware, the mongod service does not need to be started; the unifi service spawns its own process:

# systemctl disable --now mongod

You’re done! Visit https://nuc.home.lan:8443 to manage your site.

Caturday is for sunbeams

Posted by Fedora Cloud Working Group on February 20, 2021 02:40 PM

Starting Caturday right with Willow and Bubby.

The post Caturday is for sunbeams appeared first on Dissociated Press.

Friday’s Fedora Facts: 2021-07

Posted by Fedora Community Blog on February 19, 2021 10:18 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)! Fedora 34 Changes should be 100% code complete on Tuesday. The Beta freeze begins Tuesday.

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
ConferenceLocationDateCfP
LISAvirtual1-3 Junecloses 23 Feb
</figure>

Help wanted

Prioritized Bugs

<figure class="wp-block-table">
Bug IDComponentStatus
1883609shimASSIGNED
</figure>

Upcoming meetings

Releases

Fedora 34

Schedule

Upcoming key schedule milestones:

  • 2021-02-23 — Change completion deadline (100% code complete)
  • 2021-02-23 — Beta freeze begins
  • 2021-03-16 — Beta release early target
  • 2021-03-23 — Beta release target #1

Changes

Change tracker bug status. See the ChangeSet page for details of approved changes.

<figure class="wp-block-table">
StatusCount
ASSIGNED5
MODIFIED19
POST1
ON_QA30
CLOSED3
</figure>

Blockers

<figure class="wp-block-table">
Bug IDComponentBug StatusBlocker Status
1929940dogtak-pkiNEWProposed(Beta)
1916094pipewireON_QAProposed(Beta)
1928542xorg-x11-drv-nouveauNEWProposed(Beta)
</figure>

Fedora 35

Changes

<figure class="wp-block-table">
ProposalTypeStatus
Autoconf-2.71System-WideFESCo #2579
POWER 4k page sizeSystem-WideAnnounced
rpmautospec – removing release and changelog fields from spec filesSystem-WideAnnounced
</figure>

Changes approved, rejected, or withdrawn will be removed from this table the next week. See the ChangeSet page for a full list of approved changes.

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-07 appeared first on Fedora Community Blog.

One from the vaults: World Destruction by Time Zone

Posted by Fedora Cloud Working Group on February 19, 2021 02:27 PM

Needed a bit of adrenaline on top of my caffeine today, pulled this one out of the vaults for a quick boost. “World Destruction” is...

The post One from the vaults: World Destruction by Time Zone appeared first on Dissociated Press.

DevConf2021.cz - Presentation and Demo

Posted by Linux System Roles on February 19, 2021 12:00 PM

There was a presentation entitled “Managing Standard Operating Envs with Ansible” given at DevConf2021.cz. Demo files and links to videos can be found at DevConf2021.cz

PHP version 7.4.16RC1 and 8.0.3RC1

Posted by Remi Collet on February 19, 2021 06:02 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests, and also as base packages.

RPM of PHP version 8.0.3RC1 are available as SCL in remi-test repository and as base packages in the remi-php80-test repository for Fedora 32-34 and Enterprise Linux.

RPM of PHP version 7.4.16RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 32-34 or remi-php74-test repository for Enterprise Linux.

PHP version 7.3 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Parallel installation of version 7.4 as Software Collection:

yum --enablerepo=remi-test install php74

Update of system version 8.0:

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.4:

yum --enablerepo=remi-php74,remi-php74-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf --enablerepo=remi-modular-test update php\*
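
Either way, you can confirm which version ended up active with the usual checks (nothing here is specific to these repositories):

$ php --version
$ dnf module list php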

x86_64 builds now use Oracle Client version 19.9 (version 21.1 will be used soon).

EL-8 packages are built using RHEL-8.3.

EL-7 packages are built using RHEL-7.9.

The RC version is usually the same as the final version (no change accepted after RC, except for security fixes).

Version 8.0.0RC4 is also available as Software Collections.

Software Collections (php74, php80)

Base packages (php)

Helper script for easy cherry picks with git

Posted by Lukas "lzap" Zapletal on February 19, 2021 12:00 AM


After many, many manual cherry picks, I’ve decided to put together a short script. It’s fully interactive and hopefully self-explanatory.

[lzap@box foreman]$ git xcp
Looks like you want to cherry-pick, huh?!
Branch you want to pick INTO: 2.4-stable
Branch you want to pick FROM: develop
Allright, allright, here is the menu:
bb1a931b6 Fixes #31064 - sequence helper macro
089e232c0 Fixes #31830 - support children in SkeletonLoader component
ed70741fa Fixes #31882 - set ENC var hostname to shortname
6c3794cf1 Refs #31720 - Apply @ezr-ondrej suggestions from code review
d43eba522 Refs #31720 - address comments and add links
809e6eec5 Refs #31720 - Apply tbrisker suggestions
0cc6abbb4 Fixes #31720 - add first draft doc
648f72365 Fixes #31873 - Expose edit permissions in api index layout
9ad6b1a25 Refs #30215 - mark string for translation
24d2df594 Fixes #31855 - add ellipsis with tooltip for long setting values
Commits separated by space or enter to give up: 0cc6abbb4 809e6eec5 d43eba522
Hit ENTER to push, Ctrl-C to interrupt. See ya!

The script remembers the INTO and FROM selections (it stores them in the .git directory), and it uses git stash to work even when there are some uncommitted changes.
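
The script itself isn't in the post, but the transcript maps onto a small amount of shell. Here is a rough reconstruction of the idea (hypothetical, not the actual xcp code):

#!/bin/bash
# A sketch of an interactive cherry-pick helper
set -e
state="$(git rev-parse --git-dir)/xcp-state"
if [ -f "$state" ]; then . "$state"; fi            # recall last INTO/FROM choice
echo "Looks like you want to cherry-pick, huh?!"
read -e -p "Branch you want to pick INTO: " -i "${INTO:-}" INTO
read -e -p "Branch you want to pick FROM: " -i "${FROM:-}" FROM
printf 'INTO=%s\nFROM=%s\n' "$INTO" "$FROM" > "$state"
stashed=0
if ! git diff --quiet || ! git diff --cached --quiet; then
    git stash push -m xcp >/dev/null && stashed=1  # park uncommitted changes
fi
git checkout "$INTO"
echo "Allright, allright, here is the menu:"
git log --oneline "$INTO..$FROM" | head -10
read -p "Commits separated by space or enter to give up: " commits
if [ -z "$commits" ]; then exit 0; fi
git cherry-pick $commits                           # pass commits oldest first
read -p "Hit ENTER to push, Ctrl-C to interrupt. See ya! " _
git push origin "$INTO"
if [ "$stashed" -eq 1 ]; then git stash pop >/dev/null; fi

Hooking it up as git xcp is one line: git config --global alias.xcp '!/path/to/xcp.sh' (aliases starting with ! run through the shell).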

Influential women

Posted by Daniel Pocock on February 18, 2021 10:50 PM

When people ask me about success engaging women in some of the mentoring programs for free, open source software, I never feel comfortable taking credit for that. I feel that it comes down to one simple thing: collaborating with a number of successful and influential women in a variety of different places. Today is the tenth anniversary of the passing of Sally Shaw. Sally had made monumental contributions to the success of the Yarra Yarra Rowing Club (YYRC), even while fighting cancer, raising a family and managing projects for IBM.

Around the same time I met Sally, I had also taken on one of my first web hosting clients, a newly elected politician in the opposition party, Lynne Kosky. Both the YYRC web site and Lynne's web site were among the first projects in my new content management system (CMS), hosted in a GNU/Linux environment. Compared to Sally, I had far fewer opportunities to meet Lynne, her party was elected into government and she became incredibly busy.

If Lynne were alive today, looking at the way her career progressed, there is every chance she would be the state premier, leading the state's response to the pandemic. Moreover, I suspect that if you swapped these two women, they could blaze a trail in each other's workplace just as easily as the career they had chosen. Sadly, both of their lives were cut short for similar reasons.

The 2010/2011 YYRC Annual Report has been used to recognize Sally's contributions:

Sally Shaw sadly lost her battle with Ovarian Cancer in --- 2011. Sally was a significant contributor to Yarra most notably through her involvement with the negotiations with Carey and the subsequent project management of the current Club House. Sally welcomed new Members to the Club through her involvement with novice coaching and later moved into a role on the Committee serving as Secretary and later as Vice President. Sally was a successful oarswoman, winning the Stokes Salver women's trophy in 2002 during the Winter Sculling Series in addition to regatta wins over a decade of rowing. Sally was a Life Member of Yarra and has a racing shell named in her honour.

Sally was committed to living life fully, despite the challenges of regular hospital visits over the last four years, and this provided inspiration for her many rowing and other friends. Sally is survived by her partner Bruce Ricketts, former President of Yarra Yarra, and their two children Grace and Felix, to whom we extend our love and support.

It is interesting to note that Sally is commemorated on the same page as Hubert Frederico, former president of Rowing Victoria, an organization that has produced numerous Olympic and World champions.

In fact, as I went looking through my archive, it wasn't long before I found Sally in a crew with three-time World Champion and Olympian Jane Robinson.

Sally Shaw, Jane Robinson, mixed eight, Yarra Yarra Rowing Club

The same annual report includes a tremendous list of projects ticked off from the Club's Long Term Plan, most notably, the recently completed Club House. It is incredible to see Sally's impact in so many areas of this document. It is even more incredible to think that she was ticking these things off while fighting cancer.

Melbourne's rowing precinct is at a point where the park meets the city center, on the opposite side of a bridge from the main railway station. With major events taking place in Melbourne throughout the year, there are an incredible number of stakeholders and influences on any construction project in this region. Projects like this test the team's skills in every way from compliance to diplomacy.

YYRC, old club house, boathouse drive YYRC, new club house

A video was made in the old Club House before it was demolished. Sally tells us about her most memorable moment; in the world of free software it sounds a lot like a Code of Conduct violation.

(Embedded video: https://www.youtube.com/embed/godeqN4qjUs?start=1000)

It looks like my camera captured that too:

YYRC, Sally Shaw, swimming YYRC, Sally Shaw, winning medals

Coxing Head of the Yarra

Sally competed both as a rower and a cox. The Head of the Yarra is a gruelling race where approximately one hundred crews row 8.6km upstream:

Head of the Yarra course

It was originally held in late January, the peak of the Australian summer, but they now hold it in November to reduce the number of deaths.

It is particularly challenging for the cox because it is not a straight course, in fact, it is every cox's nightmare. This photo captures Sally skillfully steering a combined Yarra/Richmond crew through the treacherous "Big bend", crushing the hopes of another crew as they run into the bank:

YYRC, Head of the Yarra

and then passing a junior crew as they come out of the corner:

YYRC, Head of the Yarra

Meanwhile, Archive.org has captured the original web site we created for Lynne Kosky. Lynne resigned from public office shortly before Sally passed away, and Lynne's battle with cancer only became public in 2011. Like Sally, Lynne had ticked off a huge list of projects: Minister for Finance, Minister for Education and the poisoned chalice, Minister for Public Transport.

Lynne Kosky's first web site

Both of these women contributed greatly to the community and to projects that are prominently visible in the city of Melbourne today. Yet their greatest legacy may be the impact they have had on people around them and how we see the potential of women in Australia.

One of the mentors I've worked with called me one day to ask about a female intern spending too much time on political pursuits. I assured him I've seen this before. Despite distractions, or maybe because of them, the intern completed more work than many other interns, male or female.

On the tenth anniversary of Sally's passing, there are news reports about the status of women in other countries, such as Japan, where women have been granted permission to watch men making decisions for them. When I hear about people who claim to represent free software mistreating female employees on sick leave, I imagine them inflicting pain on women like Sally or Lynne. Having a point of reference like this makes it easier to empathize with the victims in those cases.

If you want to commemorate these women or any other victims of cancer, please do not throw IBM employees into the river. A good idea is to simply ask some of the women you work with for suggestions. If you are in Melbourne, you can hire the YYRC Club House for an event or join a Learn to Row program.

Felix, Grace, Sally Shaw, YYRC

User Experience (UX) + Free Software = ❤

Posted by Máirín Duffy on February 18, 2021 08:07 PM

Today I gave a talk at DevConf.cz, which I previously gave as a keynote this past November at SeaGL, about UX and Free Software, using the ChRIS project as an example. This is a blog-formatted version of that talk, although you can watch a video of it here from SeaGL if you prefer.

Let’s talk about a topic that is increasingly critical as time goes on, that is really important for those of us who work on free software and care really deeply about making a positive impact in the world. Let’s talk about user experience (UX) and free software, using the ChRIS Project as a case study.

What is the ChRIS Project?

The ChRIS project is an open source, free software platform developed at Boston Children’s Hospital in partnership with other organizations – Red Hat (my employer), Boston University, the Massachusetts Open Cloud, and others.

The overarching goal of the ChRIS project is to make all of the amazing free software in the medical space more accessible and usable to researchers and practitioners in the field.

This is just a quick peek under the hood – I keep saying ChRIS is a “platform,” but it can be unclear what exactly that means. ChRIS’ core is the backend, which we call “CUBE” – various UIs are attached to that, which we’ll cover in a bit. The backend currently connects to OpenStack and OpenShift running on the Massachusetts Open Cloud (MOC). It’s a container-based system, so the backend – which is also connected to data storage – pulls data from a medical institution and pushes the data into a series of containers that it chains together to construct a full pipeline.

Each container is a free software medical tool that performs some kind of analysis. All of them follow an input/output model – you push data into the container, it does the compute, you pull the data out of it, and pass that output on to the next container in the chain or back to up to the user via a number of front ends.

This is just a quick overview so you understand what ChRIS is. We’ll go into a little more detail later.

Who am I?

So who am I and what do I know about any of this anyway?

I’m a UX practitioner and I have been working at Red Hat for 16 years now. I specialize in working with upstream free software communities, typically embedded in those communities. ChRIS is one of the upstream projects I work in.

I’m also a long-term free software user myself. I’ve been using it since I was in high school and discovered Linux – I fell in love with it and how customizable it is and how it provides an opportunity to co-create my own computing environment.

Principles of UX and Software Freedom

From that experience and background, having worked on the UX on many free software projects over the years, I’ve come up with two simple principles of UX and software freedom.

The first one is that software freedom is critical to good UX.

The second, which I want to focus particularly here, is that good UX is critical to software freedom.

(If you are curious about the first principle, there is a talk I gave at DevConf.us some time ago that you can watch.)

3 Questions to Ask Yourself

So when we start thinking about how good UX is critical to software freedom, I want you to ask yourself these three questions:

  1. If a tree falls in the forest and no one is around to hear it… does it make a sound? (You’ve probably heard some form of this before.)
  2. If your software has features and no one can use those features… does it really have those features?
  3. If your software is free software, but only a few people can use it… is it really providing software freedom?

Lots of potential…

Here, in 2021, we have a wealth of free & open source technology available to us. Innovation does not require starting from scratch! For example:

  1. In seconds you can get containerized apps running on any system. (https://podman.io/getting-started/)
  2. In minutes you can deploy a Kubernetes cluster on your laptop. (https://www.redhat.com/sysadmin/kubernetes-cluster-laptop)
  3. In minutes you can deploy a deep learning model on Kubernetes. (https://opensource.com/article/20/9/deep-learning-model-kubernetes)

This is amazing – in free software, we’ve made so much progress in the past decade or so. You can work in any domain and start up a new software project, and all of the underlying infrastructure and plumbing you need is already available, off-the-shelf, free software licensed, for you to use.

You can focus on the bits, the new innovation, that you really care about, and avoid reinventing the wheel on the foundation stuff you need.

… and too much complication to easily realize that potential.

The problem – well, take a look at this. This is the cloud native landscape put out by the Cloud Native Computing Foundation.

I don’t mean to pick on cloud technology at all – you’ll see this level of complication in any technical domain, I think. It’s…. a little complicated, right? There’s just so much. So many platforms, tools, standards, ways of doing things.

Technologists themselves have a hard time keeping up with this.

How do we expect medical experts and clinicians to keep up with that, when even software developers have a difficult time keeping up?

The thing is, there’s a lot of potential here – there really are so many free software tools in the medical space. Some of them have been around for years.

By default, they tend to be developed and released as free software, because many are created by researchers and academic labs that want to collaborate and share.

But you know, as a medical practitioner – how do you actually make use of them? There’s a few reasons they end up being complicated to use:

  • They’re often built by researchers who don’t typically have a software development background.
  • They’re usually built for use in a specific study or under a specific lab environment without wider deployment in mind.
  • There tends to be a lack of standardization in the tools and lack of integration between them.
  • Depending on the computation involved, they may require a more sophisticated operating environment than most clinical practitioners have access to.
  • There’s a high barrier to entry.

Even though these tools are free software and publicly available, these aren’t tools your typical medical practitioner could pick up and start using in their practice.

Free software and hacking the Gibson

We have to remember that these are very smart people in the medical field. Neuroscientists and brain surgeons, for example. They’re smart, but they can’t “hack the Gibson.”

A good UX does not require your users to be hackers.

Unfortunately, traditionally and historically, free software has kind of required users to be hackers in order to make the best use of it.

Bridging the gap between free software and frontline usage

So how do we bridge this gap between all of this amazing free software and clinical practice, so this free software can make a positive difference in the world, so it could feasibly positively impact medical outcomes?

Good UX bridges the gap. This is why good UX is so critical to software freedom.

If your software is free software, but only a few people can use it, are you really providing software freedom?

I’m telling you no, not really. You need a good UX to be able to do that – to allow more than a few people to be able to use it, and to be able to provide software freedom.

What are these tools, anyway?

What are all of these amazing free software tools in the medical space?

This is a very quick map I made, based on a 5-10 minute survey of research papers and conference proceedings in various technology-related medical groups. These are all free software tools.

This barely scratches the surface of what is available.

I want to talk about two of them in particular today: COVID-Net and Free Surfer. They are both tools now available for use on the ChRIS platform.

Free Surfer

Freesurfer is an open source suite of tools that focuses on processing brain MRI images.

It’s been around for a long time but it’s not really in clinical use.

This is a free software tool that has a ton of potential to impact medicine. This is a screenshot of a 3D animation created in Free Surfer running on the ChRIS platform. The workflow here involved taking 2D images from an MRI, that are taken in slices across the brain. Running on ChRIS, Freesurfer constructed a 3D volume out of those flat 2D images, and then segmented the brain into all its different structures, color-coding them so you can tell them apart from one another.

How might this be used clinically? You could have a clinician who’s not sure what’s wrong with a patient. Instead of just reviewing the 2D slices, she may pan around this color coded 3D structure and notice one of the structures in the brain is larger than is typical. That might be a clue that gets the patient to a quicker diagnosis and treatment.

This is just a hypothetical example so you can see some of the potential of this free software tool.

Another example of a great free software tool in this space is COVID-Net.

This is a free software project developed by a company called DarwinAI in partnership with the University of Waterloo. It uses a neural network to analyze CT and X-ray chest scans and provide a probability of whether a patient has healthy lungs, COVID, or pneumonia.

It’s open source and available to the general public.

The potential here is to provide an alternative way of triaging patients when COVID test results are backed up or too slow, especially during a surge in COVID cases.

These are just two projects we’ve worked with in the ChRIS project.

How do we get these tools to the frontline, though?

How do we get amazing tools like these in the frontlines, in medical institutions? How do we provide the necessary UX to bridge the gap?

Dr. Ellen Grant, who is Director of the Fetal-Neonatal Neuroimaging and Developmental Science Center at Boston Children’s Hospital, came up with a list of three basic user experience requirements these tools need for clinicians to be able to use them:

  1. They have to be reproducible.
  2. They have to be rapid.
  3. They have to be easy.

Requirement #1: Reproducible

First, let’s talk about reproducibility. In the medical space, you’re interacting with scientists and medical researchers trying to find evidence that supports the effectiveness of new technology or methods. So suppose a new method comes out – let’s say it’s a new machine learning model – and you’re reading a study showing support for its effectiveness.

If you want the best possible shot at the technique achieving a similar level of effectiveness with your own data, you’ve got to use the same version of the code, in as similar a running environment as possible as in the study. You want to eliminate any variables – like the operating environment – that might skew the output.

Here’s a screenshot of setting up Freesurfer. This is just not something we can expect medical practitioners to go through in order to reproduce an environment.

How do we make free software tools more reproducible for clinicians?

I’ll use the COVID-Net tool as an example. We worked with the COVID-Net team and they packaged it into a ChRIS plugin container. The ChRIS plugin container contains the actual code and includes a small wrapper on top with metadata and whatnot. (Here is the template for that, we call it the ChRIS cookiecutter.)
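
To give a feel for what that packaging step involves, this is roughly its shape; a sketch using the public FNNDSC cookiecutter template, with a made-up image name. You answer the template’s prompts, wrap your tool’s command line inside the generated app, and build the container:

$ pip install cookiecutter
$ cookiecutter https://github.com/FNNDSC/cookiecutter-chrisapp.git
$ docker build -t example/pl-mytool .

From there the image can be pushed to a registry and registered as a plugin with a ChRIS instance.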

Once a tool has been containerized as a ChRIS plugin, it can run on the ChRIS platform, which gives you a number of UX benefits including reproducibility. A clinician can just pick that tool from a list of tools from within the ChRIS UI, push their data to it, and get the results, and ChRIS manages the rest.

Taking a few steps back – we have a broader vision for reproducibility via ChRIS here.

This is a screenshot of a prototype of what we call the ChRIS Store.

We envision making all of these amazing free software medical tools as easy to install and run on top of ChRIS as it is to install and run apps on a phone from an app store. So this is an example of a tool containerized for ChRIS – you’d be able to take a look at the tool in the ChRIS Store, deploy it to your ChRIS server, and use it in your analysis.

Even if a tool is a little hard to install, run, and reproduce in the same exact way on its own, for the small cost of packaging and pushing it into the ChRIS plugin ecosystem, it becomes much easier to share, deploy, and reproduce that tool across different ChRIS servers.

Instead of requiring medical researchers and practitioners to use the Linux terminal, to compile code, and to set up environments with exact specifications, we envision them being able to browse through these tools, like in an app store, and be able to easily run them on their ChRIS server. That would mean they would get much more reproducibility out of these tools.

Requirement #2: Rapid

The second requirement from Dr. Grant is rapidness. These tools need to be quick. Why?

Well, for example, we’re still in a pandemic right now. As COVID cases surge, hospitals run out of capacity and need to turn over beds quickly. Computations that take hours or days to run will just not be used by clinicians, who do not have that kind of time. So the tools need to be fast.

Or for a non-pandemic case… you might have a patient who needs to travel far for specialized care – if results could come back in minutes, it could save a sick patient from having to stay in a hotel away from home and wait days for results and to move forward in their treatment.

Some of these computations take a long time, so couldn’t we throw some computing power at them to get the results back quicker?

ChRIS lays the foundation that will enable you to do that. ChRIS can run or orchestrate workloads on a single system, HPC, or a cloud, or across those combined. You can get really rapid results, and ChRIS gives you all the basic infrastructure to do it, so individual organizations don’t have to figure out how to set this up on their own from scratch.

For example – this is a screenshot of the ChRIS UI – it shows how you build these pipelines or analyses in ChRIS. The full pipeline is represented by the graph on the left, and each of the circles or “nodes” on the graph is a container running on ChRIS. Each of these containers is running a free software tool that was containerized for ChRIS.

The blue highlighted container in the graph is running Freesurfer. In this particular pipeline, ChRIS has spun up multiple copies of the same chain of containers to run in parallel on different pieces of the data output by that blue Freesurfer node.

You can get this kind of orchestration and computing power just based on the infrastructure you get from ChRIS.

This is a diagram to show another view of it.

You have the ChRIS store at the top with a plugin (P) getting loaded into the ChRIS Backend.

You have the data source – typically a hospital PACS server with medical imaging data, and image data (I).

ChRIS orchestrates the movement of the data and deployment of these containers into different computing environments – maybe one of these here is an external cloud, for example. ChRIS retrieves the data from the data source and pushes it into the containers, and retrieves the container pipeline’s output and stores it, presenting it to the end user in the UI. Again, each one of those containers represents a node on the pipeline graph we saw in the previous slide, and the same pipeline can consist of nodes running in different computing environments.

One of those compute environments that ChRIS utilizes today is the Massachusetts Open Cloud.

This is Dr. Orran Krieger, he is the principal investigator for the Massachusetts Open Cloud at Boston University.

The MOC is a publicly-owned, non-commercial cloud. They collaborate with the ChRIS project, and we have a test deployment of ChRIS that we are using for COVID-Net user testing right now that runs on top of some of the powerpc hardware in the MOC.

The MOC partnership is another way we are looking to make rapid compute in a large cloud deployment accessible for medical institutions – a publicly-owned cloud like the MOC means institutions will not have to sign over their rights to a commercial, proprietary cloud that might not have their best interests at heart.

Requirement #3: Easy

Finally, the last UX requirement we have from Dr. Grant is “easy.”

What we’ve done in the ChRIS project is to create and assemble all of the infrastructure and plumbing needed to connect to powerful computing infrastructures for rapid compute. And we’ve created a container-based structure and are working on creating an ecosystem where all of these great free software tools are easily deployable and reproducible, and you can get the exact same version and environment as studied by researchers showing evidence of effectiveness.

One of the many visions we have for this: a medical researcher could attend a conference, learn about a new tool, and while sitting in the audience (perhaps by scanning a QR code provided by the presenters) access the same tool being presented in the ChRIS store. They could potentially deploy to their own ChRIS on their own data to try it out, same day.

This all needs to be reproducible, and it needs to be easy. I’m going to show you some screenshots of the ChRIS and COVID-Net UIs we’ve built in making running and working with these tools easier.

This is an example of the ChRIS feed list in the Core ChRIS UI. Each of these feeds (what we call custom pipelines) is running on ChRIS. Each pipeline is essentially a composition of various containerized free software tools chained together in an end to end workflow, kicked off with a specific set of data that is pushed through and transformed along the way.

This UI is not geared at clinicians, but is more aimed at researchers with some knowledge of the types of transformations the tools create in the data – for example, brain segmentation – who want to create compositions of different tools to explore the data. They would compose these pipelines in this interface, experiment with them, and once they have created one they have tested and believe is effective, they can save it and reuse it over and over on different data sets.

While you are creating this pipeline, or if you are looking to add on to a pre-existing workflow, you can add additional “nodes” – which are containers running a particular free software tool inside – using this interface. You can see the list of available tools in the dialog there.

As you add nodes to your pipeline, they run right away. This is a view of a specific pipeline, and you can see the node container highlighted in blue here has a status display on the bottom showing that it is currently still computing. When the output is ready, it appears down there as well, per-node, and it syncs the data out and passes it on to the next node to start working on.

Again, this is an interface geared towards a researcher with familiarity analyzing radiological images – but not necessarily the skill set to compile and run them from scratch on the command line. This allows them to select the tools and bring them into a larger integrated analysis pipeline, to experiment with the types of output they get and try the same analysis out on different data sets to test it. They are more likely looking at broad data sets to see trends across them.

A practicing clinician needs vastly simplified interfaces compared to this. They aren’t inventing these pipelines – they are consuming them for a very specific patient image, to see if a specific patient has COVID, for example.

As we collaborate with the COVID-Net team, we are focused on creating a single-purpose UI that uses just one specific pipeline – the COVID-Net analysis pipeline – and could allow a clinician to simply select the patient image, click and go, and get the predictive analysis results.

The first step in our collaboration was containerizing the COVID-Net tool as a ChRIS plugin. That took just a few days.

Then together over this past summer, in maybe 2-3 months, we built this very streamlined UI aimed at just this specific case of a clinician running the COVID-Net prediction on a patient lung scan and getting the results back. Underneath this UI, is a pipeline, just like the one we just looked at in the core UI – but clinicians will never see that pipeline underneath – it’ll just be working silently in the background for them.

The user simply types in a patient MRN – medical record number – to look up the scans for that patient at the top of the screen, selects the scans they want to submit, and hits analyze. Underneath, that data gets pushed into a new COVID-Net pipeline.

They’ll get the analysis results back after just a minute or two, and it looks like this. These are predictive analyses – so here the COVID-Net model believes this patient has about a 75% chance of having normal, healthy lungs and around a 25% or so chance of having COVID.

If they would like to explore this a little further, maybe confirm on the scan themselves to double check the model, they can click on the view button and pull up a full radiology viewer.

Using this viewer, you can take a closer look at the scan, pan, zoom, etc. – all the basic functionality a radiology viewer has.

This is an example of the model we see for ChRIS to provide simplified, easy ways of accessing the rapid compute and reproducible tool workflows we talked about: Standing up streamlined, focused interfaces on top of the ChRIS backend – which provides the platform, plumbing, tooling to quickly stand up a new UI – so clinicians don’t have to develop their own workflows, they can consume tested and vetted workflows created by experts in the medical data analysis field.

To sum it all up –

This is how we are working to meet these three core UX requirements for frontline medical use.

We’re looking to make these free software tools reproducible using the ChRIS container model, rapid by providing access to better computing power, and easy by enabling the development of custom streamlined interfaces to access the tools in a more consumable way.

In other words, the main requirement for these free software tools to get into the hands of front line medical workers is a great user experience.

Generally, for free software to matter, for us to make a difference in the world, for users to be able to enjoy software freedom – we have to provide a great user experience so they can access it.

So in review – the two principles of software freedom and UX:

  1. Software freedom is critical to good UX.
  2. Good UX is critical to software freedom.

I got HoneyComb

Posted by Marcin 'hrw' Juszkiewicz on February 18, 2021 06:09 PM

A few years ago SolidRun released the MACCHIATObin board. Nice fast CPU, PCI Express slot, several network ports. I did not buy it because it supported only 16 GB of memory and I wanted to be able to run OpenStack.

Time passed, and the HoneyComb LX2 system appeared on the AArch64 market. More cores, more memory. Again I did not buy it — my Ryzen 5 upgrade cost less than the HoneyComb’s price.

Still, when someone asked me for a serious AArch64 system to buy, I suggested the HoneyComb.

Let us look at the hardware

So what do we have here?

  • 16 Cortex-A72 cores
  • 2 SO-DIMM slots (up to 64GB ram in total)
  • USB 2.0 and 3.0 ports (as ports and/or headers)
  • standard ATX power socket (no 12V AUX needed)
  • 3 fan connectors (one with PWM, two with 12V)
  • front panel connectors like on x86-64 motherboards
  • M.2 slot for NVME (pcie x4)
  • PCI Express slot (open x8 one so x16 card fits)
  • MicroSD slot (for firmware)
  • 4 SFP+ ports for 10GbE networking
  • 1 GbE port
  • 4 SATA ports
  • serial console via microUSB port
  • power/reset buttons
HoneyComb board layout

Lots of networking, and there is even a version with a 100GbE port added: ClearFog CX LX2.

So how did I get it?

I wrote that I did not buy it, right? Jon Nettleton (from SolidRun) contacted me recently and asked:

Morning. do you have any interest in a HoneyComb? I have some old stock boards available to the community. I figured it may help you out with your UEFI Qemu work.

We discussed SBSA/SBBR stuff and I sent him an email with address information and shipping notes.

Some days passed and the board arrived. I added a spare NVME drive and two sticks of Kingston HyperX 2933 CL17 memory and it was ready to go (the microSD card holds the firmware):

HoneyComb board ready to go

Let’s run something

Debian ‘bullseye’ booted right away. Again I used the pendrive from my EBBR-compliant RockPro64. It started without problems.
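
By the way, if you want to watch the firmware and early boot output, the microUSB serial console works with any terminal emulator; assuming the adapter shows up as /dev/ttyUSB0 on your workstation:

$ picocom -b 115200 /dev/ttyUSB0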

Network ports issue

Ok, there was one problem — the on-board ethernet ports do not work yet with mainline or distribution kernels, so I had to dig out my old USB-based network card.

There are patches for the Linux kernel to get all ports running. They may get merged into the 5.13 kernel if things go nicely.

Plans?

I plan a few things for the HoneyComb:

  • check how several distributions handle AArch64 systems
  • improve the SBSA ACS code, as the HoneyComb is almost SBSA level 3 compliant (there are some places where error/warning messages break output)
  • build, deploy and test OpenStack
  • test software
  • check how it works as an AArch64 desktop (like I did with the APM Mustang 6 years ago)

New badge: DevConf.cz 2021 Attendee !

Posted by Fedora Badges on February 18, 2021 05:52 PM
DevConf.cz 2021 Attendee: You attended the 2021 iteration of DevConf.cz, a yearly open source conference in Czechia!

Creating XDG custom url scheme handler

Posted by Izhar Firdaus on February 18, 2021 02:49 PM

If you develop system tools or desktop software on Linux that also have an accompanying web application, you might want a way for the web application to launch the tool with some parameters specified through a web-based link. For example, a link with dnf://inkscape as its URL might be used to launch Gnome Software and display the description of Inkscape, so that the user may choose whether to install it.

In Linux, registering a custom URL handler can be done using an XDG desktop file configured to open the corresponding x-scheme-handler MIME type.

To achieve this, you can simply create a .desktop file in ~/.local/share/applications/ or /usr/local/share/applications, and configure it with MimeType=x-scheme-handler/<your-custom-proto>. If nothing goes wrong with the setup, you should be able to open links with the custom URL protocol afterwards.

For example, if you have a script dnfurl which takes dnf://<package-name> as its first parameter and launches Gnome Software with the package name, you can create a .desktop file with this content:

[Desktop Entry]
Version=1.0
Type=Application
Name=dnfurl
Exec=dnfurl %U
Terminal=false
NoDisplay=true
MimeType=x-scheme-handler/dnf

After installing the .desktop file, run update-desktop-database to update the related indexes/cache. If you installed the .desktop file in ~/.local/share/applications/, you will have to run update-desktop-database ~/.local/share/applications.
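
The post doesn’t show what such a handler looks like inside, but a minimal hypothetical dnfurl could be as small as this (the URL parsing is illustrative; gnome-software’s --search flag does the lookup):

#!/bin/bash
# Strip the scheme and hand the package name to GNOME Software
url="$1"
pkg="${url#dnf://}"
pkg="${pkg%%/*}"
exec gnome-software --search "$pkg"

You can also register the handler explicitly with xdg-mime default dnfurl.desktop x-scheme-handler/dnf, and test the whole chain with xdg-open dnf://inkscape.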

If you are interested in the example dnfurl program above, you can check it out at this git repository: github.com/kagesenshi/dnfurl. Or if you are on Fedora, you can install it from Copr by running:

dnf copr enable izhar/dnfurl
dnf install dnfurl

Have fun~.

Free Software and Open Source: Get involved

Posted by Ingvar Hagelund on February 18, 2021 01:21 PM

Contributing to Free Software using Open Source methods may look like intimidating deep expert work. But it doesn’t have to be. Most Free Software communities are friendly to newcomers, and welcome all kinds of contributions.

Reporting bugs

Hitting a bug is an opportunity, not a nasty problem. When you hit a bug, it should be reported, and with a bit of luck, it may even be fixed. Reporting the bug in an open forum also lets other users find the bug and give attention to it, and they may in turn be able to help out working around or fixing it. Reporting bugs is the most basic, but still one of the most valuable contributions you can make. Finding bugs is finding real problems. Reporting bugs helps fix them, for you, and for other users. And you may not complain to your coworker about a bug unless it has been reported upstream.

While reporting bugs, remember to collect as much information as possible on the issue, including logs, runtime environment, hardware, operating system version, etc. While collecting this information, make sure you don’t send any traceable private information that could be abused by rogue parties, like IP addresses, hostnames, passwords, customer details, database names, etc.
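
A few standard commands cover most of what maintainers ask for; a sketch, substitute the package and service you are actually reporting on:

$ uname -a
$ cat /etc/os-release
$ rpm -q <package>
$ journalctl -u <service> --since today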

Bugs in operating system packages

Bugs in components delivered by a Linux distribution (Ubuntu, Debian, Fedora, Red Hat, SuSE, etc.) should be reported through their bug reporting interface. Remember to search for the bug before posting yet another duplicate. Perhaps a workaround already exists.

So the next time something strange happens to your haproxy, nginx, or varnish, or your firefox browser crashes or behaves unexpectedly, collect data from your logs, and open a bug report.

  • Red Hat / EPEL / Fedora users should report bugs through https://bugzilla.redhat.com/
  • Similarly, OpenSuSE users may search for and report bugs at https://bugzilla.opensuse.org
  • Ubuntu users may have luck looking at https://help.ubuntu.com/community/ReportingBugs
  • As Ubuntu’s upstream is Debian, you may search for bugs, fixes and workarounds using their tools at https://www.debian.org/Bugs/Reporting

These tools have detailed guidelines on how to search for, report, and follow up on bugs.

For an example of an end user bug report with an impressive follow-up from a dedicated package maintainer, have a look at https://bugzilla.redhat.com/show_bug.cgi?id=1914917

Reporting upstream bugs

Using software directly from the upstream project is growing more common, especially as container technology has matured, enabling developers to use software components without interfering with the underlying operating system. Reporting and following up on bugs becomes even more important, as such components may not be filtered and quality assured by operating system security teams.

Find your component’s upstream home page or project development page, usually on Github, Savannah, Gitlab, or a similar code repo service. These services have specialised issue trackers made for reporting and following up on bugs and other issues. Some projects only have good old mailing lists. They may require you to subscribe to the list before you are allowed to report anything.

Following up on the report, you may be asked for test cases and debugging. You will learn a lot in the process. Do not be shy about asking for help, or admitting that you don’t understand and need guidance. Everybody started somewhere. Even you may learn to use the GNU debugger (gdb) in time.

Non-code commits

Similarly to reporting bugs, non-code commits may be low-hanging fruit to you, but crucial to a project’s success. If you can write technical documentation, howtos, or do translations to your native language, such contributions to Free Software are extremely welcome. Even trivial stuff like fixing typos in a translated piece of software should be reported. No fix is too small. I once did a single-word commit to GPG: a single-word typo fix in their Norwegian translation. Also, write blog posts. Don’t have a blog yet? Get one. Free blog platforms are a dime a dozen.

Use source code tools

Admit it: you already use git in your day job. Using it for documentation or translation should be trivial. If you have not done so already, learn how to clone a project on Github (just google it), grep through the source for what you would like to fix or add, make a branch with your contribution, and open a pull request (again, just google it; see the sketch below). If your changes are not merged at once, be patient, ask for the maintainer’s advice, and listen to their guidelines. Be proud of your contribution, but humble in your request.
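
If you have never done it, the whole fork-and-branch flow fits in a handful of commands (a sketch with placeholder names; fork the project in the web UI first):

$ git clone https://github.com/yourname/someproject.git
$ cd someproject
$ grep -ri "the typo you spotted" docs/
$ git checkout -b fix-typo
$ $EDITOR docs/howto.md
$ git commit -am "docs: fix typo in howto"
$ git push origin fix-typo

Then open the pull request from the project’s web page.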

Feature requests

How a piece of software gets used is not fixed from the start. Perhaps you have ideas for how a piece of code may be used in some other way, or there is some piece missing that is obvious to you, though not mentioned in the project’s future roadmap. Don’t be shy to ask. Report a feature request. Usually this is done the same way as reporting a bug. The worst you can get is that they are not interested, or a request for you to produce the missing code. Which you may do.

Join a project

If your work requires it, and/or your interests and free time allow for it, join a Free Software project.

Distribution work

Upstream distributions like Fedora, Debian, and OpenSuSE (not to mention Arch and Gentoo) are always looking for volunteers, and have sub-projects for packagers, documentation, translation, and even marketing. As long-time players in the field, they have great documentation for getting started. Remember to be patient, ask for advice, and follow the guidelines. Be proud of your contributions, but humble in your requests.

Upstream projects

If you want to join a project, show your interest. Join the project’s social and technical forums. Subscribe to their development mailing lists. Join their IRC channels. Lurk for a while, absorbing the project’s social codes. Some projects are technocracies, and may seem hostile to newbie suggestions without code to back them up. Others are welcoming and supportive. Do some small work showing what you are capable of. Fix things in their wiki documentation. Create pull requests for simple fixes. Join in their discussions. Grow your fame. Stay humble. Listen to the long-time players.

Release your own

Made a cool script at work? A build recipe for some special case? An Ansible playbook automating some often-repeated task? A puppet module? Ask your manager for permission to release it as Free Software. Put GPLv3 or some other OSS license on it, and put it on Github. Write a blog post about it. Tell people about it in social media. Congratulations, you are now an open source project maintainer. Also, Google will find it, and so will other users.

Different OpenPGP DNS entries for the same email

Posted by Miroslav Suchý on February 18, 2021 10:06 AM

In a previous blog post, I wrote How to generate OpenPGP record for DNS (TYPE61). You may be puzzled about what to do when you have different GPG keys with the same email. E.g., the EPEL GPG keys are:

    pub   rsa4096 2019-06-05 [SCE]
          94E279EB8D8F25B21810ADF121EA45AB2F86D6A1
    uid           Fedora EPEL (8) <epel@fedoraproject.org>
    pub   rsa4096 2013-12-16 [SCE]
          91E97D7C4A5E96F17F3E888F6A2FAEA2352C64E5
    uid           Fedora EPEL (7) <epel@fedoraproject.org>
    pub   rsa4096 2010-04-23 [SCE]
          8C3BE96AF2309184DA5C0DAE3B49DF2A0608B895
    uid           EPEL (6) <epel@fedoraproject.org>

Three different GPG keys with the same email. How should we put them in DNS?

This is actually nothing unusual in DNS. You can normally have multiple DNS entries for the same name. E.g.:

    ;; ANSWER SECTION:
    seznam.cz.        274    IN    A    77.75.74.172
    seznam.cz.        274    IN    A    77.75.74.176
    seznam.cz.        274    IN    A    77.75.75.172
    seznam.cz.        274    IN    A    77.75.75.176 
    

When you run the command suggested in the previous blog post, you will get:

    $ gpg2  --export-options export-dane --export epel@fedoraproject.org
    $ORIGIN _openpgpkey.fedoraproject.org.
    ; 94E279EB8D8F25B21810ADF121EA45AB2F86D6A1
    ; Fedora EPEL (8) <epel@fedoraproject.org>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 1141 (
            99020d045cf7cefb011000c93882169651ae7719e9bc99e4c50cf60ada1623b8
            287559e8725add97cde4563a92429fb6760c6e1b99948800d47d81da450cf12b
            0f1e7ee427c31cd4f6467bd27802c6d99b4161a65267d24e189aa4ecf4d34d7c
            f9ea3930569b776bdd886a35cbee759b6b110e937ca9d09aa97928eb973232e2
            d7ae88c91c8baf440ff1ee2a8ead17f26bcc773b10b83c4825e698039cff2954
            ad252d89dc0c440237be83f6e6e16505a121217fcc923e7bcd3a57bd61cdfc8b
            fccb779909bf962fa544a536e54b24f5d59a7f8347ff06473083c0915278b83e
            07fee4b8f70a969f28936064fb8546e279c17b72b84b0a6951cef251f269113f
            aff84ff177b4d0cf5997833440d5154913147354ebf876f8edfdf0c358fdf3a1
            c8a68d2b79e713483a409d5d387df59571c0465a453ad5addd599953426758b7
            9876f6a9e047dff6a4d6649848ec2ab45a6f0380b5295e0365926aaebc4ecfd9
            6e4402bd84e40a8a4280db7f0fc6896751bc758a78d18c44b9c623742ea17dd4
            a409570fbbf5dd5759c4dc9dbe97b2a5b7ff01f472ac744a864523ddc535a589
            0cdc335913bec2e4951eebfd27c5bb7811351905380b8182463fc73b3db9159c
            ee88fca80bd63c3eecc5ba033f0ec7e45764bc8631599b5bf4cc73c85a25f28b
            328528dd3564f273e2521a0e0b4fe423b8fd8a516b071054e5d52d1782130167
            3cc671ba8c9b3f22cbd1a50011010001b4284665646f7261204550454c202838
            29203c6570656c406665646f726170726f6a6563742e6f72673e890238041301
            02002205025cf7cefb021b0f060b090807030206150802090a0b041602030102
            1e01021780000a091021ea45ab2f86d6a166a00ffe319cb5b0b37e0607254342
            48e9ae4d1f5fb2328699bf53b06c2072aca472cbf98e3abc000663ff6f32f744
            d1f72bf44936669ef31354569920b3cf35204c6e811684c6d47e05868ffb67c0
            572bf8f26e8c174997ea5e74ce7e14856a5c0377095226f7d8355f4c6f2bdbc4
            df2a651535cfb3c4599ac8cc50c34e815193447bd731e99cf4cd209e7f6174f0
            36a53d3b1208f213108f8b9f175d74e34009419edbaf7af37ab97204e07e25dd
            7348f56b00fa5e332269a5614748300858aeb1f9a9b5dd52089d7b6a07f0b3e0
            60472c52b9776fc862ca55c38ed2a258fc8e2144b6702aebed40ab9587de2078
            6c4d9eeb3d340d217dddfaf03b258ded698f22873642c35126181f242fa2b064
            00c30abcc406c71017aec8c1bc1998f3c68860d07e1b39212e8d52d38c7d716c
            01b80e78563bfd555cb5fe2d710bc9c3bc6504dc8e3ff80102aea38c2f30faef
            f40d7af681131be57042535def85413c60af867cf019735b4cafca1f5f96447d
            1849bfb8d5f941dcd2a131253a746234486b48c4348785554d9b1dbfe97893f5
            4ff39eb66c1b92b0f2721b6c5cfef4c19451680932c775665f51f20cfb57d7e9
            9fb2fbcb92e127c1ce08cb18fd9effec752cfadfc0d4c23883b43983952e4997
            608b4cf1ba4896b8290b7808803f8cbbcca2b83a327ae383437160876b7eb39f
            0566933de2c4bf10d1ab91d793aaab606347f7f2a2
            )
    
    $ORIGIN _openpgpkey.fedoraproject.org.
    ; 91E97D7C4A5E96F17F3E888F6A2FAEA2352C64E5
    ; Fedora EPEL (7) <epel@fedoraproject.org>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 1141 (
            99020d0452ae6884011000b5529857c0ca8201aacf507fd9b0e16c95a6de4d53
            b6a439396273bde9ffab81907bc40ac139279093b07dd22a9227ce7f73bd8e02
            7e0e5d8bd3eb781f09e5e926ce4cede99790fb0d4165928eef7d956f80a92366
            8d85a199194b44697438eee02308fbefa7485ef70c34597348f8f4d0ddb102a8
            cc6e39675769f669b004e60aba569a8fbc55d5c9fad56bfb9a9688035667fa87
            6b7845da627eaf2a4c7b07154df1a42cfe4fafdd196286438d9941f4da2e70cc
            8d3a00b266340327af9086d7ea2655b731af6a76293dc596e17d110cf729a9f1
            4d664eb8df123896c67e63612bf58bb94bff31d25cbeb66988a24684d30c1b75
            4bbbe3461366309eb2ba185a2460e73b90db17cac49a15e44f8487eb58c060c9
            9fa1df5a14609ee751470bee278e73b5856d2ae94ac3c410a5dd924d6adc4100
            c96915a69cca285ff3c04d38c2044c41f5d933dfc0ebbaf93b2241ccb6c22a96
            400e40c76c5f57774e8bfa044d970e3206d331712ddb8919b57073feab21cf79
            4f4e798f7e84cf41eb8f17694c638e1f146a30ae66f5f9456dac71b439d2ef0e
            fb5aacd6ec78c5ac3d15793c76be78e31aa7211c44297c3a453b36fc2d316181
            889dc913547e5418df958b3b32cd57ea55dde437260d75505c1e95234ba9f41f
            b8544673ecdcc243631e1e723ef9bda1d4750487ea27ab0a18f19bb4f357a3b1
            11311ca95b2473d338b4b70011010001b4284665646f7261204550454c202837
            29203c6570656c406665646f726170726f6a6563742e6f72673e890238041301
            020022050252ae6884021b0f060b090807030206150802090a0b041602030102
            1e01021780000a09106a2faea2352c64e5c7c60ffe2ca6aa6c4e3b4333baa9ca
            d28b1caaee66dbee5a2aade517ef4fdc30f4651414c678659197e27517838f6a
            8b10cb6b2591f1d746fece64c8e8b70b4b97ffa8a7cda632460f8187bd1baea2
            942b5e05d41aeb3e2dbd79a29f1be1ae305b577411b66bb3ad3e6c37cae2a6ab
            d129bfa07498fe84e4a49dbd2452a895ad19c03ac114bfc03cd7244347a79ade
            f48b26dc2632769cb418b440e70a387cf079b8b75484b1140ef87628afff7cc8
            e36b7a342fe48dcfc3746836729f9031094f5107710771379c160a32e6e9918f
            df4166b79b47534efb8801ae2bc87ddbc3e1f24d7cd0475f08521fd00218236f
            1e75d58c7405f621d60292a44487a1f05af69f976d7c9105ed2117f066d7ec56
            ec5a8dbd91732b56036a1cdc839df0015683ee6e041db4964991564091a1eb32
            ef00cadc13eb643878daa6248d5a59b650348e63598a9ce912ae08aebf17df6a
            b61153f6024ed7bfb246e2fa52fdbfbe0cd547544677054b2b09548c63c08d87
            e5a4243058c0004b18aff6e3f260f1dc12b4ab6d11b4c2321f011defbacd4e48
            ec77c591d9a5715f9904ec0cfef355138bcd5f7c477270140c49e3fc6a78e378
            bc23c553a3ac4f795643cc3b8a5e68858f79c26ba37ebea593cf6413325d393c
            dc9fb0c16d4aeedf7090306299dc4bed463b02ec50db2f4d7bab14be65503f1c
            69327e2bb8729815f155276ea97978067ed4b97a0f
            )
    
    $ORIGIN _openpgpkey.fedoraproject.org.
    ; 8C3BE96AF2309184DA5C0DAE3B49DF2A0608B895
    ; EPEL (6) <epel@fedoraproject.org>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 1132 (
            99020d044bd22942011000cb1a7523db8655296ee588537f240e4282dce53672
            aceec060edf55b356af2884dd445b9f5257beb7701ed90ac98f7afe27d9d4b77
            7944da0385eb56c6676096c0935e7bbe92a7d67e8ac3fc4505db1b98f08ffe01
            33d1caee9864b6a15527c55b6368df4e371fc51bc633c601c1717c871d020d95
            11023bdfd71452409ee2028e7ca9c1e75edc4e02f42601b47dcfa43f87f0fe63
            f3e1a08269d4e57854d1c26c2b3d33b1d4541a600b9dcf0dc4d442ec1c81e63a
            a50828f5e4f578b352655ac9e4172d8af97551c0fa3fa0881a491e4f680d86a8
            a77513e78145c65d6ed0ce879f7be7d542a2529bb4eadaaf68a95678e89f7ae3
            0682448b51c92a6e5b977164edf858ceb0b30d813f63250026c7e25de5baab69
            5e8dc1bdf9e4f870051f71dc32ec1d6ae971bfbcd829709f36849f82ba447e8d
            84c78226fc8dd676dc0c13f19b9f076add5273800c4b54585ccc31f03648e621
            d3c094d27dfe2c518e2a7876066d5bc55c042acda0bf8b843ac87e3be72dd46e
            bc11f2f4fdec085ec567271a1e53163fa4622ed1e710c19c6d9f10a501e2d9d4
            5e535c32afd42082e194fa3a925d86abd7211cb1d4466aca6934867cf906b06e
            6fdae1da13d744c4b4f02b852a079706efa930d88ed7d5f76942ff68b080eea0
            6201bd5eb3b857c755dfd15f9bba0b15fbd36e0e66bf843f13a016a3f8248e6b
            7191cba56f202f8c5b58010011010001b4214550454c20283629203c6570656c
            406665646f726170726f6a6563742e6f72673e89023604130102002005024bd2
            2942021b0f060b090807030204150208030416020301021e01021780000a0910
            3b49df2a0608b8951fc60ffc0b18fbfda8edfd7b26fd365af07ca754128d6d1f
            129dae1373f9762b3b8c950d305944f4aebcbdb26a879222140e2a134f7c4813
            f6676c6ec81c7e6a07c66195727fa56e1796b12bdc82b5eecf480bb7c4c618a1
            8644e0282d7a6e52c6f51cdb4a10ec0f438cf5b90da73b3e612d1c83395d08d8
            bc1857c0631888c1294305114c454ec2fb5ce664ac083f59b4c7d8f5b786b7d2
            5ad71103b118a2723707cd1ddfd7dfc2ced29b229b4b93e6663a8e3ee4ef5761
            74b4c84a2219dc070a288d664f1317ed1e92a1cca561f1f7433bfdcfd72d4593
            70a29fc51b08ef4c58c14d476edf57308036510f963b0703edc4cf817bb9ba05
            a7438e128bf86328b80446e0a999ed8531b8b9bf67ca2ba9219ddc90f1ebde75
            5c0d419c2fa9500ff9e498d54878b60fc75b873ce4a559fba88c0620cd11fa2c
            bec8760e051c7040d2da3b156f1d4171483b236224500e4f68fc3f9fad55dabc
            c01d0f0d21fcc91a67e07749f5162ab9abb83a48e5bb13967f4b91692fff4947
            02661fb5fca0443a672f047611640e4997f9da90283119bda96903b3aaaa4e70
            72af9722eeccee6ec9b3176a2a4b1314c62570655eb6db0127d76bfc86612006
            895ac6cfd084ebaefa74c966f05facdd75c4d77419ad79a396873a8f03a58e77
            c09234c65e3881ece50b11279df7a3115ed7cb9345b901bf6977bb4a0d1e901c
            8eeb8075df0a8a519d9d9555
            )
    

I.e., three entries for the owner name 1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec.
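
To check what actually landed in DNS, you can compute the owner name and query it directly. A sketch: per RFC 7929, the owner label is the SHA2-256 digest of the local-part truncated to 28 octets (56 hex characters), under _openpgpkey:

    $ printf '%s' epel | sha256sum | cut -c1-56
    $ dig +short TYPE61 1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec._openpgpkey.fedoraproject.org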

RFC 7929 does not explicitly mention how to handle this situation. The only relevant part is Appendix A on page 18. If you follow it, you may get a slightly different result:

    $ gpg2  --export-options export-minimal,no-export-attributes --export 'epel@fedoraproject.org' | wc -c
    3414
    #^^^ this is the value at the end of the first line (number of octets)
    
    $ gpg2  --export-options export-minimal,no-export-attributes --export 'epel@fedoraproject.org' \
        |hexdump -e '"\t" /1 "%.2x"' -e '/32 "\n"' 
    # this will give you the value of the key
    

When you concatenate the two previous results, you get:

    ; 8C3BE96AF2309184DA5C0DAE3B49DF2A0608B895
    ; EPEL (6) <epel@fedoraproject.org>
    ; 91E97D7C4A5E96F17F3E888F6A2FAEA2352C64E5
    ; Fedora EPEL (7) <epel@fedoraproject.org>
    ; 94E279EB8D8F25B21810ADF121EA45AB2F86D6A1
    ; Fedora EPEL (8) <epel@fedoraproject.org>
    1a355c3f6ac5389917041321fdddee2c0ffc4a38f78adec159a015ec TYPE61 \# 3414 (
            99020d045cf7cefb011000c93882169651ae7719e9bc99e4c50cf60ada1623b8
            287559e8725add97cde4563a92429fb6760c6e1b99948800d47d81da450cf12b
            0f1e7ee427c31cd4f6467bd27802c6d99b4161a65267d24e189aa4ecf4d34d7c
            f9ea3930569b776bdd886a35cbee759b6b110e937ca9d09aa97928eb973232e2
            d7ae88c91c8baf440ff1ee2a8ead17f26bcc773b10b83c4825e698039cff2954
            ad252d89dc0c440237be83f6e6e16505a121217fcc923e7bcd3a57bd61cdfc8b
            fccb779909bf962fa544a536e54b24f5d59a7f8347ff06473083c0915278b83e
            07fee4b8f70a969f28936064fb8546e279c17b72b84b0a6951cef251f269113f
            aff84ff177b4d0cf5997833440d5154913147354ebf876f8edfdf0c358fdf3a1
            c8a68d2b79e713483a409d5d387df59571c0465a453ad5addd599953426758b7
            9876f6a9e047dff6a4d6649848ec2ab45a6f0380b5295e0365926aaebc4ecfd9
            6e4402bd84e40a8a4280db7f0fc6896751bc758a78d18c44b9c623742ea17dd4
            a409570fbbf5dd5759c4dc9dbe97b2a5b7ff01f472ac744a864523ddc535a589
            0cdc335913bec2e4951eebfd27c5bb7811351905380b8182463fc73b3db9159c
            ee88fca80bd63c3eecc5ba033f0ec7e45764bc8631599b5bf4cc73c85a25f28b
            328528dd3564f273e2521a0e0b4fe423b8fd8a516b071054e5d52d1782130167
            3cc671ba8c9b3f22cbd1a50011010001b4284665646f7261204550454c202838
            29203c6570656c406665646f726170726f6a6563742e6f72673e890238041301
            02002205025cf7cefb021b0f060b090807030206150802090a0b041602030102
            1e01021780000a091021ea45ab2f86d6a166a00ffe319cb5b0b37e0607254342
            48e9ae4d1f5fb2328699bf53b06c2072aca472cbf98e3abc000663ff6f32f744
            d1f72bf44936669ef31354569920b3cf35204c6e811684c6d47e05868ffb67c0
            572bf8f26e8c174997ea5e74ce7e14856a5c0377095226f7d8355f4c6f2bdbc4
            df2a651535cfb3c4599ac8cc50c34e815193447bd731e99cf4cd209e7f6174f0
            36a53d3b1208f213108f8b9f175d74e34009419edbaf7af37ab97204e07e25dd
            7348f56b00fa5e332269a5614748300858aeb1f9a9b5dd52089d7b6a07f0b3e0
            60472c52b9776fc862ca55c38ed2a258fc8e2144b6702aebed40ab9587de2078
            6c4d9eeb3d340d217dddfaf03b258ded698f22873642c35126181f242fa2b064
            00c30abcc406c71017aec8c1bc1998f3c68860d07e1b39212e8d52d38c7d716c
            01b80e78563bfd555cb5fe2d710bc9c3bc6504dc8e3ff80102aea38c2f30faef
            f40d7af681131be57042535def85413c60af867cf019735b4cafca1f5f96447d
            1849bfb8d5f941dcd2a131253a746234486b48c4348785554d9b1dbfe97893f5
            4ff39eb66c1b92b0f2721b6c5cfef4c19451680932c775665f51f20cfb57d7e9
            9fb2fbcb92e127c1ce08cb18fd9effec752cfadfc0d4c23883b43983952e4997
            608b4cf1ba4896b8290b7808803f8cbbcca2b83a327ae383437160876b7eb39f
            0566933de2c4bf10d1ab91d793aaab606347f7f2a299020d0452ae6884011000
            b5529857c0ca8201aacf507fd9b0e16c95a6de4d53b6a439396273bde9ffab81
            907bc40ac139279093b07dd22a9227ce7f73bd8e027e0e5d8bd3eb781f09e5e9
            26ce4cede99790fb0d4165928eef7d956f80a923668d85a199194b44697438ee
            e02308fbefa7485ef70c34597348f8f4d0ddb102a8cc6e39675769f669b004e6
            0aba569a8fbc55d5c9fad56bfb9a9688035667fa876b7845da627eaf2a4c7b07
            154df1a42cfe4fafdd196286438d9941f4da2e70cc8d3a00b266340327af9086
            d7ea2655b731af6a76293dc596e17d110cf729a9f14d664eb8df123896c67e63
            612bf58bb94bff31d25cbeb66988a24684d30c1b754bbbe3461366309eb2ba18
            5a2460e73b90db17cac49a15e44f8487eb58c060c99fa1df5a14609ee751470b
            ee278e73b5856d2ae94ac3c410a5dd924d6adc4100c96915a69cca285ff3c04d
            38c2044c41f5d933dfc0ebbaf93b2241ccb6c22a96400e40c76c5f57774e8bfa
            044d970e3206d331712ddb8919b57073feab21cf794f4e798f7e84cf41eb8f17
            694c638e1f146a30ae66f5f9456dac71b439d2ef0efb5aacd6ec78c5ac3d1579
            3c76be78e31aa7211c44297c3a453b36fc2d316181889dc913547e5418df958b
            3b32cd57ea55dde437260d75505c1e95234ba9f41fb8544673ecdcc243631e1e
            723ef9bda1d4750487ea27ab0a18f19bb4f357a3b111311ca95b2473d338b4b7
            0011010001b4284665646f7261204550454c20283729203c6570656c40666564
            6f726170726f6a6563742e6f72673e890238041301020022050252ae6884021b
            0f060b090807030206150802090a0b0416020301021e01021780000a09106a2f
            aea2352c64e5c7c60ffe2ca6aa6c4e3b4333baa9cad28b1caaee66dbee5a2aad
            e517ef4fdc30f4651414c678659197e27517838f6a8b10cb6b2591f1d746fece
            64c8e8b70b4b97ffa8a7cda632460f8187bd1baea2942b5e05d41aeb3e2dbd79
            a29f1be1ae305b577411b66bb3ad3e6c37cae2a6abd129bfa07498fe84e4a49d
            bd2452a895ad19c03ac114bfc03cd7244347a79adef48b26dc2632769cb418b4
            40e70a387cf079b8b75484b1140ef87628afff7cc8e36b7a342fe48dcfc37468
            36729f9031094f5107710771379c160a32e6e9918fdf4166b79b47534efb8801
            ae2bc87ddbc3e1f24d7cd0475f08521fd00218236f1e75d58c7405f621d60292
            a44487a1f05af69f976d7c9105ed2117f066d7ec56ec5a8dbd91732b56036a1c
            dc839df0015683ee6e041db4964991564091a1eb32ef00cadc13eb643878daa6
            248d5a59b650348e63598a9ce912ae08aebf17df6ab61153f6024ed7bfb246e2
            fa52fdbfbe0cd547544677054b2b09548c63c08d87e5a4243058c0004b18aff6
            e3f260f1dc12b4ab6d11b4c2321f011defbacd4e48ec77c591d9a5715f9904ec
            0cfef355138bcd5f7c477270140c49e3fc6a78e378bc23c553a3ac4f795643cc
            3b8a5e68858f79c26ba37ebea593cf6413325d393cdc9fb0c16d4aeedf709030
            6299dc4bed463b02ec50db2f4d7bab14be65503f1c69327e2bb8729815f15527
            6ea97978067ed4b97a0f99020d044bd22942011000cb1a7523db8655296ee588
            537f240e4282dce53672aceec060edf55b356af2884dd445b9f5257beb7701ed
            90ac98f7afe27d9d4b777944da0385eb56c6676096c0935e7bbe92a7d67e8ac3
            fc4505db1b98f08ffe0133d1caee9864b6a15527c55b6368df4e371fc51bc633
            c601c1717c871d020d9511023bdfd71452409ee2028e7ca9c1e75edc4e02f426
            01b47dcfa43f87f0fe63f3e1a08269d4e57854d1c26c2b3d33b1d4541a600b9d
            cf0dc4d442ec1c81e63aa50828f5e4f578b352655ac9e4172d8af97551c0fa3f
            a0881a491e4f680d86a8a77513e78145c65d6ed0ce879f7be7d542a2529bb4ea
            daaf68a95678e89f7ae30682448b51c92a6e5b977164edf858ceb0b30d813f63
            250026c7e25de5baab695e8dc1bdf9e4f870051f71dc32ec1d6ae971bfbcd829
            709f36849f82ba447e8d84c78226fc8dd676dc0c13f19b9f076add5273800c4b
            54585ccc31f03648e621d3c094d27dfe2c518e2a7876066d5bc55c042acda0bf
            8b843ac87e3be72dd46ebc11f2f4fdec085ec567271a1e53163fa4622ed1e710
            c19c6d9f10a501e2d9d45e535c32afd42082e194fa3a925d86abd7211cb1d446
            6aca6934867cf906b06e6fdae1da13d744c4b4f02b852a079706efa930d88ed7
            d5f76942ff68b080eea06201bd5eb3b857c755dfd15f9bba0b15fbd36e0e66bf
            843f13a016a3f8248e6b7191cba56f202f8c5b58010011010001b4214550454c
            20283629203c6570656c406665646f726170726f6a6563742e6f72673e890236
            04130102002005024bd22942021b0f060b090807030204150208030416020301
            021e01021780000a09103b49df2a0608b8951fc60ffc0b18fbfda8edfd7b26fd
            365af07ca754128d6d1f129dae1373f9762b3b8c950d305944f4aebcbdb26a87
            9222140e2a134f7c4813f6676c6ec81c7e6a07c66195727fa56e1796b12bdc82
            b5eecf480bb7c4c618a18644e0282d7a6e52c6f51cdb4a10ec0f438cf5b90da7
            3b3e612d1c83395d08d8bc1857c0631888c1294305114c454ec2fb5ce664ac08
            3f59b4c7d8f5b786b7d25ad71103b118a2723707cd1ddfd7dfc2ced29b229b4b
            93e6663a8e3ee4ef576174b4c84a2219dc070a288d664f1317ed1e92a1cca561
            f1f7433bfdcfd72d459370a29fc51b08ef4c58c14d476edf57308036510f963b
            0703edc4cf817bb9ba05a7438e128bf86328b80446e0a999ed8531b8b9bf67ca
            2ba9219ddc90f1ebde755c0d419c2fa9500ff9e498d54878b60fc75b873ce4a5
            59fba88c0620cd11fa2cbec8760e051c7040d2da3b156f1d4171483b23622450
            0e4f68fc3f9fad55dabcc01d0f0d21fcc91a67e07749f5162ab9abb83a48e5bb
            13967f4b91692fff494702661fb5fca0443a672f047611640e4997f9da902831
            19bda96903b3aaaa4e7072af9722eeccee6ec9b3176a2a4b1314c62570655eb6
            db0127d76bfc86612006895ac6cfd084ebaefa74c966f05facdd75c4d77419ad
            79a396873a8f03a58e77c09234c65e3881ece50b11279df7a3115ed7cb9345b9
            01bf6977bb4a0d1e901c8eeb8075df0a8a519d9d9555
            )
    

    This is also a valid result. I contacted Paul Wouters, the author of RFC 7929, to clarify this, and he suggested that the first option is preferred. But when working on an implementation, you should keep in mind that the second is an option as well.
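
    For convenience, the two steps can be combined into a small script that emits the whole record in the generic (RFC 3597 \# notation) format. This is only a sketch of the idea; key.bin is a scratch file name of my choosing, and the owner name is left as a placeholder you need to compute per RFC 7929:

    $ gpg2 --export-options export-minimal,no-export-attributes \
        --export 'epel@fedoraproject.org' > key.bin
    $ printf '<owner-name> TYPE61 \\# %d (\n' "$(wc -c < key.bin)"
    $ hexdump -e '"\t" /1 "%.2x"' -e '/32 "\n"' key.bin
    $ printf ')\n'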

    Fedora Community Outreach Revamp Update #4

    Posted by Fedora Community Blog on February 18, 2021 08:00 AM

    We launched the Community Outreach Revamp in July of 2020. The goal of the Revamp is to identify what makes Fedora’s Outreach teams struggle, create a clear plan to move forward based on community feedback, and execute that plan. All of these efforts focus on creating a cohesive, sustainable, and empowering Outreach program for Fedora.

    As of January 2021 the Revamp is now a Fedora Objective! With the Fedora Council approving the objective after community feedback, the Revamp becomes a medium-term goal of the Council. The co-leads of the Revamp, Mariana Balla and Sumantro Mukherjee, are Council members through the completion of the objective. We will provide updates on the Community Outreach Revamp at the regular Council meetings. More details about the Community Outreach Revamp as an Objective can be found on the wiki page.

    On Saturday, 20 February 2021, we will give a talk on the Community Outreach Revamp during the DevConf.CZ conference. The co-leads, as well as Marie Nordin (FCAIC), will share the vision of the Revamp, the progress that has been made so far together with some very interesting findings, and what the expectations are upon the completion of this initiative. Join us to find out more and to ask your questions on Saturday from 2:00pm to 2:25pm CET. This session is part of the Fedora track sessions.

    The post Fedora Community Outreach Revamp Update #4 appeared first on Fedora Community Blog.

    A pre-supplied "custom" keyboard layout for X11

    Posted by Peter Hutterer on February 18, 2021 01:57 AM

    Last year I wrote about how to create a user-specific XKB layout, followed by a post explaining that this won't work in X. But there's a pandemic going on, which is presumably the only reason people haven't all switched to Wayland yet. So it was time to figure out a workaround for those still running X.

    This Merge Request (scheduled for xkeyboard-config 2.33) adds a "custom" layout to the evdev.xml and base.xml files. These XML files are parsed by the various GUI tools to display the selection of available layouts. An entry in there will thus show up in the GUI tool.

    Our rulesets, i.e. the files that convert a layout/variant configuration into the components to actually load, already have wildcard matching [1]. So the custom layout will resolve to the symbols/custom file in your XKB data dir - usually /usr/share/X11/xkb/symbols/custom.

    This file is not provided by xkeyboard-config. It can be created by the user though and whatever configuration is in there will be the "custom" keyboard layout. Because xkeyboard-config does not supply this file, it will not get overwritten on update.

    From XKB's POV it is just another layout and it thus uses the same syntax. For example, to override the +/* key on the German keyboard layout with a key that produces a/b/c/d on the various Shift/Alt combinations, use this:


    default
    xkb_symbols "basic" {
        include "de(basic)"
        key <AD12> { [ a, b, c, d ] };
    };

    This example includes the "basic" section from the symbols/de file (i.e. the default German layout), then overrides the 12th alphanumeric key from the left in the 4th row from the bottom (D) with the given symbols. I'll leave it up to the reader to come up with a less useful example.
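
    To actually try this, drop a snippet like the one above into the symbols/custom file and select the layout. A minimal sketch, assuming the usual XKB data dir mentioned above:

    # as root: create the file that xkeyboard-config deliberately does not ship
    $EDITOR /usr/share/X11/xkb/symbols/custom
    # then, as the user, switch to the new layout at run time
    setxkbmap -layout custom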

    There are a few drawbacks:

    • If the file is missing and the user selects the custom layout, the results are... undefined. For run-time configuration like GNOME it doesn't really matter - the layout compilation fails and you end up with the one the device already had (i.e. the default one built into X, usually the US layout).
    • If the file is missing and the custom layout is selected in the xorg.conf, the results are... undefined. I tested it and ended up with the US layout but that seems more by accident than design. My recommendation is to not do that.
    • No variants are available in the XML files, so the only accessible section is the one marked default.
    • If a commandline tool uses a variant of custom, the GUI will not reflect this. If the GUI goes boom, that's a bug in the GUI.

    So overall, it's a hack[2]. But it's a hack that fixes real user issues and given we're talking about X, I doubt anyone notices another hack anyway.

    [1] If you don't care about GUIs, setxkbmap -layout custom -variant foobar has been possible for years.
    [2] Sticking with the UNIX principle, it's a hack that fixes the issue at hand, is badly integrated, and weird to configure.

    systemd user services and systemctl –user

    Posted by Josef Strzibny on February 18, 2021 12:00 AM

    Most users are familiar with the system-wide systemd services managed by the systemctl command. But systemd services can be made for and entirely controlled by regular unprivileged users.

    Regular services on Fedora and other systems with systemd as an init system are usually found at /etc/systemd/system/ and managed with root privileges. Think of your packaged NGINX web server or the PostgreSQL database. We usually make our own, too, to run our long-running applications.

    These services have one other thing in common. Once enabled, they start and stop with the system boot and shutdown:

    $ sudo systemctl enable nginx.service
    

    That will be an essential distinction. Let’s see the other type of systemd service: one designed to be run by unprivileged users.

    If we drop a systemd service in ~/.config/systemd/user, it will be picked up by systemd as a user service. The big advantage is that the particular user now manages this service without the need for sudo.
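
    As a minimal sketch of what such a unit could look like (myuser.service and the command it runs are placeholders of my choosing; note that user units are typically wanted by default.target rather than multi-user.target):

    $ cat ~/.config/systemd/user/myuser.service
    [Unit]
    Description=My user-level service

    [Service]
    ExecStart=/usr/bin/python3 -m http.server 8081

    [Install]
    WantedBy=default.target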

    It does require the --user flag when invoking any of the systemctl commands:

    $ systemctl --user enable myuser.service
    $ systemctl --user daemon-reload
    $ systemctl --user start myuser.service
    

    We can even use the global /etc/systemd/user/ location, making the service available to all users at once (/usr/lib/systemd/user/ would typically be used by an RPM package on Fedora).

    But what's the real reason for having user services? To answer that, we have to realize when an enabled service starts and stops. If we enable a user service, it starts on user login and runs as long as there is a session open for that user. Once the last session dies, the service stops.

    That makes it perfect for all kinds of desktop software; if you look into /etc/systemd/user/, you might find a lot of GNOME, Evolution, and other desktop services there.

    But what if we want to run a long-running service under an unprivileged user, like a web application service that persists across restarts? For this use-case, we have a User= directive that we can use in system services:

    $ cat /etc/systemd/system/sharing.service
    [Unit]
    Description=Admin's Sharing Server
    
    [Service]
    Type=simple
    User=admin
    WorkingDirectory=/home/admin/shared
    ExecStart=/usr/bin/python3 -m http.server 8080
    Restart=always
    
    [Install]
    WantedBy=multi-user.target
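
    After a daemon-reload, this is enabled and started like any other system service:

    $ sudo systemctl daemon-reload
    $ sudo systemctl enable --now sharing.service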
    

    If the above sharing service ran only when the admin is logged in, it would be quite limiting. That's why it's a system service with the User= directive in the [Service] section. But it feels wrong not to give the admin a way to control this service.

    To this end, we can work around it by allowing the specific commands in a sudoers file:

    START="/bin/systemctl start sharing.service"
    STOP="/bin/systemctl stop sharing.service"
    ENABLE="/bin/systemctl enable sharing.service"
    DISABLE="/bin/systemctl disable sharing.service"
    
    # double quotes so the $START etc. variables expand before writing the file
    echo "admin  ALL=(ALL) NOPASSWD: $START, $STOP, $ENABLE, $DISABLE" | tee /etc/sudoers.d/admin
    chmod 0440 /etc/sudoers.d/admin
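
    With that in place, the admin user can manage the service without a root password, e.g. with sudo /bin/systemctl stop sharing.service (using the exact paths listed in the sudoers entry).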
    

    Downtempo and chill but still interesting: Massive Attack v Mad Professor

    Posted by Fedora Cloud Working Group on February 17, 2021 11:05 PM

    Don’t let the cover fool you, this album is full of chill. No Protection (1995) by Massive Attack v. Mad Professor is nearly 50 minutes...

    The post Downtempo and chill but still interesting: Massive Attack v Mad Professor appeared first on Dissociated Press.

    Collecting logs from Windows using syslog-ng

    Posted by Peter Czanik on February 17, 2021 01:42 PM

    Normally I cover free and open-source software in the syslog-ng blog. But recently quite a few members of the community reached out to me and asked about collecting logs from Windows. So, I prepared a quick overview of the topic. The good news is that syslog-ng supports collecting logs from Windows in multiple ways. The not-so-good news is that Windows support is only available in the commercial version of syslog-ng.

    There are multiple ways for collecting log messages from Windows. You can either install syslog-ng agents on Windows hosts, or you can use the Windows Event Collector (WEC) component of syslog-ng PE.

    Note that a third option was also available for a while but was discontinued due to lack of users: running a syslog-ng server on Windows.

    Syslog-ng agent for Windows

    Windows support was originally added in the form of the syslog-ng agent for Windows. If you want to read logs from files instead of relying purely on the Windows eventlog, then you need to use the syslog-ng agent for Windows. As its name implies, the agent uses the syslog protocol to forward log messages from a Windows host.

    There are many ways to configure the application. By default, you can use an MMC snap-in to configure the syslog-ng agent for Windows. On larger installations it is also possible to configure the agent through an XML file or through domain group policy.

    The drawback of using the syslog-ng agent for Windows is that it is yet another application to push through security and operations teams and install on each of your hosts. If all you need is collecting Windows eventlog then Windows Event Collector is a good alternative.

    Windows Event Collector

    WEC first appeared as part of the syslog-ng PE 7.0.6 release. It can collect Windows eventlog messages pushed as encrypted HTTP messages to the Windows Event Collector. As you can see, forwarding logs from text files is not supported, only the Windows eventlog. On the other hand, this also means that there is no need to install anything on Windows hosts. WEC is an optional component of syslog-ng PE running on Linux. It is installed as part of syslog-ng PE, but you need to configure and enable it separately.

    wec

    WEC collects log messages in XML format and forwards them to syslog-ng PE through a socket. The XML parser of syslog-ng then turns the log messages into name-value pairs. Once you have name-value pairs it is a lot easier to filter log messages and save selected fields into your database.

    WEC clustering

    Starting with the latest version of syslog-ng PE you can also enable WEC clustering. Using multiple WEC instances was an option earlier as well, but starting with syslog-ng PE 7.0.23 WEC can also collect eventlog forwarded through a load balancer. This helps with any scalability issues you might encounter when using WEC with a large number of endpoints.

    wec clustering

    How to get ILI9486 Raspberry Pi 3.5” LCD to work with Fedora ARM

    Posted by Izhar Firdaus on February 17, 2021 06:29 AM

    Fedora ARM provides a full fledged Linux distro with rich ecosystem of server packages available for the ARM ecosystem.

    However, ever since Fedora released an official ARM spin, specialized Fedora derivatives for the Raspberry Pi such as Fedberry and Pidora have pretty much gone out of development, and getting Rpi accessories to work with Fedora ARM, especially those that do not work with the upstream kernel.org kernel, is not quite obvious.

    In this tutorial, we will get an ILI9486/XPT2046 3.5” LCD, which I bought from Lazada, to work with Fedora ARM on an Rpi3, up to the point where a framebuffer device is available for attaching a CLI console to the display. The process in this tutorial should in theory also work with several other LCD devices, but because I only have this 3.5” LCD, it is the only one I can test against.

    Installing Fedora ARM with downstream Raspbian Kernel

    To start, you will need to use dwrobel's spin of the Fedora ARM image instead of the main official image. This image basically replaces the default Fedora kernel, which is built from the upstream kernel.org sources, with a kernel provided by the Raspberry Pi Foundation. It is important to use this kernel to get the LCD to work, because a lot of kernel modules required by Rpi accessories are not yet available in the upstream kernel for various reasons.

    You will notice that the image download page splits the image into two files, fedora-server-aarch64-f33-20201230-sda.raw.xz.aa and fedora-server-aarch64-f33-20201230-sda.raw.xz.ab. You will need to merge these into a single image before you can work with it:

    # merge files
    cat fedora-server-aarch64-f33-20201230-sda.raw.xz.aa \
        fedora-server-aarch64-f33-20201230-sda.raw.xz.ab \
        > fedora-server-aarch64-f33-20201230-sda.raw.xz
    
    # verify checksum
    sha512sum -c fedora-server-aarch64-f33-20201230-sda.raw.xz.sha512sum
    

    After a successful merge, you can copy this image onto your sdcard. Take note that the Fedora ARM Image Installer does not quite work correctly with this image.

    # decompress
    xz -d fedora-server-aarch64-f33-20201230-sda.raw.xz 
    
    # dd into sdcard
    dd if=fedora-server-aarch64-f33-20201230-sda.raw of=/dev/mmcblk0 
    

    Afterwards, you may want to resize the root partition to make use of the whole sdcard. To do this, disconnect and reconnect the sdcard so the partition table is reloaded, and then run parted.

    umount /dev/mmcblk0* # just to be sure
    parted /dev/mmcblk0
    

    Then use resizepart to resize the btrfs partition.

    [root@chihaya ~]# parted /dev/mmcblk0                                     
    GNU Parted 3.3
    Using /dev/mmcblk0
    Welcome to GNU Parted! Type 'help' to view a list of commands.
    (parted) print free                                                       
    Model: SD SN64G (sd/mmc)
    Disk /dev/mmcblk0: 63.9GB
    Sector size (logical/physical): 512B/512B
    Partition Table: msdos
    Disk Flags: 
    
    Number  Start   End     Size    Type     File system  Flags
            1024B   4194kB  4193kB           Free Space
     1      4194kB  516MB   512MB   primary  fat16        boot, lba
     2      516MB   3016MB  2500MB  primary  btrfs
            3016MB  63.9GB  60.8GB           Free Space
    
    (parted) resizepart 2                                                     
    End?  [3016MB]? 63.9GB                                                    
    (parted) q
    Information: You may need to update /etc/fstab.
    
    [root@chihaya ~]#        
    

    The btrfs partition can now be mounted to add your SSH key for remote network access, using the following steps.

    # mount
    mkdir /tmp/disk
    mount /dev/mmcblk0p2 /tmp/disk/
    
    # add ssh key
    mkdir /tmp/disk/root/root/.ssh/
    cat /path/to/ssh/public/key > /tmp/disk/root/root/.ssh/authorized_keys
    chmod og-rwx -R /tmp/disk/root/root/.ssh/
    
    # remove root password 
    sed -i 's/root:x:/root::/' /tmp/disk/root/etc/passwd
    
    

    A static network IP can be configured using:

    cat > /tmp/disk/root/etc/sysconfig/network-scripts/ifcfg-eth0 << EOF
    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=none
    DNS1=8.8.8.8
    DNS2=8.8.4.4
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=no
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME="Default"
    ONBOOT=yes
    IPADDR=10.42.0.210
    PREFIX=24
    GATEWAY=10.42.0.1
    EOF
    

    Finally, unmount the sdcard.

    umount /tmp/disk
    

    If nothing went wrong with the steps, you can now hook up your Rpi to the network, boot it up and connect to it.

    Setting up LCD framebuffer device

    To set up the LCD device, you will need to install an additional device tree overlay for the LCD. The LCD-show GitHub repository from goodtft provides the necessary files; however, the script provided by the repo is designed to work only on Raspbian-based platforms. On Fedora ARM, we'll have to do this manually with the following steps.

    SSH into the Rpi

    ssh 10.42.0.210 -l root
    

    In /boot/config.txt, you will need to add the following lines before the [pi4] section of the config (i.e., in the implicit [all] section).

    hdmi_force_hotplug=1
    dtparam=i2c_arm=on
    dtparam=spi=on
    dtoverlay=tft35a:rotate=90
    

    Download the 3.5” LCD overlay:

    wget https://github.com/goodtft/LCD-show/raw/master/usr/tft35a-overlay.dtb \
       -O /boot/efi/overlays/tft35a.dtbo
    

    Reboot the device.

    The steps above should be enough to get the LCD working, with a framebuffer device at /dev/fb1. The LCD should show a black screen after the reboot.

    You can then use this framebuffer device to run a console, to drive a PyGame application, or to run an X server.

    To test the framebuffer device, you can run the following command; the screen should show random black and white dots.

    cat /dev/urandom > /dev/fb1
    

    You can also configure the LCD device as the default kernel console output by adding fbcon=map:10 fbcon=font:ProFont6x11 to the kernel boot options in /boot/efi/cmdline.txt and rebooting the device, as sketched below.
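
    A minimal sketch of that edit (back up the file first; note that cmdline.txt must remain a single line, so the options are appended to it):

    # keep a backup, then append the fbcon options to the single cmdline
    cp /boot/efi/cmdline.txt /boot/efi/cmdline.txt.bak
    sed -i '1 s/$/ fbcon=map:10 fbcon=font:ProFont6x11/' /boot/efi/cmdline.txt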

    I have yet to try getting an X GUI running on this, but I will definitely share once I get time to try and figure it out.

    I hope this guide is helpful to those who own such an LCD; do share if you found it useful.

    Tuesday Tortitude

    Posted by Fedora Cloud Working Group on February 17, 2021 01:36 AM

    Willow is a great TV buddy. And an excellent cat in general. Just look at that face.

    The post Tuesday Tortitude appeared first on Dissociated Press.

    Cockpit 238

    Posted by Cockpit Project on February 17, 2021 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly.

    Here are the release notes from Cockpit version 238.

    Updates: List outdated software that needs a restart

    Cockpit now uses Tracer to discover outdated services and applications after each software update.

    In some cases, software updates might not require a reboot or any services to be restarted.

    Update no restart

    When necessary, Cockpit will prompt to restart services or schedule a system reboot.

    Update with restart

    Currently, Tracer is only supported in Fedora. In distributions where Tracer is not available, Cockpit will reboot after software updates, as it has previously done.

    Web server: Preserve permissions of administrator-provided certificates

    Cockpit's web server supports and encourages using your own TLS certificate and key in /etc/cockpit/ws-certs.d/.

    For enhanced compatibility with other software, Cockpit has been changed to only adjust permissions for certificates it creates and manages itself. These specific files are 0-self-signed.cert and 10-ipa.cert. If you do provide your own certificate, you must ensure these files are readable by the cockpit-ws user or group, in addition to other software using the certificates.
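
    For example, a minimal sketch for a hypothetical administrator-provided certificate file named 50-mycert.cert (a name of my choosing):

    # make the certificate readable by the cockpit-ws group
    sudo chgrp cockpit-ws /etc/cockpit/ws-certs.d/50-mycert.cert
    sudo chmod 640 /etc/cockpit/ws-certs.d/50-mycert.cert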

    System: Performance page shows busiest CPU cores

    CPU cores metrics

    Machines: VM disk creation supports a custom path

    Custom paths are now supported when adding disks to a VM. Supported file types include disk files (qcow, qcow2, and raw images) and CD/DVD ISOs (which will be attached as a CD-ROM device).

    Machines

    Try it out

    Cockpit 238 is available now:

    Is CentOS Dead? The reports of its demise are greatly exaggerated.

    Posted by Izhar Firdaus on February 16, 2021 01:31 PM

    End of last year, CentOS project announced that they are shifting their focus to CentOS Stream.

    Not surprisingly, this triggered a major backlash from users worldwide, especially from those who barely understand the change and merely react to the perception created by various online media, who are not even contributors to the Fedora or CentOS projects. The general tone is "Red Hat has killed CentOS", "CentOS is dead", and the like.

    However, this is far from the truth. The shift in focus to CentOS Stream primarily means:

    • an announcement that CentOS will no longer produce point releases (8.1, 8.2, 8.3, …)
    • a smoother flow for community contributions ("community" in this sense being the people who work on improving CentOS, such as fixing bugs, and who are not Red Hat employees)

    No more point release

    Historically, CentOS has tracked RHEL: a new CentOS release was created after each new RHEL release was launched. This made sense in the early days of CentOS, when it was primarily a rebranded rebuild of RHEL. But take note: in recent years the point releases of RHEL have primarily been snapshots of a specific state of the RHEL updates repositories, akin to a mid-release Fedora respin. This allows a sysadmin to create a new installation with the latest set of packages and latest bugfixes from the start, rather than installing an old point release and running yum update afterwards. It is a legacy from an era when internet connections were a fraction of the speed available today.

    If you are a sysadmin who regularly runs yum update on your servers, basically nothing will change for you. If you use containers and always ensure your containers run yum update during build, nothing will change for you either. If you only enable security updates, you too will continue to have the same experience.

    Empowering the Community in C(ommunity)Ent(erprise) Linux

    This is something which I believe many users barely appreciate, but it is very important to those of us involved in open source OS-level development, or in supporting clients commercially. The traditional CentOS was not that open to the community, because it tried to keep bug-for-bug compatibility with RHEL. This is a problem when:

    • If you found a bug in a package coming from CentOS and wanted to contribute a fix, in the traditional flow you would have to get the bug fixed upstream in RHEL, as the fix could not be accepted in CentOS directly.
    • If you created some packages and wanted them to be part of the main distribution channels (e.g. AppStreams), there was no clear way to do so.

    While the majority of users involved in application deployment or userspace application development barely have any use for the above flows, those involved with large enterprises and large projects that work closely with the RHEL/CentOS ecosystem have faced frustration with these processes. This is especially true for projects such as OpenStack, OKD, and oVirt, and companies such as Intel and Facebook, who use CentOS, fix bugs in it, and improve it. The new project governance now makes it easier for the community to get involved.

    Remember, the "community" in an open source project are the people who contribute to it, which can be something as simple as promoting the project or supporting and helping other users in community forums, or something more complex such as fixing bugs and building new features. A user who only consumes, while some may argue they are part of the community, barely holds any currency in influencing the direction of the project. One cannot expect a consumer-first direction like what can be seen in commercial software, because there the consumers contribute back in the form of payment to the organization that produces the software, and that is a big influence. When the number of contributors to an open source project drops, or the work starts to be handled mainly by a single person or organization, the project is more likely to die, or to turn to a proprietary model in order to sustain itself.

    Stability

    Some argue that the switch to the rolling release model of CentOS Stream implies that CentOS will no longer be stable. But what exactly is "stable"? In software development, the word "stable" can mean different things. To a consumer, "stable" usually means something that does not crash or have issues. However, to developers, "stable" means a state of no changes or updates.

    "Stable" in the sense of no updates may not translate to "stable" in the sense of no issues. Modern software development usually applies agile methodology. An initial version of a piece of software may not be issue-free in its early releases, and requires constant updates to fix issues. However, constant updates mean the software is not "no-changes stable". Traditional CentOS and RHEL are very resistant to change due to their approach of minimizing updates; however, this results in an ecosystem where new bugfixes cannot easily be rolled out, and creates an environment that is difficult to work with for modern agile developers, who may require the latest versions of components, carrying the latest bugfixes and improvements. Unlike the old days when software was developed in a very slow waterfall model, modern application development demands faster adoption of updates, and there is a need to address this requirement.

    Having said that, "no-changes stability" is also important, though not in the sense of barely having changes. There can be changes, but the changes are managed to minimize service disruption. The way software developers traditionally manage this stability is through good release engineering, which splits software releases into major and minor releases. Within a major release, users of a piece of software can expect no changes in its public interfaces, so that all other software depending on it continues working across all minor releases. Minor releases only deliver bugfixes and introduce new features without breaking existing functionality.

    Circa 2013, there was an initiative to create a better and more agile Fedora, called the Fedora.next initiative. One of the results of that initiative is Fedora Modularity, a new architecture where Fedora has a base OS layer, maintained on its own stream, and "modules", where users may choose a specific major release of a software component and only receive updates related to that major release. The CentOS/RHEL AppStream repositories, introduced in RHEL 8/CentOS 8, were born from this initiative.

    With AppStreams, you can now choose to, for example, enable only PostgreSQL 9 and subscribe to its updates and bugfixes, without risking your system updating to PostgreSQL 12, which is also available in the modularity repos, as sketched below. This creates a platform that allows better "issue-proof stability" while keeping a balance with "no-changes stability".
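
    A quick sketch of how that looks with dnf on an EL8 system (the exact streams and profiles available depend on your repositories):

    # list the streams available for postgresql
    $ dnf module list postgresql
    # pin the system to one major release; updates stay within that stream
    $ sudo dnf module enable postgresql:9.6
    $ sudo dnf module install postgresql:9.6/server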

    With fixes landing in CentOS first before going into RHEL, it also opens up a better opportunity for the community to receive fixes early, and bugfix contributions from the community are easier to accept. Additionally, Fedora still plays a role in ensuring the stability of CentOS/RHEL, because the introduction of major new features and major component versions will still land in Fedora first, instead of CentOS. Fedora still serves as the development ground for the next major release of CentOS/RHEL, through the Fedora ELN (El Niño or Enterprise Linux Next, whichever you fancy) buildroot.

    One might think that with CentOS receiving fixes first before RHEL, CentOS is now a beta for RHEL; however, this is nonsense. Under that analogy, would the traditional setup, where RHEL received fixes before CentOS, make RHEL a beta version of CentOS? We all know that is false.

    What’s with the EOL?

    The announcement of the EOL of traditional CentOS mainly means that the project is stopping the effort to produce point releases of CentOS 8. CentOS Stream 8 will continue to be supported until the RHEL 8 full-support EOL, which is in 2024. CentOS Stream 9 will be developed alongside RHEL 9, and that too will be supported until the RHEL 9 full-support EOL.

    Hope this clears up the FUD about CentOS Stream that has been going around for a while now. You may also check out this presentation for additional information.

    Don’t worry, CentOS will still continue to be a great Community-driven Enterprise Linux distribution.

    fwupd 1.5.6

    Posted by Richard Hughes on February 16, 2021 12:16 PM

    Today I released fwupd 1.5.6, which has the usual smattering of new features and bugfixes. These are some of the more interesting ones:

    With the help of a lot of people we added support for quite a bit of new hardware. The slightly odd GD32VF103 as found in the Longan Nano is now supported, and more of the DFU ST devices with huge amounts of flash. The former should enable us to support the Pinecil device soon and the latter will be a nice vendor announcement in the future. We've also added support for RMI PS2 devices as found in some newer Lenovo ThinkPads, the Starlabs LabTop L4 and the new System76 Keyboard. We've refactored the udev and usb backends into self-contained modules, allowing someone else to contribute new bluetooth peripheral functionality in the future. There are more than a dozen teams of people all working on fwupd features at the moment. Exciting times!

    One problem that has been reported was that downloads from the datacenter in the US were really slow from China, specifically because the firewall was deliberately dropping packets. I assume compressed firmware looks quite a lot like a large encrypted message from a firewall's point of view, and thus it was only letting through ~20% of the traffic. All non-export-controlled public firmware is now also mirrored onto the IPFS, and we experimentally fall back to peer-to-peer downloads where the HTTP download fails. You can prefer IPFS downloads using fwupdmgr --ipfs update, although you need to have a running ipfs daemon on your local computer. If this works well for you, let me know and we might add support for downloading metadata in the future too.

    We’ve fully integrated the fwupd CI with oss-fuzz, a popular fuzzing service from Google. Generating horribly corrupt firmware files has found a few little memory leaks, files that cause fwupd to spin in a loop and even the odd crash. It was a lot of work to build each fuzzer into a small static binary using a 16.04-based container but it was well worth all the hard work. All new PRs will run the same fuzzers checking for regressions which also means new plugins now also have to implement building new firmware (so the test payload can be a few tens of bytes, not 32kB), rather than just parsing it.

    On some Lenovo hardware there's a "useful" feature called Boot Order Lock that means whatever the OS adds as a BootXXXX entry, the old bootlist gets restored on the next boot. This breaks firmware updates using fwupdx64.efi, and until we can detect BOL from a kernel interface we also check if our EFI entry has been deleted by the firmware on the next boot and give the user a more helpful message than just "it failed". Also, on some Lenovo hardware we're limiting the number of UEFI updates to be deployed on one reboot as they appear to have slightly quirky capsule coalesce behavior. In the same vein we're also checking the system clock is set approximately correctly (as in, not set to before 2020…) so we can tell the user to check the clock on the machine rather than just failing with an obscure certificate error.

    Now there are systems that can be switched to coreboot (and back to EDK2 again) we’ve polished up the “switch-branch” feature. We’re also checking both BIOSWE and BLE before identifying systems that can be supported. We’re also including the lockdown status in uploaded UEFI reports and added SBAT metadata to the fwupd EFI binary, which will be required for future versions of shim and grub – so for distro fwupd binaries the packager will need to set meson build options like -Defi_sbat_distro_id=. There are examples in the fwupd source tree.

    Git repo branch name changes

    Posted by Fedora Community Blog on February 16, 2021 08:00 AM

    The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.

    The Fedora Project Vision

    In line with the Fedora vision, we just completed some changes to the git branch names used on src.fedoraproject.org and elsewhere. We removed the "master" branch for those repositories. For rpms and containers, the default branch is now named "rawhide", with a symref (alias) of "main". For flatpaks, "stable" is the default/only branch. The fedpkg tool has been updated on all supported releases to accommodate this change.

    For now, module repos are unchanged. We are awaiting improvements in the branch/repo requesting tool to allow module owners to request only specific named branch streams, since "main" and "rawhide" don't make sense in that context.

    For a list of other impacted repositories, see the change proposal. Of course, other repos have been migrated by their owners independently.

    If you still have a repo checked out on the master branch, you can run: git fetch && git switch main. A slightly fuller sketch is shown below.
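
    A sketch for cleaning up an existing clone, assuming the remote is named origin:

    git fetch --prune origin
    git remote set-head origin -a   # re-detect the remote's default branch
    git switch main
    git branch -d master            # drop the stale local branch once merged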

    This work is part of a larger effort across the technology industry to be more inclusive in the language we use. See Rich Bowen’s Nest With Fedora keynote, for example. If you encounter any trouble, please file a ticket in the infrastructure issue tracker.

    The post Git repo branch name changes appeared first on Fedora Community Blog.