Fedora People

Fedora program update: 2021-03

Posted by Fedora Community Blog on January 22, 2021 09:21 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. The mass rebuild is delayed until Monday. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. In this week’s installment: Announcements, Help wanted, Upcoming meetings, Releases. Announcements: Long-term fail-to-build-from-source packages will be retired […]

The post Fedora program update: 2021-03 appeared first on Fedora Community Blog.

Open source is still not a business model

Posted by Ben Cotton on January 22, 2021 01:07 PM

If you thought 2021 was going to be the year without big drama in the world of open source licensing, you didn’t have to wait long to be disappointed. Two stories have already sprung up in the first few weeks of the year. They’re independent, but related. Both of them remind us that open source is a development model, not a business model.

Elasticsearch and Kibana

A few years ago, it seemed like I couldn’t go to any sysadmin/DevOps conference or meetup without hearing about the “ELK stack”. ELK stands for the three pieces of software involved: Elasticsearch, Logstash, and Kibana. Because it provided powerful aggregation, search, and visualization of arbitrary log files, it became very popular. This also meant that Amazon Web Services (AWS) saw value in providing an Elasticsearch service.

As companies moved more workloads to AWS it made sense to pay AWS for Amazon Elasticsearch Service instead of paying Elastic. This represented what you might call a revenue problem for Elastic. So they decided to follow MongoDB’s lead and change their license to the Server Side Public License (SSPL).

The SSPL is essentially a “you can’t use it, AWS” license. This makes it decidedly not open source. Insultingly, Elastic’s announcement and follow-up messaging include phrases like “doubling down on open”, implying that the SSPL is an open source license. It is not. It is a source-available license. And, as open source business expert VM Brasseur writes, it creates business risk for companies that use Elasticsearch and Kibana.

Elastic is, of course, free to use whatever license it wants for the software it develops. And it’s free to want to make money. But it’s not reasonable to get mad at companies using the software under the license you chose to use for it. Picking a license is a business decision.

Shortly before I sat down to write this post, I saw that Amazon has forked Elasticsearch and Kibana. They will take the last-released versions and continue to develop them as open source projects under the Apache License v2. This is entirely permissible and to be expected when a project makes a significant licensing change. So now Elastic is in danger of a sizable portion of the community moving to the fork and away from their projects. If that pans out, it may end up being more harmful than Amazon Elasticsearch Service ever was.

Nmap Public Source License

The second story actually started in the fall of 2020, but didn’t seem to get much notice until after the new year. The developers of nmap, the widely-used security scanner, began using a new license. Prior to the release of version 7.90, nmap was under a modified version of the GNU General Public License version 2 (GPLv2). This license had some additional “gloss”, but was generally accepted by Linux distributions to be a valid free/open source software license.

With version 7.90, nmap is now under the Nmap Public Source License (NPSL). Version 0.92 of this license contained some phrasing that seemed objectionable. The Gentoo licenses team brought their concerns to the developers in a GitHub issue. Some of their concerns seemed like non-issues to me (and to the lawyers at work I consulted with on this), but one part in particular stood out.

Proprietary software companies wishing to use or incorporate Covered Software within their programs must contact Licensor to purchase a separate license

It seemed clear that the intent was to restrict proprietary software, not otherwise-compliant projects from companies that produce proprietary software. Nonetheless, as it was written, it constituted a violation of the Open Source Definition, and we rejected it for use in Fedora.

To their credit, the developers took the feedback well and quickly released an updated version of the license. They even retroactively licensed affected releases under the updated license. Unfortunately, version 0.93 still contains some problems. In particular, the annotations still express field of endeavor restrictions.

While the license text is the most important part, the annotations still matter. They indicate the intent of the license and guide the interpretation by lawyers and judges. So newer versions of nmap remain unsuitable for some distributions.

Licenses are not for you to be clever

Like with Elastic, I’m sympathetic to the nmap developers’ position. If someone is going to use their project to make money, they’d like to get paid, too. That’s an entirely reasonable position to take. But the way they went about it isn’t right. As noted in the GitHub issue, they’re not copyright attorneys. If they were, the license would be much better.

It seems like the developers are fine with people profiting off of nmap so long as the software used to generate the profit is also open source. In that case, why not just use a professionally-drafted and vetted license like the AGPL? The NPSL already takes the GPLv2 and adds more stuff on top of it, and it’s the stuff on top that’s causing problems.

Trying to write your business model into a software license that purports to be open source is a losing proposition.

The post Open source is still not a business model appeared first on Blog Fiasco.

PHP version 7.4.15RC2 and 8.0.2RC1

Posted by Remi Collet on January 22, 2021 10:09 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and also as base packages.

RPMs of PHP version 8.0.2RC1 are available as SCL in the remi-test repository and as base packages in the remi-php80-test repository for Fedora 31-33 and Enterprise Linux.

RPMs of PHP version 7.4.15RC2 are available as SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 32-33 or the remi-php74-test repository for Fedora 31 and Enterprise Linux.

PHP version 7.3 is now in security mode only, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Parallel installation of version 7.4 as Software Collection:

yum --enablerepo=remi-test install php74

Update of system version 8.0:

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.4:

yum --enablerepo=remi-php74,remi-php74-test update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf --enablerepo=remi-modular-test update php\*

Notice: version 7.4.15RC2 is also in Fedora rawhide for QA.

x86_64 builds now use Oracle Client version 19.9

EL-8 packages are built using RHEL-8.3

EL-7 packages are built using RHEL-7.9

The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

Version 8.0.0RC4 is also available as Software Collections

Software Collections (php74, php80)

Base packages (php)

Convert your filesystem to Btrfs

Posted by Fedora Magazine on January 22, 2021 08:00 AM

Introduction

The purpose of this article is to give you an overview of why, and how, to migrate your current partitions to a Btrfs filesystem. It is a step-by-step walkthrough of how this is accomplished – follow along if you’re curious about doing this yourself.

Starting with Fedora 33, the default filesystem is now Btrfs for new installations. I’m pretty sure that most users have heard about its advantages by now: copy-on-write, built-in checksums, flexible compression options, easy snapshotting and rollback methods. It’s really a modern filesystem that brings new features to desktop storage.

Updating to Fedora 33, I wanted to take advantage of Btrfs, but personally didn’t want to reinstall the whole system for ‘just a filesystem change’. I found little guidance on how exactly to do it, so I decided to share my detailed experience here.

Watch out!

Doing this, you are playing with fire. Hopefully you are not surprised to read the following:

While editing partitions and converting filesystems, your data can be corrupted and/or lost. You can end up with an unbootable system and might be facing data recovery. You can inadvertently delete your partitions or otherwise harm your system.

These conversion procedures are meant to be safe even for production systems – but only if you plan ahead, have backups for critical data and rollback plans. As a sudoer, you can do anything without limits, without any of the usual safety guards protecting you.

The safe way: reinstalling Fedora

Reinstalling your operating system is the ‘official’ way of converting to Btrfs, recommended for most users. Therefore, choose this option if you are unsure about anything in this guide. The steps are roughly the following:

  1. Back up your home folder and any data that might be used in your system like /etc. [Editor’s note: VMs too]
  2. Save your list of installed packages to a file.
  3. Reinstall Fedora by removing your current partitions and choosing the new default partitioning scheme with Btrfs.
  4. Restore the contents of your home folder and reinstall the packages using the package list.

For detailed steps and commands, see this comment by a community user at ask.fedoraproject.org. If you do this properly, you’ll end up with a system that is functioning in the same way as before, with minimal risk of losing any data.
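
As a minimal sketch of steps 2 and 4 above (assuming dnf; the file name is your choice):

dnf repoquery --userinstalled --qf '%{name}' > ~/pkglist.txt   # step 2: save your package list
sudo dnf install $(cat ~/pkglist.txt)                          # step 4: reinstall after the fresh install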

Pros and cons of conversion

Let’s clarify this real quick: what kind of advantages and disadvantages does this kind of filesystem conversion have?

The good

  • Of course, no reinstallation is needed! Every file on your system will remain the exact same as before.
  • It’s technically possible to do it in place, i.e. without a backup.
  • You’ll surely learn a lot about Btrfs!
  • It’s a rather quick procedure if everything goes according to plan.

The bad

  • You have to know your way around the terminal and shell commands.
  • You can lose data, see above.
  • If anything goes wrong, you are on your own to fix it.

The ugly

  • You’ll need about 20% of free disk space for a successful conversion. But for the complete backup & reinstall scenario, you might need even more.
  • You can customize everything about your partitions during the process, but you can also do that from Anaconda if you choose to reinstall.

What about LVM?

LVM layouts have been the default for the last few Fedora releases. If you have an LVM partition layout with multiple partitions e.g. / and /home, you would somehow have to merge them in order to enjoy all the benefits of Btrfs.

If you choose to, you can convert partitions to Btrfs individually while keeping the volume group. Nevertheless, one of the advantages of migrating to Btrfs is to get rid of the limits imposed by the LVM partition layout. You can also use the send-receive functionality offered by Btrfs to merge the partitions after the conversion.

See also on Fedora Magazine: Reclaim hard-drive space with LVM, Recover your files from Btrfs snapshots and Choose between Btrfs and LVM-ext4.

Getting acquainted with Btrfs

It’s advisable to read at least the following to have a basic understanding about what Btrfs is about. If you are unsure, just choose the safe way of reinstalling Fedora.

Must reads

Useful resources

Conversion steps

Create a live image

Since you can’t convert mounted filesystems, we’ll be working from a Fedora live image. Install Fedora Media Writer and ‘burn’ Fedora 33 to your favorite USB stick.

Free up disk space

btrfs-convert will recreate filesystem metadata in your partition’s free disk space, while keeping all existing ext4 data at its current location.

Unfortunately, the amount of free space required cannot be known ahead – the conversion will just fail (and do no harm) if you don’t have enough. Here are some useful ideas for freeing up space:

  • Use baobab to identify large files & folders to remove. Don’t manually delete files outside of your home folder if possible.
  • Clean up old system journals: journalctl --vacuum-size=100M
  • If you are using Docker, carefully use tools like docker volume prune, docker image prune -a
  • Clean up unused virtual machine images inside e.g. GNOME Boxes
  • Clean up unused packages and flatpaks: dnf autoremove, flatpak remove --unused
  • Clean up package caches: pkcon refresh force -c -1, dnf clean all
  • If you’re confident enough, you can cautiously clean up the ~/.cache folder.
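
Before kicking off the conversion, it’s worth checking how much headroom you actually have (the rough rule of thumb mentioned earlier is about 20% free). A quick check, assuming the filesystem to convert is mounted at /:

df -h /                 # free space on the filesystem you want to convert
sudo du -sh ~/.cache    # a common spot where gigabytes quietly accumulate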

Convert to Btrfs

Save all your valuable data to a backup, make sure your system is fully updated, then reboot into the live image. Run gnome-disks to find out your device handle e.g. /dev/sda1 (it can look different if you are using LVM). Check the filesystem and do the conversion: [Editors note: The following commands are run as root, use caution!]

$ sudo su -
# fsck.ext4 -fyv /dev/sdXX
# man btrfs-convert (read it!)
# btrfs-convert /dev/sdXX

This can take anywhere from 10 minutes to even hours, depending on the partition size and whether you have a rotational or solid-state hard drive. If you see errors, you’ll likely need more free space. As a last resort, you could try btrfs-convert -n.

How to roll back?

If the conversion fails for some reason, your partition will remain ext4 or whatever it was before. If you wish to roll back after a successful conversion, it’s as simple as

# btrfs-convert -r /dev/sdXX

Warning! You will permanently lose your ability to roll back if you do any of these: defragmentation, balancing or deleting the ext2_saved subvolume.

Due to the copy-on-write nature of Btrfs, you can otherwise safely copy, move, and even delete files and create subvolumes, because ext2_saved keeps referencing the old data.

Mount & check

Now the partition should have a Btrfs filesystem. Mount it and look around your files… and subvolumes!

# mount /dev/sdXX /mnt
# man btrfs-subvolume (read it!)
# btrfs subvolume list /mnt (-t for a table view)

Because you have already read the relevant manual page, you should now know that it’s safe to create subvolume snapshots, and that you have the ext2_saved subvolume as a handy backup of your previous data.

It’s time to read the Btrfs sysadmin guide, so that you won’t confuse subvolumes with regular folders.

Create subvolumes

We would like to achieve a ‘flat’ subvolume layout, which is the same as what Anaconda creates by default:

toplevel (volume root directory, not to be mounted by default)
  +-- root (subvolume root directory, to be mounted at /)
  +-- home (subvolume root directory, to be mounted at /home)

You can skip this step, or decide to aim for a different layout. The advantage of this particular structure is that you can easily create snapshots of /home, and have different compression or mount options for each subvolume.

# cd /mnt
# btrfs subvolume snapshot ./ ./root2
# btrfs subvolume create home2
# cp -a home/* home2/

Here, we have created two subvolumes. root2 is a full snapshot of the partition, while home2 starts as an empty subvolume and we copy the contents inside. (This cp command doesn’t duplicate data so it is going to be fast.)

  • In /mnt (the top-level subvolume) delete everything except root2, home2, and ext2_saved.
  • Rename root2 and home2 subvolumes to root and home.
  • Inside root subvolume, empty out the home folder, so that we can mount the home subvolume there later.

It’s simple if you get everything right!
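
For illustration, the three steps above could look like this sketch (a sketch only – verify the names with ls before deleting anything):

cd /mnt
for entry in *; do
    case "$entry" in
        root2|home2|ext2_saved) ;;    # keep the new subvolumes and the backup
        *) rm -rf -- "$entry" ;;      # remove the old top-level contents
    esac
done
mv root2 root
mv home2 home
rm -rf root/home/*                    # empty /home inside the root subvolume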

Modify fstab

In order to mount the new volume after a reboot, fstab has to be modified by replacing the old ext4 mount lines with new ones.

You can use the command blkid to learn your partition’s UUID.

UUID=xx / btrfs subvol=root 0 0
UUID=xx /home btrfs subvol=home 0 0

(Note that the two UUIDs are the same if they are referring to the same partition.)

These are the defaults for new Fedora 33 installations. In fstab you can also choose to customize compression and add options like noatime.

See the relevant wiki page about compression and man 5 btrfs for all relevant options.
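
For example, enabling zstd compression and noatime might look like this (zstd:1 is just one commonly chosen compression level, not a requirement; adjust to taste):

UUID=xx /     btrfs subvol=root,compress=zstd:1,noatime 0 0
UUID=xx /home btrfs subvol=home,compress=zstd:1,noatime 0 0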

Chroot into your system

If you’ve ever done system recovery, I’m pretty sure you know these commands. Here, we get a shell prompt that is essentially inside your system, with network access.

First, we have to remount the root subvolume to /mnt, then mount the /boot and /boot/efi partitions (these can be different depending on your filesystem layout):

# umount /mnt
# mount -o subvol=root /dev/sdXX /mnt
# mount /dev/sdXX /mnt/boot
# mount /dev/sdXX /mnt/boot/efi

Then we can move on to mounting system devices:

# mount -t proc /proc /mnt/proc
# mount --rbind /dev /mnt/dev
# mount --make-rslave /mnt/dev
# mount --rbind /sys /mnt/sys
# mount --make-rslave /mnt/sys
# cp /mnt/etc/resolv.conf /mnt/etc/resolv.conf.chroot
# cp -L /etc/resolv.conf /mnt/etc
# chroot /mnt /bin/bash
$ ping www.fedoraproject.org

Reinstall GRUB & kernel

The easiest way – now that we have network access – is to reinstall GRUB and the kernel, because that does all the necessary configuration. So, inside the chroot:

# mount /boot/efi
# dnf reinstall grub2-efi shim
# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
# dnf reinstall kernel-core
...or just regenerating the initramfs:
# dracut --kver $(uname -r) --force

This applies if you have a UEFI system. Check the docs below if you have a BIOS system. Let’s check that everything went well before rebooting:

# cat /boot/grub2/grubenv
# cat /boot/efi/EFI/fedora/grub.cfg
# lsinitrd /boot/initramfs-$(uname -r).img | grep btrfs

You should have proper partition UUIDs or references in grubenv and grub.cfg (grubenv may not have been updated; edit it if needed), and you should see insmod btrfs in grub.cfg and the btrfs module in your initramfs image.

See also: Reinstalling GRUB 2 and Verifying the Initial RAM Disk Image in the Fedora System Administration Guide.

Reboot

Now your system should boot properly. If not, don’t panic, go back to the live image and fix the issue. In the worst case, you can just reinstall Fedora from right there.

After first boot

Check that everything is fine with your new Btrfs system. If you are happy, you’ll need to reclaim the space used by the old ext4 snapshot, then defragment and balance the subvolumes. The latter two might take some time and are quite resource intensive.

You have to mount the top level subvolume for this:

# mount /dev/sdXX -o subvol=/ /mnt/someFolder
# btrfs subvolume delete /mnt/someFolder/ext2_saved

Then, run these commands when the machine has some idle time:

# btrfs filesystem defrag -v -r -f /
# btrfs filesystem defrag -v -r -f /home
# btrfs balance start -m /

Finally, there’s a “no copy-on-write” attribute that is automatically set for virtual machine image folders on new installations. Set it if you are using VMs:

# chattr +C /var/lib/libvirt/images
$ chattr +C ~/.local/share/gnome-boxes/images

This attribute only takes effect for new files in these folders. Duplicate the images and delete the originals. You can confirm the result with lsattr.
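
For example, with a hypothetical image name (disk.qcow2):

# cd /var/lib/libvirt/images
# mv disk.qcow2 disk.qcow2.bak
# cp disk.qcow2.bak disk.qcow2   # the fresh copy is created with No_COW
# lsattr disk.qcow2              # should now show the 'C' attribute
# rm disk.qcow2.bak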

Wrapping up

I really hope that you have found this guide useful and that you were able to make a careful and educated decision about whether or not to convert to Btrfs on your system. I wish you a successful conversion process!

Feel free to share your experience here in the comments, or if you run into deeper issues, on ask.fedoraproject.org.

Auto-updating XKB for new kernel keycodes

Posted by Peter Hutterer on January 22, 2021 12:58 AM

Your XKB keymap contains two important parts. One is the mapping from the hardware scancode to some internal representation, for example:

  <AB10> = 61;  

Which basically means Alphanumeric key in row B (from bottom), 10th key from the left. In other words: the /? key on a US keyboard.

The second part is mapping that internal representation to a keysym, for example:

  key <AB10> {        [     slash,    question        ]       }; 

This is the actual layout mapping - once in place this key really produces a slash or question mark (on level2, i.e. when Shift is down).

This two-part approach exists so either part can be swapped without affecting the other. Swap the second part to an exclamation mark and paragraph symbol and you have the French version of this key, swap it to dash/underscore and you have the German version of the key - all without having to change the keycode.

Back in the golden days of everyone-does-what-they-feel-like, keyboard manufacturers (presumably happily so) changed the key codes and we needed model-specific keycodes in XKB. The XkbModel configuration is a leftover from these trying times.

The Linux kernel's evdev API has largely done away with this. It provides a standardised set of keycodes, defined in linux/input-event-codes.h, and ensures, with the help of udev [0], that all keyboards actually conform to that. An evdev XKB keycode is a simple "kernel keycode + 8" [1] and that applies to all keyboards. On top of that, the kernel uses semantic definitions for the keys as they'd be in the US layout. KEY_Q is the key that would, behold!, produce a Q. Or an A in the French layout because they just have to be different, don't they? Either way, with evdev the Xkb Model configuration largely points to nothing and only wastes a few cycles with string parsing.
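
You can see this mapping for yourself on a typical Fedora box (assuming kernel-headers and the standard xkeyboard-config paths; adjust if yours differ). KEY_Q is 16 in the kernel header, and 16 + 8 = 24 is <AD01> in the evdev keycodes file:

$ grep -w KEY_Q /usr/include/linux/input-event-codes.h
$ grep '= 24;' /usr/share/X11/xkb/keycodes/evdev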

The second part, the keysym mapping, uses two approaches. One is to use a named #define like the "slash", "question" outlined above (see X11/keysymdef.h for the defines). The other is to use unicode directly like this example from the Devanagari layout:

  key <AB10> { [ U092f, U095f, slash, question ] };

As you can see, mix and match is available too. Using Unicode code points of course makes the layouts less immediately readable but on the other hand we don't need to #define the whole of Unicode. So from a maintenance perspective it's a win.

However, there's a third type of key that we care about: functional keys. Those are the multimedia (historically: "internet") keys that most devices have these days. Volume up, touchpad on/off, cycle display connectors, etc. Those keys are special in that they don't have a Unicode representation and they are always mapped to the same fixed functionality. Even Dvorak users want their volume keys to do what it says on the key.

Because they have no Unicode code points, those keys are defined, historically, in XF86keysyms.h:

  #define XF86XK_MonBrightnessUp    0x1008FF02  /* Monitor/panel brightness */

And mapping a key like this looks like this [2]:

  key <I21>   {       [ XF86Calculator        ] };

The only drawback: every key needs to be added manually. This has been done for some, but not for others. And some keys were added with different names than what the kernel uses [3].

So we're in this weird situation where we have a flexible keymap system but the kernel already tells us what a key does anyway and we don't want to change that. Virtually all keys added in the last decade or so fall into that group of keys, but to actually make use of them requires a #define in xorgproto and an update to the keycodes and symbols in xkeyboard-config. That again introduces discrepancies and we end up in the situation we're in right now: some keys don't work until someone files a bug, and then the users still need to wait for several components to be released and for those releases to trickle into the distributions.

10 years ago would've been a good time to make this more efficient. The situation wasn't that urgent then, most of the kernel keycodes added are >255 which means they cannot be used in X anyway. [4] The second best time to do it is now. What we need is basically a pass-through from kernel code to symbol and that's currently sitting in various MRs:

- xkeyboard-config can generate the keycodes/evdev file based on the list of kernel keycodes, so all kernel keycodes are mapped to internal representations by default

- xorgproto has reserved a range within the XF86 keysym reserved range for pass-through mappings, i.e. any KEY_FOO define from the kernel is mapped to XF86XK_Foo with a specific value [5]. The #define format is fixed so it can be parsed.

- xkeyboard-config parses these XF86 keysyms and sets up a keysym mapping in the default keymap.

This is semi-automatic, i.e. there are helper scripts that detect changes and notify us, hooked into the CI, but the actual work must be done manually. These keysyms immediately become set-in-stone API so we don't want some unsupervised script to go wild on them.

There's a huge backlog of keys to be added (dating to kernels pre-v3.18) and I'll go through them one-by-one over the next weeks to make sure they're correct. But eventually they'll be done and we have a full keymap for all kernel keys to be immediately available in the XKB layout.

The last part of all of this is a calendar reminder for me to do this after every new kernel release. Let's hope this crucial part isn't the first to fail.

[0] 60-keyboard.hwdb has a mere ~1800 lines!
[1] Historical reasons, you don't want to know. *jedi wave*
[2] the XK_ part of the key name is dropped, implementation detail.
[3] This can also happen when a kernel define is renamed/aliased but we cannot easily do so for this header.
[4] X has an 8 bit keycode limit and that won't change until someone develops XKB2 with support for 32-bit keycodes, i.e. never.

[5] The actual value is an implementation detail and no client should care


Cockpit 236

Posted by Cockpit Project on January 22, 2021 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit version 236.

fslist channels: Include properties of changed files

The fslist channel API lists files in a directory, watches for further changes to files, and sends events when changes happen. These notifications now include additional properties about the file: the owner and group name, file size, and time of last modification.

This contribution from leomoty can serve as the basis of a third-party file manager page. Thanks, leomoty!

Internal stabilization work

The Cockpit authors spent a lot of time on refactoring, dropping some obsolete dependencies like jquery-flot, replacing some custom React components with standard PatternFly ones, build system simplifications, and test stabilization. These don’t have a user-visible effect, but will make Cockpit easier to develop in the future.

Try it out

Cockpit 236 is available now:

Fedora Loves Python 2020 report

Posted by Fedora Community Blog on January 21, 2021 03:10 PM
Fedora Loves Python

Inspired by a similar report from the Copr team, I’ve decided to look back at 2020 from the perspective of Python in Fedora (and a little bit in RHEL/CentOS+EPEL as well). Here are the things we have done in Fedora (and EL) in 2020. By we I usually mean the Python Maint team at Red Hat and/or the Fedora’s Python […]

The post Fedora Loves Python 2020 report appeared first on Fedora Community Blog.

January Blog Post (New Year New Bloggin!)

Posted by Madeline Peck on January 20, 2021 10:25 PM

It’s been quite a while since I posted an update on here! December and the beginning of January I spent working on the final pieces for my illustration thesis, as well as the process book that captured my research and how they were all created. You can check out the finished book here at madelinepeck.adobe.com. Super happy with how it turned out and now I can turn to other cool projects!

I’ve been continuing my work for the coloring book. I’ve been experimenting using the bezier tool with the pen vector tool, as it really helps with the roundness and closed lines coloring books tend to have. The pages are always more detailed than I remembered having planned for, but that’s alright; perhaps I’ll just need to make some compromises and remember that no one will have access to the secret super version that lives inside my brain.

On the side I’ve also been working on the thumbnails for the Element background wallpaper.

[Image: Wallpaper ideas.png]

This is what I’ve come up with so far for ideas, and I’m looking forward to taking them into the proper sized folder and seeing how I can clean these ideas up for the final version. I also am going to start some other design work for the rest of the app :D

Today I actually also attended the super low key design team video chat, which involved a brainstorm session for Fedora 35 that was exciting!

Fedora Design Team Sessions Live: Session #1

Posted by Máirín Duffy on January 20, 2021 10:00 PM

As announced in the Fedora Community Blog, today we had our inaugural Fedora Design Team Live Session 🙂
Thanks to everyone who joined! I lost count of how many folks we had participate – we had at least 9 – and we had a very productive F35 wallpaper brainstorming session!

Here’s a quick recap:

1. fedora.element.io background image thumbnail sketches review

[Image: 4 thumbnail sketches – one of connected houses in the clouds, one with a glowing sound wave city, one with trees reaching to light and networking, another with a path of light leading to a glowing city]

Ticket: https://pagure.io/design/issue/705

We took a look at Madeline’s thumbnail sketches for the upcoming Fedora
element.io deployment.

  • We looked at Mozilla’s login screen to see what they did: mozilla.element.io.
  • I gave a little background on the project and the concept of the initial thumbnail sketch I did with lights: the gist is users joining the chat server are in the dark foreground approaching this glowing city full of communication / vitality, and the shape/glow of the buildings’ skyline is meant to evoke the shape of a sound wave of a voice.
  • Madeline talked us through her 4 thumbnail sketches and their concepts – she made some refinements to the glowing city concept, and also riffed off of the idea with the neat buildings hanging together with the water reflection and clouds (the chat is in the cloud!), and the natural/calm vibe of looking up through the trees.
  • We all pretty much enjoyed all of them and there was no clear favorite.
  • One point that was brought up is that the login dialog will be in the center of the image, so Madeline noted that her final design will need to work well with a dialog of unknown size floating over top of it.
  • The thumbnail idea with the trees relates to the F34 wallpaper, which we discussed next.

2. F34 Wallpaper WIP

[Image: digital painting in watercolor style of a layered forest around a lake with sunlight streaming from the back through the trees]

Ticket: https://pagure.io/design/issue/688

  • The basic background here is that we’re going for a calm, tranquil image as a counter to the craziness of the past year or so, with the pandemic and other stuff going on. The inspiration here was Ub Iwerks, who invented the multiplane camera, so the key element of this image/composition is the built-in layered effect. The technique is meant to be watercolor-style, and watercolor as a medium heavily relies on layering itself.
  • Marie noticed a halo behind the tree on the left; it stands out too much so I’ll adjust it.
  • Neal noted we should package up *something* for the night version for beta if we intend to have a time-of-day wallpaper – even if it’s rough, it’s better than nothing.

3. F35 Brainstorm

Mindmap with various concepts about Mae Jemison

Ticket: https://pagure.io/design/issue/707

I wrote up a summary in the ticket, but essentially we did a
collaborative mindmap, dropping links to images and coming up with
related ideas, basically creating a big map of brain food to keep
building ideas. Next steps are to do some sketches based on the 4 sort-of
themes that shook out of the mind map exercise.

Tech notes for future sessions:

  • Need to update the join link:
    The link I put on the community blog and here to join dumped folks
    straight to Jitsi without the Matrix chat. Next time we should just use
    the Matrix chat as the main link to join, because the Jitsi window
    doesn’t have a separate chat for link dumping so we need to use the
    Matrix chat for that.
  • Jitsi does not have built-in recording so the session wasn’t recorded.
    It’s my understanding it’s possible to record using OBS, so I will try that next session.

I can’t think of anything else. Feel free to reply here if there’s
something I forgot or if you had some technical or format issues / ideas
/ feedback to make our next session better.

Thanks again to everyone who joined in 🙂 I had a blast!

Standards are boring

Posted by Marcin 'hrw' Juszkiewicz on January 20, 2021 04:33 PM

We have made Arm servers boring.

Jon Masters

Standards are boring. Satisfied users may not want to migrate to other boards the market tries to sell them.

So the Arm market is flooded with piles of small board computers (SBCs). Often they are compliant with standards only when it comes to connectors.

But our hardware is not standard

It is not a matter of ‘let’s produce UEFI-ready hardware’ but rather ‘let’s write EDK2 firmware for the boards we already have’.

Look at the Raspberry Pi then. It is shitty hardware but it got popular. And a group of people wrote UEFI firmware for it. Probably without vendor support even.

Start with EBBR

Each new board should be EBBR compliant from the start. Which is easy — do ‘whatever hardware’ and put a properly configured U-Boot on it. Upstreaming support for your small device should not be hard, as you often base it on some already existing hardware.

Add 16MB of SPI flash to store firmware. Your users will be able to boot an ISO without wondering where on the boot media they need to write bootloaders.

Then work on EDK2 for the board. Do SMBIOS (easy) and keep your existing Device Tree. You are still EBBR. Remember to upstream your work — some people will complain, some will improve your code.

Add ACPI, go SBBR

The next step is moving from Device Tree to ACPI. It may take some time to understand why there are so many tables and what ASL is. But as several other systems show, it can be done.

And this brings you to SBBR compliance. Or SystemReady ES if you like marketing.

SBSA for future design

Doing a new SoC tends to be “let us take the previous one and improve it a bit”. So this time change it a bit more and make your next SoC compliant with SBSA level 3. All needed components are probably already included in your Arm license.

Grab the EDK2 support you did for the previous board. Look at the QEMU SBSA Reference Platform support, look at other SBSA compliant hardware. Copy and reuse their drivers, their code.

Was it worth it?

At the end you will have SBSA compliant hardware running SBBR compliant firmware.

Congratulations, your board is SystemReady SR compliant. Your marketing team may write that you are on the same list as Ampere with their Altra server.

Users buy your hardware and can install whatever BSD or Linux distribution they want. Some will experiment with Microsoft Windows. Others may work on porting Haiku or another exotic operating system.

But none of them will have to think “how do I get this shit running?”. And they will tell friends that your device is as boring as it should be when it comes to running an OS on it == more sales.

GNOME Software Jailbreak

Posted by Robert Antoni Buj Gelonch on January 20, 2021 03:06 PM

As many users have noticed, you cannot install all the software you want on your computer via gnome-software. This restriction has been imposed by the developers and only allows the installation of software from selected desktop environments, which are termed compatible projects. If you want to install software from MATE, you can do so by modifying the source code as shown in this merge request:

https://gitlab.gnome.org/GNOME/gnome-software/-/merge_requests/307

Or from command line:

$ gsettings set org.gnome.software compatible-projects "['GNOME', 'KDE', 'XFCE', 'MATE']"
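
You can check the current value first, and revert later by resetting the key, for example:

$ gsettings get org.gnome.software compatible-projects
$ gsettings reset org.gnome.software compatible-projects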

Install RHEL 8.3 for free production use in a VM

Posted by Lukas "lzap" Zapletal on January 20, 2021 12:00 AM

Install RHEL 8.3 for free production use in a VM

In January 2021, Red Hat announced that Red Hat Enterprise Linux can be used at no cost for up to 16 production servers. In this article, I want to provide step-by-step instructions on how to install RHEL 8.3 in a VM.

First off, download the official and updated QCOW2 image named rhel-8.3-x86_64-kvm.qcow2 (the name will likely change later as RHEL moves to higher versions). Creating an account on the Red Hat Portal is free, and there is integration with 3rd party authorization services like GitHub, Twitter or Facebook; however, for successful host registration, a username and password need to be created.

To use RHEL in a cloud environment like Amazon, Azure or OpenStack, simply upload the image and start it. It’s cloud-init ready; make sure to seed the instance with data like usernames, passwords and/or ssh-keys. Note that the root account is locked, and there is no way to log in without seeding initial information.

To start RHEL in QEMU/KVM, libvirt or oVirt (Red Hat Virtualization), several steps must be performed: the root password should be set, cloud-init must be uninstalled, and optionally user account(s) with a password or ssh key should be created.

$ virt-customize -a rhel-8.3-x86_64-kvm.qcow2 --root-password password:redhat --uninstall cloud-init --hostname rhel8-registered
[   0.0] Examining the guest ...
[  11.5] Setting a random seed
[  11.5] Setting the machine ID in /etc/machine-id
[  11.5] Uninstalling packages: cloud-init
[  28.7] Setting passwords
[  45.2] Finishing off

The utility virt-customize, available in Fedora, RHEL and most other Linux distributions, is extremely flexible, allowing pretty much any change from creating users and installing packages to applying updates. We will stick with setting up the root password and hostname for now. Note that the utility performs actions on the original image; make sure to have a copy, just in case.

$ sudo virt-install \
  --name rhel8-registered \
  --memory 2048 \
  --vcpus 2 \
  --disk /var/lib/libvirt/images/rhel-8.3-x86_64-kvm.qcow2 \
  --import \
  --os-variant rhel8.3

Once the host boots up, log in and register the system. Use the same credentials as for accessing the Red Hat Portal:

rhel8# subscription-manager register --username lzap
Password: **********
The system has been registered with ID: XXXXXXXXXXXXXXXXXXXXXXXXXXX

This step can also be done via the virt-customize utility or automated via Ansible. And that’s it! Start installing software or updating the node:

rhel8# dnf update; dnf install vim-enhanced

It is worth noting that subscription-manager runs a daemon which periodically checks in and uploads installed packages and some hardware facts about the system. You can review registered systems on the Subscription Portal. From there, package and errata information can be displayed (security vulnerabilities) as well as repositories, modules and system facts. Note that it is not possible to manage hosts via the Red Hat Portal; nodes can be unregistered though.
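
If you later want to drop a node from your account, unregistering is a single command:

rhel8# subscription-manager unregister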

Although I haven’t tested this, to convert the QCOW2 image to VMware format, perform the following after the image was modified via virt-customize:

# qemu-img convert -f qcow2 -O vmdk rhel-8.3-x86_64-kvm.qcow2 rhel-8.3-x86_64-kvm.vmdk

To install on a bare-metal node, download one of the installation DVDs, attach it to the CDROM device or burn and insert a physical copy (who does that in 2021, really) and follow the on-screen instructions. Use the Red Hat Portal credentials when asked to register the system.

If you plan to manage a fleet of RHEL servers, check out the Foreman project, which is the upstream for the Red Hat Satellite management platform. Note that the content management features provided by the Katello plugin will not work on zero-cost accounts, but other features like provisioning, remote execution and configuration management will work perfectly fine with self-supported Red Hat Enterprise Linux nodes registered directly to the Red Hat Portal.

Red Hat Enterprise Linux is a reliable and trusted Linux operating system, available free of charge for up to 16 production instances. Feel free to ask on the discussion forums. By the way, the Red Hat Portal contains a ton of curated and useful stuff: documentation, articles, howtos, discussions and video content.

Demux, mux and cut MP4 in ffmpeg

Posted by Lukas "lzap" Zapletal on January 20, 2021 12:00 AM

Demux, mux and cut MP4 in ffmpeg

With the upcoming COVID-19 open-source conference season, we record presentations and screencasts almost on a daily basis. Sometimes you need to trim an MP4 video without re-encoding the content. It’s easy with ffmpeg: option -ss specifies the start in the format 00:00:00.000 (or just a number of seconds), and option -t represents the length of the desired section:

ffmpeg -ss 0 -i video-orig.mp4 -t 00:20:51.000 -c copy video-cut.mp4

Sometimes video and audio need to be separated into individual files (aka demuxed). This can be handy when some audio artifacts need to be removed (e.g. noise or buzz) from the audio track (aka stream). This can be done easily:

ffmpeg -i video-orig.mp4 -an -vcodec copy video-demuxed.m4v
ffmpeg -i video-orig.mp4 -vn -acodec copy audio-demuxed.m4a

Note that m4v and m4a are not well-known but standard extensions for MP4 video and audio streams. If you click on such a file, most modern operating systems should play it, assuming it was encoded with a codec the player understands (e.g. h265 or h264 for video and AAC for audio).
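
If you are unsure which codecs a file actually contains, ffprobe (shipped alongside ffmpeg) can tell you, for example:

ffprobe -v error -show_entries stream=index,codec_name,codec_type -of compact video-orig.mp4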

Sometimes, audio needs to be extracted into WAV format rather than M4A for further processing. The rate can be either 44100 or 48000 depending on the source; if you don’t know, just use 48000:

ffmpeg -i video-orig.mp4 -vn -acodec pcm_s16le -ar 48000 -ac 2 audio-demuxed.wav

After corrections are made, let’s say in the Audacity software, audio can be compressed back via the AAC codec:

ffmpeg -i audio-edited.wav -c:a aac -b:a 128k audio-edited.m4a

And finally, audio and video can be joined (muxed) back together. Note that at no point has the video stream been modified (re-encoded), so no quality is lost during this process. In the example above, only the audio was re-encoded back into AAC, which is fast. Muxing and demuxing are also a matter of a few seconds:

ffmpeg -i video-demuxed.m4v -i audio-edited.m4a -c:v copy -c:a copy video-final-version.mp4

All of this will also work for other video containers like MKV. This little tool named ffmpeg is indeed a monster.

Sensible integer scale for Gonum Plot

Posted by Fabio Alessandro Locati on January 20, 2021 12:00 AM
Over the years, I have found myself using Gonum Plot multiple times. I find it a very good and easy-to-use plotting tool for Go. The problem I found myself dealing with, over and over, is the ticker scale. If you know beforehand the values that the application can be expected to produce, it is very straightforward, but the majority of the time this is not the case.

Consuming logs from a Kafka topic using syslog-ng

Posted by Peter Czanik on January 19, 2021 12:31 PM

There is no official Kafka source in syslog-ng, but because this question comes up often enough, I created one. It is just a temporary workaround using the program() source, but it works. It involves Java and installing Kafka manually, but it was fast and reliable in my tests: ingesting 50,000–100,000 messages a second on my laptop in a resource-constrained virtual machine.

Of course, I also tried a more resource-friendly solution, using kafkacat to consume log messages from a Kafka topic. While it worked perfectly on the command line, I could not get it to work with the program() source in syslog-ng.

If you read my blog last week about using templates in the topic() parameter of the Kafka destination, the test environment will look familiar. The only notable difference is that the tool used to consume logs from Kafka is now called within syslog-ng from a program() source.

Before you begin

You do not need the most recent syslog-ng version to use the program() source. Still, I recommend you use recent packages, because they contain many useful bug fixes. You can learn more about where 3rd party syslog-ng packages for major Linux distributions are available at https://www.syslog-ng.com/3rd-party-binaries

Kafka might be available for your Linux distribution of choice, but it was not available in the distributions I checked. For simplicity’s sake, I use the binary distribution from the Kafka website. At the time of writing, the latest available version is kafka_2.13-2.6.0.tgz and it should work equally well on any Linux host with a recent enough (that is, 1.8+) Java. If you use a local Kafka installation, you might need to modify some of the example command lines.
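
You can quickly verify that a suitable Java is available before starting:

java -version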

Downloading and starting Kafka

A proper Kafka installation is outside of the scope of my blog. Here, you will follow relevant parts of the Kafka Quickstart documentation. You will download the archive containing Kafka, extract it, and start its components. You will need network access and four terminal windows.

  1. First, download the latest Kafka release and extract it. The exact version might be different:

wget https://downloads.apache.org/kafka/2.6.0/kafka_2.13-2.6.0.tgz
tar xvf kafka_2.13-2.6.0.tgz

At the end of this process, you will see a new directory named kafka_2.13-2.6.0.

  2. From now on, you will need three extra terminal windows, because first you will start two separate daemons in the foreground to see their messages, and two more windows are required to send messages to Kafka and to receive them.

  3. Start zookeeper in one of the terminal windows. Change to the new Kafka directory and start the application:

cd kafka_2.13-2.6.0/
bin/zookeeper-server-start.sh config/zookeeper.properties
  4. Now you can start the Kafka server in a different terminal window:

cd kafka_2.13-2.6.0/
bin/kafka-server-start.sh config/server.properties

Both applications print lots of data on screen. Normally, the flood of debug information stops after a few seconds and the applications are ready to be used. If there is a problem, you will get back the command line. In this case, you will have to browse through the debug messages and resolve the problem.

Now you can do some minimal functional testing, without syslog-ng involved yet. This way you can make sure that access to Kafka is not blocked by a firewall or other software.

  1. Open yet another terminal window, change to the Kafka directory and start a script to collect messages from a Kafka topic. You can safely ignore the warning message; it appears because the topic does not exist yet.

cd kafka_2.13-2.6.0/
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytest
[2020-12-15 14:41:09,172] WARN [Consumer clientId=consumer-console-consumer-31493-1, groupId=console-consumer-31493] Error while fetching metadata with correlation id 2 : {mytest=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
  2. Now you can start a fourth terminal window to send some test messages. Just enter something after the “>” character and press Enter. Moments later, you should see what you have just entered in the third terminal window:

bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytest
>blah
>blahblah
>blahblahblah
>
  3. You can exit with ^D.

Configuring syslog-ng

Now that you have checked that you can send messages to Kafka and pull those messages with another application, it is time to configure syslog-ng. If syslog-ng on your system is configured to include .conf files from the /etc/syslog-ng/conf.d/ directory, create a new configuration file there. Otherwise, append the configuration below to syslog-ng.conf:

source s_kafka {
  program("/root/kafka_2.13-2.6.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytest");
};
destination d_fromkafka {
  file("/var/log/fromkafka");
};
log {
  source(s_kafka);
  destination(d_fromkafka);
};

The above configuration snippet consumes log messages from Kafka and writes them to a file under the /var/log/ directory. Make sure that settings in the Kafka source match your environment. Here the Kafka archive is extracted under the /root/ directory and the topic name is the same as in the initial tests: “mytest”.

Testing

Once you have reloaded syslog-ng, you are ready for testing.

  1. Staying in the Kafka directory you can start the producer to send messages to Kafka:

leap152:~/kafka_2.13-2.6.0 # bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytest
>blah
>blahblah
>
  2. You can now check whether messages successfully arrived in syslog-ng by tailing the log file:

leap152:/etc/syslog-ng/conf.d # tail -f /var/log/fromkafka

Jan 15 13:03:25 leap152 blah

Jan 15 13:03:29 leap152 blahblah

  3. As usual, you can exit from the producer using ^D.

What is next?

This blog is enough to get you started and learn the basic concepts of Kafka. On the other hand, this environment is far from anything production-ready. For that, you will need a proper Kafka installation and most likely the Kafka consumer in the syslog-ng configuration also requires additional settings. This setup was fast and reliable in my test environment, but that is not a guarantee that it also works well in a production environment. Let me know your experiences!

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

Call for Projects and Mentors: GSoC 2021

Posted by Fedora Community Blog on January 19, 2021 04:22 AM
Fedora Summer Coding 2019

Google Summer of Code (GSoC) is a global program focused on introducing students to open source software development. Students work on a 10 week programming project with an open source organization during their break from a post secondary academic program. Fedora has had great participation and we would like to continue to be a mentoring […]

The post Call for Projects and Mentors: GSoC 2021 appeared first on Fedora Community Blog.

Fedora 33 : Create a simple GUI Button on Unity 3D.

Posted by mythcat on January 18, 2021 10:00 PM
It is very useful to create applications in the Fedora 33 Linux distribution with the Unity 3D game engine.
In today's tutorial, I will show you how to build the simplest GUI with C# and a dynamic button.
To create a button dynamically you need to use GUI.Button.
Open a new Unity 3D project in your Fedora 33 distro.
Add a new Game Object by right-clicking and selecting Create Empty.
Select the Game Object and use Add Component to add a New Script named create_button.
Open it and add this source code:
using UnityEngine;
using UnityEngine.UI;
using System.Collections;

public class create_button : MonoBehaviour {

    void OnGUI()
    {
        if (GUI.Button(new Rect(10, 10, 300, 20), "Test - Dynamically button"))
        {
            Debug.Log("Test button");
        }
    }
}
If you run the Unity project you will see a basic Unity 3D button; see the next image:

Deploy your own Matrix server on Fedora CoreOS

Posted by Fedora Magazine on January 18, 2021 08:00 AM

Today it is very common for open source projects to distribute their software via container images. But how can these containers be run securely in production? This article explains how to deploy a Matrix server on Fedora CoreOS.

What is Matrix?

Matrix provides an open source, federated and optionally end-to-end encrypted communication platform.

From matrix.org:

Matrix is an open source project that publishes the Matrix open standard for secure, decentralised, real-time communication, and its Apache licensed reference implementations.

Matrix also includes bridges to other popular platforms such as Slack, IRC, XMPP and Gitter. Some open source communities are replacing IRC with Matrix or adding Matrix as a new communication channel (see for example Mozilla, KDE, and Fedora).

Matrix is a federated communication platform. If you host your own server, you can join conversations hosted on other Matrix instances. This makes it great for self hosting.

What is Fedora CoreOS?

From the Fedora CoreOS docs:

Fedora CoreOS is an automatically updating, minimal, monolithic, container-focused operating system, designed for clusters but also operable standalone, optimized for Kubernetes but also great without it.

With Fedora CoreOS (FCOS), you get all the benefits of Fedora (podman, cgroups v2, SELinux) packaged in a minimal automatically updating system thanks to rpm-ostree.

To get more familiar with Fedora CoreOS basics, check out this getting started article:

<figure class="wp-block-embed is-type-wp-embed is-provider-fedora-magazine wp-block-embed-fedora-magazine">
Getting started with Fedora CoreOS
<iframe class="wp-embedded-content" data-secret="0JrF0wKeKf" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/getting-started-with-fedora-coreos/embed/#?secret=0JrF0wKeKf" title="“Getting started with Fedora CoreOS” — Fedora Magazine" width="600"></iframe>
</figure>

Creating the Fedora CoreOS configuration

Running a Matrix service requires the following software:

This guide will demonstrate how to run all of the above software in containers on the same FCOS server. All of the services will be configured to run under the podman container engine.

Assembling the FCCT configuration

Configuring and provisioning these containers on the host requires an Ignition file. FCCT generates the Ignition file using a YAML configuration file as input. On Fedora Linux you can install FCCT using dnf:

$ sudo dnf install fcct

A GitHub repository is available for the reader; it contains all the configuration needed and a basic template system to simplify personalization of the configuration. These template values use the %%VARIABLE%% format and each variable is defined in a file named secrets.
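
Once the YAML configuration is assembled (and the template values substituted), generating the Ignition file looks something like this (file names are examples):

fcct --pretty --strict < config.yaml > config.ign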

User and ssh access

The first configuration step is to define an SSH key for the default user core.

variant: fcos                     
version: 1.3.0       
passwd:           
  users:      
    - name: core           
      ssh_authorized_keys:       
        - %%SSH_PUBKEY%%

Cgroups v2

Fedora CoreOS comes with cgroups version 1 by default, but it can be configured to use cgroups v2. Using the latest version of cgroups allows for better control of the host resources among other new features.

Switching to cgroups v2 is done via a systemd service that modifies the kernel arguments and reboots the host on first boot.

systemd:                   
  units:        
    - name: cgroups-v2-karg.service               
      enabled: true       
      contents: |       
        [Unit]       
        Description=Switch To cgroups v2                           
        After=systemd-machine-id-commit.service
        ConditionKernelCommandLine=systemd.unified_cgroup_hierarchy       
        ConditionPathExists=!/var/lib/cgroups-v2-karg.stamp       
        [Service]       
        Type=oneshot       
        RemainAfterExit=yes       
        ExecStart=/bin/rpm-ostree kargs --delete=systemd.unified_cgroup_hierarchy       
        ExecStart=/bin/touch /var/lib/cgroups-v2-karg.stamp       
        ExecStart=/bin/systemctl --no-block reboot       
        [Install]                       
        WantedBy=multi-user.target       
                

Podman pod

Podman supports the creation of pods. Pods are quite handy when you need to group containers together within the same network namespace. Containers within a pod can communicate with each other using the localhost address.

Create and configure a pod to run the different services needed by Matrix:

- name: podmanpod.service
  enabled: true
  contents: |
    [Unit]
    Description=Creates a podman pod to run the matrix services.
    After=cgroups-v2-karg.service network-online.target
    Wants=network-online.target
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=sh -c 'podman pod exists matrix || podman pod create -n matrix -p 80:80 -p 443:443 -p 8448:8448'
    [Install]
    WantedBy=multi-user.target

Another advantage of using a pod is that we can control which ports are exposed to the host from the pod as a whole. Here we expose the ports 80, 443 (HTTP and HTTPS) and 8448 (Matrix federation) to the host to make these services available outside of the pod.

A web server with Let’s Encrypt support

The Matrix protocol is HTTP based. Clients connect to their homeserver via HTTPS. Federation between Matrix homeservers is also done over HTTPS. For this setup, you will need three domains. Using distinct domains helps to isolate each service and protect against cross-site scripting (XSS) vulnerabilities.

  • example.tld: The base domain for your homeserver. This will be part of the user Matrix IDs (for example: @username:example.tld).
  • matrix.example.tld: The sub-domain for your Synapse Matrix server.
  • chat.example.tld: The sub-domain for your Element web client.

To simplify the configuration, you only need to set your own domain once in the secrets file.

You will need to ask Let’s Encrypt for certificates on first boot for each of the three domains. Make sure that the domains are configured beforehand to resolve to the IP address that will be assigned to your server. If you do not know what IP address will be assigned to your server in advance, you might want to use another ACME challenge method to get Let’s Encrypt certificates (see DNS Plugins).

- name: certbot-firstboot.service
  enabled: true
  contents: |
    [Unit]
    Description=Run certbot to get certificates
    ConditionPathExists=!/var/srv/matrix/letsencrypt-certs/archive
    After=podmanpod.service network-online.target nginx-http.service
    Wants=network-online.target
    Requires=podmanpod.service nginx-http.service

    [Service]
    Type=oneshot
    ExecStart=/bin/podman run \
                  --name=certbot \
                  --pod=matrix \
                  --rm \
                  --cap-drop all \
                  --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
                  --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
                  docker.io/certbot/certbot:latest \
                  --agree-tos --webroot certonly

    [Install]
    WantedBy=multi-user.target

Once the certificates are available, you can start the final instance of nginx. Nginx will act as an HTTPS reverse proxy for your services.

- name: nginx.service
  enabled: true
  contents: |
    [Unit]
    Description=Run the nginx server
    After=podmanpod.service network-online.target certbot-firstboot.service
    Wants=network-online.target
    Requires=podmanpod.service certbot-firstboot.service

    [Service]
    ExecStartPre=/bin/podman pull docker.io/nginx:stable
    ExecStart=/bin/podman run \
                  --name=nginx \
                  --pull=always \
                  --pod=matrix \
                  --rm \
                  --volume /var/srv/matrix/nginx/nginx.conf:/etc/nginx/nginx.conf:ro,z \
                  --volume /var/srv/matrix/nginx/dhparam:/etc/nginx/dhparam:ro,z \
                  --volume /var/srv/matrix/letsencrypt-webroot:/var/www:ro,z \
                  --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:ro,z \
                  --volume /var/srv/matrix/well-known:/var/well-known:ro,z \
                  docker.io/nginx:stable
    ExecStop=/bin/podman rm --force --ignore nginx

    [Install]
    WantedBy=multi-user.target

Because Let’s Encrypt certificates have a short lifetime, they must be renewed frequently. Set up a systemd timer to automate their renewal:

- name: certbot.timer
  enabled: true
  contents: |
    [Unit]
    Description=Weekly check for certificates renewal
    [Timer]
    OnCalendar=Sun *-*-* 02:00:00
    Persistent=true
    [Install]
    WantedBy=timers.target
- name: certbot.service
  enabled: false
  contents: |
    [Unit]
    Description=Let's Encrypt certificate renewal
    ConditionPathExists=/var/srv/matrix/letsencrypt-certs/archive
    After=podmanpod.service network-online.target
    Wants=network-online.target
    Requires=podmanpod.service
    [Service]
    Type=oneshot
    ExecStart=/bin/podman run \
                  --name=certbot \
                  --pod=matrix \
                  --rm \
                  --cap-drop all \
                  --volume /var/srv/matrix/letsencrypt-webroot:/var/lib/letsencrypt:rw,z \
                  --volume /var/srv/matrix/letsencrypt-certs:/etc/letsencrypt:rw,z \
                  docker.io/certbot/certbot:latest \
                  renew
    ExecStartPost=/bin/systemctl restart --no-block nginx.service

Synapse and database

Finally, configure the Synapse server and PostgreSQL database.

The Synapse server requires a configuration file and secret keys to be generated. Follow the corresponding section of the GitHub repository’s README file to generate those.

Once these steps are completed, add a systemd service using podman to run Synapse as a container:

- name: synapse.service
  enabled: true
  contents: |
    [Unit]
    Description=Run the synapse service.
    After=podmanpod.service network-online.target
    Wants=network-online.target
    Requires=podmanpod.service
    [Service]
    ExecStart=/bin/podman run \
                  --name=synapse \
                  --pull=always \
                  --read-only \
                  --pod=matrix \
                  --rm \
                  -v /var/srv/matrix/synapse:/data:z \
                  docker.io/matrixdotorg/synapse:v1.24.0
    ExecStop=/bin/podman rm --force --ignore synapse
    [Install]
    WantedBy=multi-user.target

Setting up the PostgreSQL database is very similar. You will also need to provide a POSTGRES_PASSWORD in the repository’s secrets file and declare a systemd service (check here for all the details).
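
For reference, here is a minimal sketch of what such a unit could look like, following the same pattern as the Synapse service; the image tag and the use of /etc/postgresql_synapse as an environment file are illustrative assumptions, not necessarily the repository's exact setup:

- name: postgres.service
  enabled: true
  contents: |
    [Unit]
    Description=Run the PostgreSQL server for Synapse.
    After=podmanpod.service network-online.target
    Wants=network-online.target
    Requires=podmanpod.service
    [Service]
    ExecStart=/bin/podman run \
                  --name=postgres \
                  --pod=matrix \
                  --rm \
                  --env-file /etc/postgresql_synapse \
                  -v /var/srv/matrix/postgres:/var/lib/postgresql/data:z \
                  docker.io/library/postgres:13-alpine
    ExecStop=/bin/podman rm --force --ignore postgres
    [Install]
    WantedBy=multi-user.target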

Setting up the Fedora CoreOS host

FCCT provides a storage directive which is useful for creating directories and adding files on the Fedora CoreOS host.

The following configuration makes sure that the configuration needed by each service is present under /var/srv/matrix. Each service has a dedicated directory. For example /var/srv/matrix/nginx and /var/srv/matrix/synapse. These directories are mounted by podman as volumes when the containers are started.

storage:
  directories:
    - path: /var/srv/matrix
      mode: 0700
    - path: /var/srv/matrix/synapse/media_store
      mode: 0777
    - path: /var/srv/matrix/postgres
    - path: /var/srv/matrix/letsencrypt-webroot
  trees:
    - local: synapse
      path: /var/srv/matrix/synapse
    - local: nginx
      path: /var/srv/matrix/nginx
    - local: nginx-http
      path: /var/srv/matrix/nginx-http
    - local: letsencrypt-certs
      path: /var/srv/matrix/letsencrypt-certs
    - local: well-known
      path: /var/srv/matrix/well-known
    - local: element-web
      path: /var/srv/matrix/element-web
  files:
    - path: /etc/postgresql_synapse
      contents:
        local: postgresql_synapse
      mode: 0700

Auto-updates

You are now ready to set up the most powerful part of Fedora CoreOS ‒ auto-updates. On Fedora CoreOS, the system is automatically updated and restarted approximately once every two weeks for each new Fedora CoreOS release. On startup, all containers will be updated to the latest version (because the pull=always option is set). The containers are stateless and volume mounts are used for any data that needs to be persistent across reboots.

The PostgreSQL container is an exception. It cannot be fully updated automatically because it requires manual intervention for major releases. However, it will still be updated with new patch releases to fix security issues and bugs as long as the specified version is supported (approximately five years). Be aware that Synapse might start requiring a newer version before support ends. Consequently, you should plan a manual update approximately once per year for new PostgreSQL releases. The steps to update PostgreSQL are documented in this project’s README.

To maximise availability and avoid service interruptions in the middle of the day, set an update strategy in Zincati’s configuration to only allow reboots for updates during certain periods of time. For example, one might want to restrict reboots to week days between 2 AM and 4 AM UTC. Make sure to pick the correct time for your timezone, keeping in mind that Fedora CoreOS uses the UTC timezone by default. Here is an example Zincati configuration snippet:

[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Mon", "Tue", "Wed", "Thu", "Fri" ]
start_time = "02:00"
length_minutes = 120
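
Zincati reads TOML drop-ins from /etc/zincati/config.d/. One way to ship the snippet above is as a file in the FCCT configuration, merged into the storage section shown earlier (a sketch; the drop-in file name is an arbitrary choice):

storage:
  files:
    - path: /etc/zincati/config.d/55-updates-strategy.toml
      contents:
        inline: |
          [updates]
          strategy = "periodic"

          [[updates.periodic.window]]
          days = [ "Mon", "Tue", "Wed", "Thu", "Fri" ]
          start_time = "02:00"
          length_minutes = 120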

Creating your own Matrix server by using the git repository

Some sections were lightly edited to make this article easier to read, but you can find the full, unedited configuration in this GitHub repository. To host your own server from this configuration, fill in the secrets values and generate the full configuration with fcct via the provided Makefile:

$ cp secrets.example secrets
${EDITOR} secrets
# Fill in values not marked as generated by synapse

Next, generate the Synapse secrets and include them in the secrets file. Finally, you can build the final configuration with make:

$ make
# This will generate the config.ign file

You are now ready to deploy your Matrix homeserver on Fedora CoreOS. Follow the instructions for your platform of choice from the documentation to proceed.

Registering new users

What’s a service without users? Open registration was disabled by default to avoid issues. You can re-enable open registration in the Synapse configuration if you are up for it. Alternatively, even with open registration disabled, it is possible to add new users to your server via the command line:

$ sudo podman run --rm --tty --interactive \
       --pod=matrix \
       -v /var/srv/matrix/synapse:/data:z,ro \
       --entrypoint register_new_matrix_user \
       docker.io/matrixdotorg/synapse:latest \
       -c /data/homeserver.yaml http://127.0.0.1:8008

Conclusion

You are now ready to join the Matrix federated universe! Enjoy your quickly deployed and automatically updating Matrix server! Remember that auto-updates are made as safe as possible: if anything breaks, you can either roll back the system to the previous version or use the previous container image to work around any bugs while they are being fixed. Being able to quickly set up a system that will be kept updated and secure is the main advantage of the Fedora CoreOS model.

To go further, take a look at this other article that takes advantage of Terraform to generate the configuration and directly deploy Fedora CoreOS on your platform of choice.

Deploy Fedora CoreOS servers with Terraform: https://fedoramagazine.org/deploy-fedora-coreos-with-terraform/

Episode 254 – Right to Repair Security

Posted by Josh Bressers on January 18, 2021 12:01 AM

Josh and Kurt talk about the new right to repair rules in the EU. There’s a strange line between loving the idea of right to repair, but also being horrified as security people at the idea of a device being on the Internet for 30 years.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_254_Right_to_Repair_Security.mp3

Show Notes

Fedora program update: 2021-02

Posted by Fedora Community Blog on January 15, 2021 09:24 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. Self-Contained Change proposals for Fedora 34 are due by Tuesday 19 January. The mass rebuild begins on 20 January. Not next week, but normally I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, […]

The post Fedora program update: 2021-02 appeared first on Fedora Community Blog.

Fedora 33 : Using the finch chat program.

Posted by mythcat on January 15, 2021 01:54 PM
Finch is a TUI (text user interface) IM client for Linux which uses libpurple.
This is very useful when you want to chat and don't have a graphical environment installed on Fedora.
Finch is built using the ncurses toolkit, a library designed especially for building text user interfaces.
This program lets you sign on to Jabber, GoogleTalk, IRC, and other IM networks.
The Pidgin application is built on the same libpurple library.
Let's install it with the DNF tool:
[root@desk mythcat]# dnf install finch.x86_64
...
Installed:
finch-2.14.1-2.fc33.x86_64 libgnt-2.14.0-3.fc33.x86_64

Complete!
You can find all information using the man command:
[mythcat@desk ~]$ man finch 
You can run it easily with this command:
[mythcat@desk ~]$ finch
After that, you can see the account window.
You can see one example with my IRC account:
If it is not shown, you can select it by pressing Alt+A and choosing Accounts.
Use the Tab key and select Modify.
The new window lets you set up your account.
In the next image, you can see some settings for my IRC account.
Another important point is that this interface can be customized using a file named .gntrc.
Open it with your editor; I used vi:
[mythcat@desk ~]$ vi ~/.gntrc
I made these changes to enable the mouse:
[general]
mouse = 1
Note that the manual page says mouse support is still experimental.
I also installed GPM, a cut-and-paste utility and mouse server for virtual consoles.
[root@desk mythcat]# dnf install gpm.x86_64 
...
[root@desk mythcat]# systemctl status gpm
● gpm.service - Console Mouse manager
Loaded: loaded (/usr/lib/systemd/system/gpm.service; enabled; vendor prese>
Active: inactive (dead)
...
[root@desk mythcat]# systemctl enable gpm
[root@desk mythcat]# systemctl status gpm
● gpm.service - Console Mouse manager
Loaded: loaded (/usr/lib/systemd/system/gpm.service; enabled; vendor prese>
Active: inactive (dead) since Fri 2021-01-15 15:35:56 EET; 6min ago
Main PID: 15241 (code=exited, status=0/SUCCESS)
CPU: 2ms

Jan 15 15:35:32 desk systemd[1]: Starting Console Mouse manager...
Jan 15 15:35:32 desk /usr/sbin/gpm[15314]: O0o.oops(): [daemon/check_uniqueness>
Jan 15 15:35:32 desk systemd[1]: Started Console Mouse manager.
Jan 15 15:35:32 desk /usr/sbin/gpm[15314]: gpm is already running as pid 15241
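Note that enabling a unit only makes it start at boot. To enable and start it in one step, you can use:
[root@desk mythcat]# systemctl enable --now gpm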
GPM can be configured to adjust your mouse settings.
I restarted my Fedora 33 Linux distro and it works very well with the mouse.

F33-20210114 updated Live isos released

Posted by Ben Williams on January 15, 2021 01:13 PM


The Fedora Respins SIG is pleased to announce the latest release of Updated F33-20210114-Live ISOs, carrying the 5.10.6-200 kernel.

This set of updated ISOs will save a considerable amount of updates after install for new installs. (New installs of Workstation have about 800MB+ of updates.)

A huge thank you goes out to IRC nicks dowdle, dbristow, luna, yogoyo, Southern-Gentleman for testing these ISOs.

And as always our isos can be found at http://tinyurl.com/Live-respins2

Fedora 32 : Can be better? part 012.

Posted by mythcat on January 15, 2021 11:59 AM
Pidgin is a chat program that lets you log in to accounts on multiple chat networks simultaneously. 
Pidgin can be installed on multiple operating systems and platforms. 
Pidgin is compatible with the following chat networks out of the box: IRC, Jabber/XMPP, Bonjour, Gadu-Gadu, Novell GroupWise Messenger, Lotus Sametime, SILC, SIMPLE, and Zephyr. Can it be better?
The only problem is for a user who needs help from a command-line environment. Obviously, in that case, this graphical application cannot be used. I would suggest building a terminal application like WeeChat dedicated to Fedora users and including IRC channels.
Now, let's install this application.
[root@desk mythcat]# dnf install pidgin.x86_64
Last metadata expiration check: 0:45:32 ago on Sun 27 Sep 2020 04:21:51 PM EEST.
Dependencies resolved.
==============================================================================================
Package Architecture Version Repository Size
==============================================================================================
Installing:
pidgin x86_64 2.13.0-18.fc32 updates 1.4 M
Installing dependencies:
cyrus-sasl-md5 x86_64 2.1.27-4.fc32 fedora 41 k
cyrus-sasl-scram x86_64 2.1.27-4.fc32 fedora 27 k
farstream02 x86_64 0.2.9-1.fc32 fedora 239 k
gtkspell x86_64 2.0.16-20.fc32 fedora 43 k
libgadu x86_64 1.12.2-10.fc32 fedora 110 k
libnice-gstreamer1 x86_64 0.1.17-2.fc32 updates 20 k
libpurple x86_64 2.13.0-18.fc32 updates 5.2 M
meanwhile x86_64 1.1.0-28.fc32 fedora 106 k

Transaction Summary
==============================================================================================
Install 9 Packages

Total download size: 7.2 M
Installed size: 31 M
Is this ok [y/N]: y
Downloading Packages:
...
Complete!

Be lazy, automate: GitHub actions for static blogging

Posted by Pablo Iranzo Gómez on January 14, 2021 09:24 PM

Be lazy, automate: GitHub Actions for static blogging

/me: Pablo Iranzo Gómez ( https://iranzo.github.io )


What is a blog?

A place to share knowledge, interests, tips, etc.

Usually features:

  • images
  • comments from visitors
  • related articles
  • etc.

What are the costs for a blog?

Web costs money:

  • Hosting
  • Domain
  • Maintenance
  • etc.

What is static blogging?

Generate a static webpage:

  • Think of it as rendering templates into HTML
  • Has no requirements on the web server; any simple webserver is enough:
    • Look ma!, no database!
    • Look ma!, no users!
    • Look ma!, no security issues!

What does it mean to us?

  • We write an article
  • Command for generating html from templates is used
  • New files uploaded to webserver

Some Philosophy

Empty your mind, be shapeless, formless, like water. Now you put water in a cup, it becomes the cup. You put water into a bottle, it becomes the bottle. You put water in a teapot, it becomes the teapot. Now water can flow or it can crash. Be water, my friend.

Note: Automation: Be lazy, have someone else doing it for you.


GitHub / GitLab

  • A lot of source code is hosted at GitHub or GitLab, but those are code repositories.
  • BUT: We want a website!!

Pages come to play

Both provide a ‘static’ webserver to be used for your projects for free 😜

GitHub / GitLab serve content from a branch in your repo (usually the ‘yourusername.github.io’ repo)

You can buy a domain and point it to your repo.


Static doesn’t mean end of fun

There are many ‘static’ content generators that provide rich features:

  • styles
  • links
  • image resizing
  • even ‘search’

Even more fun with external services

  • comments
  • mailing lists
  • etc.

Some static generators

The language matters for developing ‘plugins’, not for writing content.

  • Jekyll (Ruby)
  • Pelican (Python)
  • Hugo (Go)

They ‘render’ markdown into html


There’s even more fun

  • Github provides Jekyll support out of the box.
  • GitHub, GitLab, etc. allow plugging in third-party CI
  • Github has Actions

Think about endless possibilities!!!


What are Github actions?

  • Github is a repository for code
    • Allows third-party integration: Travis, Jenkins, bots, etc
  • Github added GitHub Actions
    • For all repos
    • For free
    • Easy to define new actions
    • ‘clonable’ with just a YAML file in the repo

What can we find?

  • CI
  • Formatting
  • Linting
  • Publishing
  • Anything can be combined!!
  • A full Marketplace (https://github.com/marketplace?type=actions)

Some food for thought

  • Repos have branches
  • Repos can have automation
  • External automation like Travis CI can do things for you

Note: We’ve all the pieces to push a new markdown file and have it trigger a website update and publish


Is a static webpage ugly?

  • There are a lot of templates: http://www.pelicanthemes.com
  • Each theme has different feature set
  • Choose wisely! (Small screens, html5, etc)
    • If not, changing themes is quite easy: update, and ‘render’ using a new one.

Travis-ci.org

Automation for projects:

  • Free for OpenSource projects
  • Configured via .travis.yml
  • Some other settings via Web UI (env vars, etc)

Why Actions?

  • Configured within yaml files in the repo
  • GH pre-creates a token that can be used to push new files, branches, etc
  • Without too much hassle, we’ve all the pieces!
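
To make this concrete, here is a minimal sketch of a publishing workflow (e.g. .github/workflows/publish.yml); the Pelican commands and the third-party gh-pages action are illustrative assumptions, not part of the talk:

name: Publish blog
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the markdown sources
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      # 'Render' the markdown into HTML
      - run: pip install pelican markdown
      - run: pelican content -o output -s pelicanconf.py
      # Publish the result using the token GitHub pre-creates for the workflow
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./output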

Wrap up

Ok, automation is ready, our project validates commits, PRs, website generation…

What else?


Test it yourself

Try https://github.com/iranzo/blog-o-matic/

Fork it to your repo and get:

  • minimal setup steps
  • automated setup of Pelican + Elegant theme via a GitHub Action that builds on each commit
  • ready to be submitted to search engines via sitemap, and web claiming


Questions?

Pablo.Iranzo@gmail.com


Toolbox — After a gap of 15 months

Posted by Debarshi Ray on January 14, 2021 08:49 PM

toolbox-logo-landscape

We just released version 0.0.99, and I realized that it’s been a while since I blogged about Toolbox. So it’s time to address that.

Rewritten in Go

About a year ago, Ondřej Míchal single-handedly rewrote Toolbox in Go, making it massively easier to work on the code compared to the previous POSIX shell implementation. Go comes with much nicer facilities for command line parsing, error handling, logging, parsing JSON, and in general is a lot more pleasant to program in. Plus all the container tools in the OCI ecosystem are written in Go anyway, so it was a natural fit.

Other than the obvious benefits of Go, the rewrite immediately fixed a few bugs that were inherently very cumbersome to fix in the POSIX shell implementation. Something as simple as offering a --version option, or avoiding duplicate entries when listing containers or images was surprisingly difficult to achieve in the past.

What’s more, we managed to pull this off by retaining full compatibility with the previous code. So users and distributors should have no hesitation to update.

Towards version 0.1.0

We have been very conservative about our versioning scheme so far due to the inherently prototype nature of Toolbox. All our release numbers have followed the 0.0.x format. We thought that the move to Go deserves at least a minor version bump, but we also wanted to give it some time to shake out any bugs that might have crept in; and implement the features and fix the bugs that have been on our short-term wish list before putting a 0.1.0 stamp on it.

Therefore, we started a series of 0.0.9x releases to work our way towards version 0.1.0. The first one was 0.0.90 which shipped the Go code in March 2020, and we are currently at 0.0.99. Suffice to say that we are very close to the objective.

Rootful Toolboxes

Sometimes a rootless OCI container just isn’t enough because it can’t do things that require privilege escalation beyond the user’s current user ID on the host. This means that various debugging tools, such as Nmap, don’t work.

Therefore, we added support for running toolbox as root in version 0.0.98.1. This should hopefully unlock various new use-cases that were so far not possible when running rootless.
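
Presumably this just means invoking the usual commands with sudo (a hedged example):

$ sudo toolbox create
$ sudo toolbox enter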

When running as root, Toolbox cannot rely on things like the user’s session D-Bus instance or the XDG_RUNTIME_DIR environment variable, because sudo doesn’t create a full-fledged user session that offers them. This means that graphical applications can only work by connecting to an X11 server, but then again running graphical applications as root is never a good idea to begin with.

Red Hat Universal Base Image (or UBI)

We recently took the first step towards supporting operating system distributions other than Fedora as first class citizens. From version 0.0.99 onwards, Toolbox supports Red Hat Enterprise Linux hosts where it will create containers based on the Red Hat Universal Base Image by default.

On hosts that aren’t running RHEL, one can still create UBI containers as:
$ toolbox create --distro rhel --release 8.3

Read more

Those were some of the big things that have happened in Toolbox land since my last update. If you are interested in more details, then you can read Ondřej’s posts where he writes at length about the port to Go and the changes in each of the releases since then.

Discover Fedora Kinoite: a Silverblue variant with the KDE Plasma desktop

Posted by Fedora Magazine on January 14, 2021 08:00 AM

Fedora Kinoite is an immutable desktop operating system featuring the KDE Plasma desktop. In short, Fedora Kinoite is like Fedora Silverblue but with KDE instead of GNOME. It is an emerging variant of Fedora, based on the same technologies as Fedora Silverblue (rpm-ostree, Flatpak, podman) and created exclusively from official RPM packages from Fedora.

Fedora Kinoite is like Silverblue, but what is Silverblue?

From the Fedora Silverblue documentation:

Fedora Silverblue is an immutable desktop operating system. It aims to be extremely stable and reliable. It also aims to be an excellent platform for developers and for those using container-focused workflows.

For more details about what makes Fedora Silverblue interesting, read the “What is Fedora Silverblue?” article previously published on Fedora Magazine. Everything in that article also applies to Fedora Kinoite.

Fedora Kinoite status

Kinoite is not yet an official emerging edition of Fedora. But this is in progress and currently planned for Fedora 35. However, it is already usable! Join us if you want to help.

Kinoite is made from the same packages that are available in the Fedora repositories and that go into the classic Fedora KDE Plasma Spin, so it is as functional and stable as classic Fedora. I’m also “eating my own dog food” as it is the only OS installed on the laptop that I am currently using to write this article.

However, be aware that Kinoite does not currently have graphical support for updates. You will need to be comfortable with using the command line to manage updates, install Flatpaks, or overlay packages with rpm-ostree.

How to try Fedora Kinoite

As Kinoite is not yet an official Fedora edition, there is no dedicated installer for it yet. To get started, install Silverblue and switch to Kinoite with the following commands:

# Add the temporary unofficial Kinoite remote
$ curl -O https://tim.siosm.fr/downloads/siosm_gpg.pub
$ sudo ostree remote add kinoite https://siosm.fr/kinoite/ --gpg-import siosm_gpg.pub

# Optional, only if you want to keep Silverblue available
$ sudo ostree admin pin 0

# Switch to Kinoite
$ sudo rpm-ostree rebase kinoite:fedora/33/x86_64/kinoite

# Reboot
$ sudo systemctl reboot

How to keep it up-to-date

Kinoite does not yet have graphical support for updates in Discover. Work is in progress to teach Discover how to manage an rpm-ostree system. Flatpak management mostly works with Discover but the command line is still the best option for now.

To update the system, use:

$ rpm-ostree update

To update Flatpaks, use:

$ flatpak update

Status of KDE Apps Flatpaks

Just like Fedora Silverblue, Fedora Kinoite focuses on applications delivered as Flatpaks. Some non-KDE applications are available in the Fedora Flatpak registry, but until this selection is expanded with KDE ones, your best bet is to look for them on Flathub (see all KDE Apps on Flathub). Be aware that applications on Flathub may include non-free or proprietary software. The KDE SIG is working on packaging KDE Apps as Fedora-provided Flatpaks but this is not ready yet.

Submitting bug reports

Report issues in the Fedora KDE SIG issue tracker or in the discussion thread at discussion.fedoraproject.org.

Other desktop variants

Although this project started with KDE, I have also already created variants for XFCE, Mate, Deepin, Pantheon, and LXQt. They are currently available from the same remote as Kinoite. Note that they will remain unofficial until someone steps up to maintain them officially in Fedora.

I have also created an additional smaller Base variant without any desktop environment. This allows you to overlay the lightweight window manager of your choice (i3, Sway, etc.). The same caveats as the ones for other desktop environments apply (currently unofficial and will need a maintainer).

Join Us: Design Team Sessions Live!

Posted by Fedora Community Blog on January 14, 2021 08:00 AM

Just before the Christmas holidays, you may have participated in one of three impromptu live design sessions / video chats I held. In the first session, a group of Fedorans did a critique on one of the Fedora 34 wallpaper mockups. In the second session, another group of us did a collaborative design session for […]

The post Join Us: Design Team Sessions Live! appeared first on Fedora Community Blog.

Working with decimals in Elixir

Posted by Josef Strzibny on January 14, 2021 12:00 AM

Integers are not enough, and floats are flawed? Decimals to the rescue! A short guide of what’s important when working with decimals in Elixir.

This post is about the Decimal 2.0 module from the decimal Hex package.

As with every module in Elixir, running h Module and Module.module_info in IEx is a good place to start.

Among other things, it will tell us which standards the Decimal module follows.

Installation

Decimals are not part of the stdlib, so before using the Decimal module, we need to declare it as a dependency in mix.exs:

def deps do
  [{:decimal, "~> 2.0"}]
end

There was some discussion of inclusion in the standard library, but the Elixir core team is reluctant to add something that works well as a separate module.

Contexts

Since using decimals is about getting the right precision in the first place, there is a little setting called context, which can be changed on the fly.

Decimal.Context.get/0 reveals our current settings:

iex(1)> Decimal.Context.get()
%Decimal.Context{
  flags: [],
  precision: 28,
  rounding: :half_up,
  traps: [:invalid_operation, :division_by_zero]
}

You’ll probably care about the maximum precision of calculations and how rounding is done.

Decimal.Context.set/1 can set different values. It accepts the %Decimal.Context struct.
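
For example, here is a quick sketch of switching the rounding mode (:floor is one of the documented rounding atoms):

iex(1)> Decimal.Context.set(%Decimal.Context{Decimal.Context.get() | rounding: :floor})
:ok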

Creation and conversion

We can create a decimal with new/1 or from_float/1 functions:

iex(1)> Decimal.new(55)
#Decimal<55>
iex(2)> Decimal.from_float(20.5)
#Decimal<20.5>

The other way around, we can use to_integer/1 and to_float/2.

To check that something is indeed a decimal:

iex(8)> require Decimal
Decimal
iex(9)> Decimal.is_decimal("2.5")
false
iex(10)> Decimal.is_decimal(Decimal.new(2))
true

is_decimal/1 is a macro; therefore, we have to require the Decimal module first.

Addition and subtraction

We can use add/2 to get the sum of two decimals:

iex(1)> Decimal.add(6, 7)
#Decimal<13>
iex(2)> Decimal.add(Decimal.new(6), Decimal.from_float(7.5))
#Decimal<13.5>
iex(3)> Decimal.add(Decimal.new(6), "7.5")
#Decimal<13.5>

Note that instead of using the from_float/1 function, we can pass a string (“7.5”). We can do that in any of the Decimal functions on this page.

Similarly, we can subtract a value from a decimal with the sub/2 function:

iex(1)> Decimal.sub(Decimal.new(6), 7)
#Decimal<-1>

Multiplication and division

We can multiply with mult/2:

iex(1)> Decimal.mult(Decimal.new(5), Decimal.new(3))
#Decimal<15>
iex(2)> Decimal.mult("Inf", -1)
#Decimal<-Infinity>

And divide with div/2:

iex(1)> Decimal.div(1, 3)
#Decimal<0.3333333333333333333333333333>
iex(2)> Decimal.div(1, 0)
** (Decimal.Error) division_by_zero
    (decimal) lib/decimal.ex:481: Decimal.div/2

Nobody has solved division by zero, so all you get is a Decimal.Error.

Using contexts, we could change the precision to one we care about like this:

iex(1)> Decimal.Context.set(%Decimal.Context{Decimal.Context.get() | precision: 9})
:ok
iex(2)> Decimal.div(1, 3)
#Decimal<0.333333333>

Comparison

It’s super important not to expect comparisons with the < and > operators to work with Decimal: they would compare the underlying structs using Elixir’s term ordering, not the numeric values.

Instead, we can use equal?/2 and compare/2 functions.

Checking equality is easy:

iex(1)> Decimal.equal?(-1, 0)
false
iex(2)> Decimal.equal?(0, "0.0")
true

Comparison returns one of :lt, :gt, and :eq atoms:

iex(1)> Decimal.compare(-1, 0)
:lt
iex(2)> Decimal.compare(0, -1)
:gt
iex(3)> Decimal.compare(0, 0)
:eq

Alternatively, there are gt?/2, lt?/2, min/2, max/2 functions as well.
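
For example (and, as everywhere above, plain integers and strings are accepted too):

iex(1)> Decimal.gt?("2.5", 1)
true
iex(2)> Decimal.max(Decimal.new(1), "2.5")
#Decimal<2.5>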

Why is Free Software important in home automation?

Posted by Daniel Pocock on January 13, 2021 03:10 PM

There are many serious issues to reflect on after the siege at the US Capitol.

One of those is the importance of genuinely Free Software, with full source code for appliances in our homes and our communications platforms. From Trump Tower to the White House, Free Software like Domoticz is your (only) friend.

Please join my session about Free Communications at FOSDEM 2021, online due to the pandemic. Subscribe here for announcements about major achievements in Free Real-Time Communications technology.

All systems go

Posted by Fedora Infrastructure Status on January 13, 2021 02:02 PM
Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on January 13, 2021 12:28 PM
Service 'The Koji Buildsystem' now has status: major: koji is growing a little covid-belly and running out of space, we're looking for some more

Kiwi TCMS 9.0

Posted by Kiwi TCMS on January 13, 2021 12:20 PM

We're happy to announce Kiwi TCMS version 9.0!

IMPORTANT: this is a major release which includes backwards incompatible database and API changes, improvements, bug fixes, translation updates, new tests and internal refactoring. It is the eighth release to include contributions via our open source bounty program.

This is the third release after Kiwi TCMS reached 200K pulls on Docker Hub!

You can explore everything at https://public.tenant.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  f98908772a2a    695 MB
kiwitcms/kiwi       6.2     7870085ad415    957 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955 MB
kiwitcms/kiwi       6.1     b559123d25b0    970 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970 MB
kiwitcms/kiwi       5.3.1   a420465852be    976 MB

Changes since Kiwi TCMS 8.9

Improvements

  • Update django from 3.1.4 to 3.1.5
  • Update django-contrib-comments from 1.9.2 to 2.0.0
  • Update pygithub from 1.53 to 1.54.1
  • Update pygments from 2.7.3 to 2.7.4
  • Update mysqlclient from 2.0.1 to 2.0.3
  • Update node_modules/prismjs from 1.22.0 to 1.23.0
  • Update node_modules/marked from 1.2.5 to 1.2.7
  • Implement 'Select all' for TestCase Search page. Resolves Issue #2103 (Bryan Mutai)
  • Change ON/OFF button messages for several buttons (Krum Petkov)
  • Remove delete_selected action from admin pages
  • Show active test runs in TestPlan page
  • Hide irrelevant Version & Build selectors for Testing breakdown telemetry
  • Allow running to be passed as URL query param to TestRun Search page

Settings

  • Remove unused kiwi.rpc log handler from LOGGING setting

Database

Warning: Contains backwards incompatible changes.

  • Replace Build.product with Build.version. Closes Issue #246. Build objects are now associated with Version objects, not with Product objects!

    Warning:

    After migration existing builds will point to the "unspecified" version! If you want your telemetry to be accurate you will have to update these objects manually and point them to the appropriate version value!

  • Rename related_name for TestExecution model: case_run -> executions

  • Rename related_name for TestCase model: case -> cases

API

Warning: Contains backwards incompatible changes.

  • Methods Build.filter, Build.create and Build.update replace the product field with a version field
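
Illustratively, an API client that used to filter builds by product would now filter by version. A hedged sketch using the Python tcms-api bindings (the ID value is a placeholder):

from tcms_api import TCMS

rpc = TCMS().exec
# before 9.0 this would have been: rpc.Build.filter({'product': 1})
builds = rpc.Build.filter({'version': 1})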

Bug fixes

  • Display raw Markdown text before rendering to fix a bug where anonymous users don't see any text on the screen even if they are allowed to view an object

Refactoring & testing

  • Add tests for tcms.core.middleware. Fixes Issue #1605 (Gagan Deep)
  • Add tests for tcms.handlers. Fixes Issue #1611 (Gagan Deep)
  • Add tests for tcms.kiwi_auth.views. Fixes Issue #1608 (Abhishek Chaurasia)
  • Update pip during bugtracker integration tests to fix dependency issues
  • Reformat all files with black and isort. Closes Issue #1193
  • Refactor TestExecution.get_bugs() to use TestExecution.links()
  • Add return statement for invalid form to make pylint happy
  • Make Bug.assignee field a UserField
  • Replace deprecated ugettext_lazy with gettext_lazy
  • Fixes for Azure Boards integration tests
  • Remove CsrfDisableMiddleware. Closes Issue #297
  • Remove unused methods & left-over views

Kiwi TCMS Enterprise v9.0-mt

  • Based on Kiwi TCMS v9.0
  • Update kiwitcms-github-app from 1.2.1 to 1.2.2
  • Update kiwitcms-tenants from 1.3.1 to 1.4.2

For more info see https://github.com/kiwitcms/enterprise#v90-mt-12-jan-2021

Automation framework plugins

The following test automation framework plugins have been upgraded to work with Kiwi TCMS v9.0:

How to upgrade

Backup first! If you are using Kiwi TCMS as a Docker container then:

cd path/containing/docker-compose/
docker-compose down
docker-compose pull
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Refer to our documentation for more details!

Happy testing!

---

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Parsing HID Unit Items

Posted by Peter Hutterer on January 13, 2021 11:28 AM

This post explains how to parse the HID Unit Global Item as explained by the HID Specification, page 37. The table there is quite confusing and it took me a while to fully understand it (Benjamin Tissoires was really the one who cracked it). I couldn't find any better explanation online which means either I'm incredibly dense and everyone's figured it out or no-one has posted a better explanation. On the off-chance it's the latter [1], here are the instructions on how to parse this item.

We know a HID Report Descriptor consists of a number of items that describe the content of each HID Report (read: an event from a device). These Items include things like Logical Minimum/Maximum for axis ranges, etc. A HID Unit item specifies the physical unit to apply. For example, a Report Descriptor may specify that X and Y axes are in mm which can be quite useful for all the obvious reasons.

Like most HID items, a HID Unit Item consists of a one-byte item tag and 1, 2 or 4 byte payload. The Unit item in the Report Descriptor itself has the binary value 0110 01nn where the nn is either 1, 2, or 3 indicating 1, 2 or 4 bytes of payload, respectively. That's standard HID.

The payload is divided into nibbles (4-bit units) and goes from LSB to MSB. The lowest-order 4 bits (first byte & 0xf) define the unit System to apply: one of SI Linear, SI Rotation, English Linear or English Rotation (well, or None/Reserved). The rest of the nibbles are in this order: "length", "mass", "time", "temperature", "current", "luminous intensity". In something resembling code this means:


system = value & 0xf
length_exponent = (value & 0xf0) >> 4
mass_exponent = (value & 0xf00) >> 8
time_exponent = (value & 0xf000) >> 12
...
The System defines which unit is used for length (e.g. SILinear means length is in cm). The actual value of each nibble is the exponent for the unit in use [2]. In something resembling code:

switch (system)
case SILinear:
print("length is in cm^{length_exponent}");
break;
case SIRotation:
print("length is in rad^{length_exponent}");
break;
case EnglishLinear:
print("length is in in^{length_exponent}");
break;
case EnglishRotation:
print("length is in deg^{length_exponent}");
break;
case None:
case Reserved"
print("boo!");
break;

For example, the value 0x321 means "SI Linear" (0x1) so the remaining nibbles represent, in ascending nibble order: Centimeters, Grams, Seconds, Kelvin, Ampere, Candela. The length nibble has a value of 0x2 so it's square cm, the mass nibble has a value of 0x3 so it is cubic grams (well, it's just an example, so...). This means that any report containing this item comes in cm²g³. As a more realistic example: 0xF011 would be cm/s.

If we changed the lowest nibble to English Rotation (0x4), i.e. our value is now 0x324, the units represent: Degrees, Slug, Seconds, F, Ampere, Candela [3]. The length nibble 0x2 means square degrees, the mass nibble is cubic slugs. As a more realistic example, 0xF014 would be degrees/s.
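
Since each nibble is a 4-bit two's complement value (see [2]), sign-extending it is the only fiddly part. A small sketch in C, matching the pseudo-code above:

/* Sign-extend a 4-bit two's complement nibble into an int exponent. */
static int nibble_to_exponent(unsigned int nibble)
{
    nibble &= 0xf;
    return nibble >= 0x8 ? (int)nibble - 16 : (int)nibble;
}

For 0xF011 from the example above, the time nibble 0xF comes out as -1, i.e. the s⁻¹ in cm/s.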

Any nibble with value 0 means the unit isn't in use, so the example from the spec with value 0x00F0D121 is SI linear, units cm² g s⁻³ A⁻¹, which is... Voltage! Of course you knew that and totally didn't have to double-check with wikipedia.

Because bits are expensive and the base units are of course either too big or too small or otherwise not quite right, HID also provides a Unit Exponent item. The Unit Exponent item (a separate item to Unit in the Report Descriptor) then describes the exponent to be applied to the actual value in the report. For example, a Unit Eponent of -3 means 10⁻³ to be applied to the value. If the report descriptor specifies an item of Unit 0x00F0D121 (i.e. V) and Unit Exponent -3, the value of this item is mV (milliVolt), Unit Exponent of 3 would be kV (kiloVolt).

Now, in hindsight all this is pretty obvious and maybe even sensible. It'd have been nice if the spec would've explained it a bit clearer but then I would have nothing to write about, so I guess overall I call it a draw.

[1] This whole adventure was started because there's a touchpad out there that measures touch pressure in radians, so at least one other person out there struggled with the docs...
[2] The nibble value is twos complement (i.e. it's a signed 4-bit integer). Values 0x1-0x7 are exponents 1 to 7, values 0x8-0xf are exponents -8 to -1.
[3] English Linear should've trolled everyone and use Centimetres instead of Centimeters in SI Linear.

All systems go

Posted by Fedora Infrastructure Status on January 13, 2021 09:07 AM
Service 'Pagure' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on January 13, 2021 09:01 AM
Service 'Pagure' now has status: major: Something is up and being investigated

Normalizing audio and video files

Posted by Lukas "lzap" Zapletal on January 13, 2021 12:00 AM

To normalize audio or video files without re-encoding the video stream, use the ffmpeg-normalize script. In Fedora, it is available in the python3-ffmpeg-normalize package.

Usage is very simple:

ffmpeg-normalize a_file.wav a_file.mp4 a_file.mkv

By default, the output stream will be PCM at 192kHz which will usually be a bit overkill. For example, I often record screencast with speech and before uploading to YouTube it’s a good idea to normalize and encode the audio channel into 44.1kHz with AAC at 96kbps:

ffmpeg-normalize -nt ebu -ar 44100 -c:a aac -b:a 96k a_file.mkv
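
As far as I know, the normalized file is written to a normalized/ sub-directory by default; to pick the output name yourself there is the -o option (a hedged example):

ffmpeg-normalize -nt ebu -ar 44100 -c:a aac -b:a 96k a_file.mkv -o a_file-normalized.mkv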

Unlocking LUKS2 volumes with TPM2, FIDO2, PKCS#11 Security Hardware on systemd 248

Posted by Lennart Poettering on January 12, 2021 11:00 PM

TL;DR: It's now easy to unlock your LUKS2 volume with a FIDO2 security token (e.g. YubiKey or Nitrokey FIDO2). And TPM2 unlocking is easy now too.

Blogging is a lot of work, and a lot less fun than hacking. I mostly focus on the latter because of that, but from time to time I guess stuff is just too interesting to not be blogged about. Hence here, finally, another blog story about exciting new features in systemd.

With the upcoming systemd v248 the systemd-cryptsetup component of systemd (which is responsible for assembling encrypted volumes during boot) gained direct support for unlocking encrypted storage with three types of security hardware:

  1. Unlocking with FIDO2 security tokens (well, at least with those which implement the hmac-secret extension, most do). i.e. your YubiKeys (series 5 and above), or Nitrokey FIDO2 and such.

  2. Unlocking with TPM2 security chips (pretty ubiquitous on non-budget PCs/laptops/…)

  3. Unlocking with PKCS#11 security tokens, i.e. your smartcards and older YubiKeys (the ones that implement PIV). (Strictly speaking this was supported on older systemd already, but was a lot more "manual".)

For completeness' sake, let's keep in mind that the component also allows unlocking with these more traditional mechanisms:

  1. Unlocking interactively with a user-entered passphrase (i.e. the way most people probably already deploy it, supported since about forever)

  2. Unlocking via key file on disk (optionally on removable media plugged in at boot), supported since forever.

  3. Unlocking via a key acquired through trivial AF_UNIX/SOCK_STREAM socket IPC. (Also new in v248)

  4. Unlocking via recovery keys. These are pretty much the same thing as a regular passphrase (and in fact can be entered wherever a passphrase is requested) — the main difference being that they are always generated by the computer, and thus have guaranteed high entropy, typically higher than user-chosen passphrases. They are generated in a way they are easy to type, in many cases even if the local key map is misconfigured. (Also new in v248)

In this blog story, let's focus on the first three items, i.e. those that talk to specific types of hardware for implementing unlocking.

To make working with security tokens and TPM2 easy, a new, small tool was added to the systemd tool set: systemd-cryptenroll. Its only purpose is to make it easy to enroll your security token/chip of choice into an encrypted volume. It works with any LUKS2 volume, and embeds a tiny bit of meta-information into the LUKS2 header with parameters necessary for the unlock operation.

Unlocking with FIDO2

So, let's see how this fits together in the FIDO2 case. Most likely this is what you want to use if you have one of these fancy FIDO2 tokens (which need to implement the hmac-secret extension, as mentioned). Let's say you already have your LUKS2 volume set up, and previously unlocked it with a simple passphrase. Plug in your token, and run:

# systemd-cryptenroll --fido2-device=auto /dev/sda5

(Replace /dev/sda5 with the underlying block device of your volume).

This will enroll the key as an additional way to unlock the volume, and embeds all necessary information for it in the LUKS2 volume header. Before we can unlock the volume with this at boot, we need to allow FIDO2 unlocking via /etc/crypttab. For that, find the right entry for your volume in that file, and edit it like so:

myvolume /dev/sda5 - fido2-device=auto

Replace myvolume and /dev/sda5 with the right volume name, and underlying device of course. Key here is the fido2-device=auto option you need to add to the fourth column in the file. It tells systemd-cryptsetup to use the FIDO2 metadata now embedded in the LUKS2 header, wait for the FIDO2 token to be plugged in at boot (utilizing systemd-udevd, …) and unlock the volume with it.

And that's it already. Easy-peasy, no?

Note that all of this doesn't modify the FIDO2 token itself in any way. Moreover you can enroll the same token in as many volumes as you like. Since all enrollment information is stored in the LUKS2 header (and not on the token) there are no bounds on any of this. (OK, well, admittedly, there's a cap on LUKS2 key slots per volume, i.e. you can't enroll more than a bunch of keys per volume.)
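
By the way, if you are curious what the enrollment actually put into the LUKS2 header, plain cryptsetup can show the key slots and token metadata:

# cryptsetup luksDump /dev/sda5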

Unlocking with PKCS#11

Let's now have a closer look how the same works with a PKCS#11 compatible security token or smartcard. For this to work, you need a device that can store an RSA key pair. I figure most security tokens/smartcards that implement PIV qualify. How you actually get the keys onto the device might differ though. Here's how you do this for any YubiKey that implements the PIV feature:

# ykman piv reset
# ykman piv generate-key -a RSA2048 9d pubkey.pem
# ykman piv generate-certificate --subject "Knobelei" 9d pubkey.pem
# rm pubkey.pem

(This chain of commands erases what was stored in PIV feature of your token before, be careful!)

For tokens/smartcards from other vendors a different series of commands might work. Once you have a key pair on it, you can enroll it with a LUKS2 volume like so:

# systemd-cryptenroll --pkcs11-token-uri=auto /dev/sda5

Just like the same command's invocation in the FIDO2 case this enrolls the security token as an additional way to unlock the volume, any passphrases you already have enrolled remain enrolled.

For the PKCS#11 case you need to edit your /etc/crypttab entry like this:

myvolume /dev/sda5 - pkcs11-uri=auto

If you have a security token that implements both PKCS#11 PIV and FIDO2 I'd probably enroll it as FIDO2 device, given it's the more contemporary, future-proof standard. Moreover, it requires no special preparation in order to get an RSA key onto the device: FIDO2 keys typically just work.

Unlocking with TPM2

Most modern (non-budget) PC hardware (and other kind of hardware too) nowadays comes with a TPM2 security chip. In many ways a TPM2 chip is a smartcard that is soldered onto the mainboard of your system. Unlike your usual USB-connected security tokens you thus cannot remove them from your PC, which means they address quite a different security scenario: they aren't immediately comparable to a physical key you can take with you that unlocks some door, but they are a key you leave at the door, but that refuses to be turned by anyone but you.

Even though this sounds a lot weaker than the FIDO2/PKCS#11 model, TPM2 still brings benefits for securing your systems: because the cryptographic key material stored in TPM2 devices cannot be extracted (at least that's the theory), if you bind your hard disk encryption to it, it means attackers cannot just copy your disk and analyze it offline — they always need access to the TPM2 chip too to have a chance to acquire the necessary cryptographic keys. Thus, they can still steal your whole PC and analyze it, but they cannot just copy the disk without you noticing and analyze the copy.

Moreover, you can bind the ability to unlock the harddisk to specific software versions: for example you could say that only your trusted Fedora Linux can unlock the device, but not any arbitrary OS some hacker might boot from a USB stick they plugged in. Thus, if you trust your OS vendor, you can entrust storage unlocking to the vendor's OS together with your TPM2 device, and thus can be reasonably sure intruders cannot decrypt your data unless they both hack your OS vendor and steal/break your TPM2 chip.

Here's how you enroll your LUKS2 volume with your TPM2 chip:

# systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/sda5

This looks almost as straightforward as the two earlier systemd-cryptenroll command lines — if it wasn't for the --tpm2-pcrs= part. With that option you can specify to which TPM2 PCRs you want to bind the enrollment. TPM2 PCRs are a set of (typically 24) hash values that every TPM2 equipped system at boot calculates from all the software that is invoked during the boot sequence, in a secure, unfakable way (this is called "measurement"). If you bind unlocking to a specific value of a specific PCR you thus require the system to follow the same sequence of software at boot to re-acquire the disk encryption key. Sounds complex? Well, that's because it is.

For now, let's see how we have to modify your /etc/crypttab to unlock via TPM2:

myvolume /dev/sda5 - tpm2-device=auto

This part is easy again: the tpm2-device= option is what tells systemd-cryptsetup to use the TPM2 metadata from the LUKS2 header and to wait for the TPM2 device to show up.

Bonus: Recovery Key Enrollment

FIDO2, PKCS#11 and TPM2 security tokens and chips pair well with recovery keys: since you don't need to type in your password everyday anymore it makes sense to get rid of it, and instead enroll a high-entropy recovery key you then print out or scan off screen and store in a safe, physical location. i.e. forget about good ol' passphrase-based unlocking, go for FIDO2 plus recovery key instead! Here's how you do it:

# systemd-cryptenroll --recovery-key /dev/sda5

This will generate a key, enroll it in the LUKS2 volume, show it to you on screen and generate a QR code you may scan off screen if you like. The key has highest entropy, and can be entered wherever you can enter a passphrase. Because of that you don't have to modify /etc/crypttab to make the recovery key work.

Future

There's still plenty room for further improvement in all of this. In particular for the TPM2 case: what the text above doesn't really mention is that binding your encrypted volume unlocking to specific software versions (i.e. kernel + initrd + OS versions) actually sucks hard: if you naively update your system to newer versions you might lose access to your TPM2 enrolled keys (which isn't terrible, after all you did enroll a recovery key — right? — which you then can use to regain access). To solve this some more integration with distributions would be necessary: whenever they upgrade the system they'd have to make sure to enroll the TPM2 again — with the PCR hashes matching the new version. And whenever they remove an old version of the system they need to remove the old TPM2 enrollment. Alternatively TPM2 also knows a concept of signed PCR hash values. In this mode the distro could just ship a set of PCR signatures which would unlock the TPM2 keys. (But quite frankly I don't really see the point: whether you drop in a signature file on each system update, or enroll a new set of PCR hashes in the LUKS2 header doesn't make much of a difference). Either way, to make TPM2 enrollment smooth some more integration work with your distribution's system update mechanisms need to happen. And yes, because of this OS updating complexity the example above — where I referenced your trusty Fedora Linux — doesn't actually work IRL (yet? hopefully…). Nothing updates the enrollment automatically after you initially enrolled it, hence after the first kernel/initrd update you have to manually re-enroll things again, and again, and again … after every update.

The TPM2 could also be used for other kinds of key policies, we might look into adding later too. For example, Windows uses TPM2 stuff to allow short (4 digits or so) "PINs" for unlocking the harddisk, i.e. kind of a low-entropy password you type in. The reason this is reasonably safe is that in this case the PIN is passed to the TPM2 which enforces that not more than some limited amount of unlock attempts may be made within some time frame, and that after too many attempts the PIN is invalidated altogether. Thus making dictionary attacks harder (which would normally be easier given the short length of the PINs).

Postscript

(BTW: Yubico sent me two YubiKeys for testing and Nitrokey a Nitrokey FIDO2, thank you! — That's why you see all those references to YubiKey/Nitrokey devices in the text above: it's the hardware I had to test this with. That said, I also tested the FIDO2 stuff with a SoloKey I bought, where it also worked fine. And yes, you!, other vendors!, who might be reading this, please send me your security tokens for free, too, and I might test things with them as well. No promises though. And I am not going to give them back, if you do, sorry. ;-))

Restarting regular Fedora India meetings

Posted by Ankur Sinha "FranciscoD" on January 12, 2021 08:27 PM
Photo by Nikunj Gupta on Unsplash.

The Fedora India community has always been quite an active group of people working in various Fedora teams and Special Interest Groups (SIGs). At some point in recent years, people got busy with projects as we tend to do, and we stopped having regular community meetings. This is unfortunate, since these meetings keep the community ticking, and provide a platform for new members to join in.

Recently, there's been interest in revitalising the Fedora India community. We've started to clean up the mailing list, and we'd like to restart regular meetings. We had a short first one at the end of 2020 to kick things off. You can read the logs here.

The next meeting, the first of 2021, will happen soon. In the meantime, this is a short notice to make the community aware of these goings-on. Please subscribe to the mailing list to keep up with the community's activities. If it has been a while since you were active there, please feel free to send in a short introduction so newcomers can get to know you.

The usual chat channels are also active. You can join #fedora-india on the Freenode IRC network here. It is also bridged to the @fedoraindia Telegram super group.

We hope to see you in the channels!

Next Open NeuroFedora meeting: 18 January 1300 UTC

Posted by The NeuroFedora Blog on January 12, 2021 07:56 PM
Photo by William White on Unsplash.

Please join us at the next regular Open NeuroFedora team meeting on Monday 18 January at 1300 UTC in #fedora-neuro on IRC (Freenode). The meeting is public and open for everyone to attend.

You can use this link to convert the meeting time to your local time, or you can use this command in the terminal:

$ date --date='TZ="UTC" 1300 next Monday'

The meeting will be chaired by @ankursinha (me). The agenda for the meeting is:

We hope to see you there!

Fedora 33: Install WordPress on the Fedora distro.

Posted by mythcat on January 12, 2021 07:18 PM
For those who are celebrating the winter holidays with the Linux operating system, I have created this little tutorial...
The first step is to update and upgrade the Fedora 33 Linux distro.
[root@desk mythcat]# dnf update
...
Nothing to do.
Complete!
[root@desk mythcat]# dnf upgrade
...
Nothing to do.
Complete!
Install all the packages needed for WordPress:
[root@desk mythcat]# dnf install @"Web Server" php-mysqlnd mariadb-server
Last metadata expiration check: 1:10:28 ago on Thu 24 Dec 2020 12:59:20 PM EET.
Package php-mysqlnd-7.4.13-1.fc33.x86_64 is already installed.
No match for group package "powerpc-utils"
No match for group package "lsvpd"
Dependencies resolved.
...
Complete!
Find the packages that provide WordPress:
[root@desk mythcat]# dnf provides wordpress
Last metadata expiration check: 1:13:24 ago on Thu 24 Dec 2020 12:59:20 PM EET.
wordpress-5.5.1-1.fc33.noarch : Blog tool and publishing platform
Repo : fedora
Matched from:
Provide : wordpress = 5.5.1-1.fc33

wordpress-5.6-1.fc33.noarch : Blog tool and publishing platform
Repo : updates
Matched from:
Provide : wordpress = 5.6-1.fc33
Install WordPress with the DNF tool:
[root@desk mythcat]# dnf install wordpress
...
Verifying : php-simplepie-1.5.4-3.fc33.noarch 7/7

Installed:
libc-client-2007f-26.fc33.x86_64 php-IDNA_Convert-0.8.0-14.fc33.noarch
php-getid3-1:1.9.20-2.fc33.noarch php-imap-7.4.13-1.fc33.x86_64
php-phpmailer6-6.1.8-1.fc33.noarch php-simplepie-1.5.4-3.fc33.noarch
wordpress-5.6-1.fc33.noarch

Complete!
Allow HTTP through the firewall, in case one is running on your system:
[root@desk mythcat]# firewall-cmd --add-port=80/tcp --permanent
success
[root@desk mythcat]# firewall-cmd --reload
success
If SELinux is running, run these commands to allow httpd to connect to the database and to send mail:
[root@desk mythcat]# setsebool -P httpd_can_network_connect_db=1
[root@desk mythcat]# setsebool -P httpd_can_sendmail=1
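You can optionally confirm the booleans took effect with getsebool (a simple read-back check):
[root@desk mythcat]# getsebool httpd_can_network_connect_db httpd_can_sendmail
httpd_can_network_connect_db --> on
httpd_can_sendmail --> on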
Start and enable the systemd services needed by WordPress:
[root@desk mythcat]# systemctl start httpd
...
[root@desk mythcat]# systemctl enable httpd
...
[root@desk mythcat]# systemctl enable mariadb
...
[root@desk mythcat]# systemctl start mariadb
...
Secure the MariaDB installation for WordPress:
[root@desk mythcat]# mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
haven't set the root password yet, you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation.

You already have your root account protected, so you can safely answer 'n'.

Switch to unix_socket authentication [Y/n] n
... skipping.

You already have your root account protected, so you can safely answer 'n'.

Change the root password? [Y/n] n
... skipping.

By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] Y
... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
Create a database named mysite for WordPress:
[root@desk mythcat]# mysqladmin create mysite -u root -p 
Enter password:
[root@desk mythcat]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 25
Server version: 10.4.17-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysite |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.000 sec)

MariaDB [(none)]> grant all privileges on mysite.* to testadmin@localhost identified by 'TESTtest!';
Query OK, 0 rows affected (0.019 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> quit;
Bye
Edit the default WordPress configuration file, wp-config.php:
[root@desk mythcat]# vim /etc/wordpress/wp-config.php
Set these values in wp-config.php for a localhost install:
define( 'DB_NAME', 'mysite' );

/** MySQL database username */
define( 'DB_USER', 'testadmin' );

/** MySQL database password */
define( 'DB_PASSWORD', 'TESTtest!' );

/** MySQL hostname */
define( 'DB_HOST', 'localhost' );
Restart the httpd service:
[root@desk mythcat]# systemctl restart httpd
Open this URL in your browser to complete the last steps of the install process:
http://localhost/wordpress/ 
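As an optional sanity check (assuming curl is installed), confirm that Apache serves the installer page; any 200 or redirect response means the stack is up:
[root@desk mythcat]# curl -I http://localhost/wordpress/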

Better together: Community Blog and Discussion

Posted by Fedora Community Blog on January 12, 2021 06:34 PM
Logo for the Fedora Community Blog.

Yesterday, taking advantage of the post-New Year enthusiasm of our dear Fedora Project Leader, I enabled a WordPress plugin that connects this blog with the Discourse forum at discussion.fedoraproject.org. If everything works correctly, Community Blog posts will start a new thread in the Community Blog category on Discussion. Discussion will function as the comments mechanism […]

The post Better together: Community Blog and Discussion appeared first on Fedora Community Blog.

Packaging Domoticz for Debian, Ubuntu, Raspbian and Fedora

Posted by Daniel Pocock on January 12, 2021 05:55 PM

Today I published fresh packages for Domoticz and the Domoticz-Zigate. As the instructions have changed for setting up the Domoticz-Zigate, this is an updated blog, verified with v4.12.102 of the Domoticz-Zigate plugin.

Getting started in home automation has never been easier, cheaper and more important

Many countries are now talking about longer lockdowns to restrict new strains of the Coronavirus. When the new US President takes office, many suspect he will introduce more stringent restrictions than his predecessor. Smart lighting can make life more enjoyable when spending time at home.

At the same time, more and more companies are bringing out low-cost Zigbee solutions. A previous blog covered Lidl's new products in December. Ikea's products are also incredibly cheap; they include a wide range of bulbs, buttons, motion sensors, smart sockets and other accessories that work with free solutions like Domoticz.

NOTE: when you use the ZiGate stick, you do not need to buy the hub from Philips, Ikea or any other vendor and you do not need their proprietary apps. Your ZiGate stick becomes the hub.

Packaging details

Federico Ceratto and I have been working on the Debian packaging of Domoticz, one of the leading free and open source home automation / smart home solutions. As there is a large suite of packages involved, I'm keen to find more people willing to collaborate on parallel packaging for Fedora. Many of the enhancements we've made for Debian, such as the directory structure, are applicable across all GNU/Linux distributions.

As part of that effort, I've also been packaging the plugin for the Zigate USB stick and two of the utilities for updating firmware on the Zigate, the JennicModuleProgrammer and the zigate-flasher. This gives users a complete solution in the form of familiar packages.

These are initially Debian packages, also available for Raspbian, but I try to share lessons from this effort with the upstream developers and to provide a foundation for Fedora packaging. Fedora has had the core Domoticz package since Fedora 32. Some of the other related packages described here are fairly straightforward to port.

Raspberry Pi 2.0 with Zigate USB stick

Trying the packages

Raspbian setup (Raspbian users only)

If you have a regular Debian setup, you can skip to the next section.

  1. Download the Raspbian light image from the official Raspbian download page
  2. Write the image to an SD card using cat or a similar tool
  3. Boot the Raspberry Pi
  4. Login as user pi with the password raspberry
  5. Run sudo raspi-config
  6. Set a password for the pi user (the default is raspberry)
  7. Set the hostname
  8. In (4) Localization settings, set your Locale, Timezone and Keyboard
  9. In (5) Interfacing, enable SSH
  10. At the command line, run timedatectl to verify the time and time synchronization are correct
  11. Run ip addr to see your IP address
  12. (optional) Connect to the Pi from your usual workstation and copy your SSH public key to ~pi/.ssh/authorized_keys
  13. (optional) Disable PasswordAuthentication in /etc/ssh/sshd_config, as in the snippet below (you don't want the script kiddies next door turning on your heating system in the middle of summer, do you?)
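A minimal sketch of that last hardening step, to be applied only once your key from step 12 works (otherwise you will lock yourself out):

# in /etc/ssh/sshd_config
PasswordAuthentication no

Afterwards reload the SSH daemon, for example with sudo systemctl reload ssh.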

Package repository setup (Debian, Raspbian and Ubuntu users)

As these were not included in the recent release of Debian 10 (buster) and as they are evolving more rapidly than packages in the stable distribution, I'm distributing them through Debify.

To use packages from Debify, the first step is to update your apt configuration. This works for regular Debian or Raspbian systems running stretch or buster:

$ wget -O - http://apt.debify.org/add-apt-debify | bash

Package installation

$ sudo apt update
$ sudo apt install domoticz-zigate-plugin

All the necessary packages should be installed automatically. A sample installation log is below for comparison.

Zigate users - check firmware version

You may want to check that you have the latest firmware installed in your Zigate.

  1. Download the firmware image from here
  2. If your apt setup automatically installs Recommended packages, then jennic-module-programmer has already been installed for you. If not, you can do sudo apt install jennic-module-programmer zigate-flasher. You can also try the alternative tool, zigate-flasher
  3. Unplug your Zigate USB stick and open the case. Hold down the button and, while holding it, reconnect the stick to the USB port. The blue light should now be illuminated, but very dimly. This means it is ready for a firmware update.
  4. Run the update. For example, if you copied or downloaded the firmware to /tmp and have no other USB serial device attached, you could use the following command: sudo JennicModuleProgrammer -s /dev/ttyUSB0 -f /tmp/ZiGate_v3.1d.bin
  5. Wait for the command to complete successfully
  6. Detach the Zigate and put it back into its case
  7. Attach the Zigate again
  8. Restart Domoticz: sudo systemctl restart domoticz

If the JennicModuleProgrammer utility doesn't work for you, for example if it sits there for ten minutes without doing anything, you can also try zigate-flasher. I packaged both of these so you have the choice; please share your feedback in the Domoticz forums. Repeat the steps above, replacing step 4 with:

$ sudo zigate-flasher -p /dev/ttyUSB0 -w /tmp/ZiGate_v3.1d.bin
Found MAC-address: 00:11:22:33:44:55:66:77
writing new flash from /tmp/ZiGate_v3.1d.bin
$

Zigate users: quick Domoticz setup

  • Make sure your power supply provides sufficient power for the Raspberry Pi and the Zigate
  • Open a terminal window to monitor the Domoticz logs on the device running Domoticz. Use the command sudo journalctl -f to monitor the logs. You will see various clues about progress when starting up, when adding your ZiGate to Domoticz and when joining devices to it.
  • Connect to the Raspberry Pi using your web browser; Domoticz uses port 8080. Therefore, if the IP address is 192.168.1.105, the URL will be http://192.168.1.105:8080
  • Click the Setup -> Hardware setting in the menu
  • Add your Zigate.
  • Set the option Initialize ZiGate (Erase Memory) to True for your first attempt. After doing this once, you need to come back here and set it to False.
  • Click the Add button at the bottom and verify that the ZiGate appears.
  • On the same host, try to access the Domoticz-Zigate plugin on port 9440. It may take a few seconds before it is ready. For example, if the IP address is 192.168.1.105, the URL will be http://192.168.1.105:9440
  • In the Domoticz-Zigate plugin settings, look at the top of the screen and click the button Accept New Hardware. The blue light on the ZiGate stick will start flashing.
  • Try to join a device such as a light bulb or temperature sensor. For example, on a temperature sensor, you might need to hold down the join button for a few seconds until the light flashes.
  • In the Domoticz-Zigate plugin on port 9440, you can go to the Management->Devices page to verify that the device is really there.
  • In Domoticz on port 8080, go to the Setup -> Devices tab and verify the new device is visible in the list

Native Zigbee bindings between the light switches (dimmers) and the bulbs

Zigbee devices can link directly to each other. This ensures that they keep working if the hub / ZiGate is offline.

To do this with Domoticz and/or multiple bulbs, you need to make the connections in a specific order:

  1. For best results, make sure every other device is disconnected from power and only try to join one device at a time. Otherwise, the wrong device may become associated with a switch/button.
  2. Reset each bulb, one at a time, and then join them in Domoticz using the procedure described in the previous section
  3. Reset each light switch/dimmer, one at a time, and then join them in Domoticz using the procedure described in the previous section. To reset a Philips Hue dimmer, press the setup button on the back. To reset an Ikea dimmer, press the button on the back 4 times in 5 seconds.
  4. Unplug all the light bulbs, smart sockets and other devices except for the one you want to join first
  5. Hold the light switch/dimmer close to the bulb and complete the native joining procedure, for example, if it is an Ikea dimmer, you hold the button on the back for 10 seconds.
  6. Unplug the bulb, plug in the next bulb and repeat the procedure.
  7. Note that you can join more than one bulb to the same light switch/dimmer in this procedure and you can also join more than one light switch to the same bulb.
  8. You will be able to control these bulbs from Domoticz but if your Domoticz is offline, you can also control them directly with the light switches/dimmers paired in this procedure.

Next steps and troubleshooting

Please share your feedback and questions through the Domoticz forums.

Sample installation log

pi@pi5:~ $ sudo apt install domoticz-zigate-plugin
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
  domoticz jennic-module-programmer libopenzwave1.6 libpython3-dev libpython3.5-dev openzwave
The following NEW packages will be installed:
  domoticz domoticz-zigate-plugin jennic-module-programmer libopenzwave1.6 libpython3-dev libpython3.5-dev openzwave
0 upgraded, 7 newly installed, 0 to remove and 74 not upgraded.
Need to get 51.7 MB of archives.
After this operation, 91.8 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf libopenzwave1.6 armhf 1.6+ds-1~bpo9+1 [406 kB]
Get:2 http://raspbian.raspberrypi.org/raspbian stretch/main armhf libpython3.5-dev armhf 3.5.3-1+deb9u1 [36.9 MB]
Get:3 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf openzwave armhf 1.6+ds-1~bpo9+1 [24.6 kB]
Get:4 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf domoticz armhf 4.11020-2~bpo9+1 [10.8 MB]
Get:5 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf domoticz-zigate-plugin all 4.4.9~beta1-2~bpo9+1 [3,515 kB]
Get:6 http://apt.debify.org/debify debify-raspbian-stretch-backports/main armhf jennic-module-programmer armhf 0.6-1~bpo9+1 [9,690 B]
Get:7 http://raspbian.raspberrypi.org/raspbian stretch/main armhf libpython3-dev armhf 3.5.3-1 [18.7 kB]                                                                     
Fetched 51.7 MB in 9s (5,717 kB/s)                                                                                                                                           
Selecting previously unselected package libopenzwave1.6.
(Reading database ... 34831 files and directories currently installed.)
Preparing to unpack .../0-libopenzwave1.6_1.6+ds-1~bpo9+1_armhf.deb ...
Unpacking libopenzwave1.6 (1.6+ds-1~bpo9+1) ...
Selecting previously unselected package libpython3.5-dev:armhf.
Preparing to unpack .../1-libpython3.5-dev_3.5.3-1+deb9u1_armhf.deb ...
Unpacking libpython3.5-dev:armhf (3.5.3-1+deb9u1) ...
Selecting previously unselected package libpython3-dev:armhf.
Preparing to unpack .../2-libpython3-dev_3.5.3-1_armhf.deb ...
Unpacking libpython3-dev:armhf (3.5.3-1) ...
Selecting previously unselected package openzwave.
Preparing to unpack .../3-openzwave_1.6+ds-1~bpo9+1_armhf.deb ...
Unpacking openzwave (1.6+ds-1~bpo9+1) ...
Selecting previously unselected package domoticz.
Preparing to unpack .../4-domoticz_4.11020-2~bpo9+1_armhf.deb ...
Unpacking domoticz (4.11020-2~bpo9+1) ...
Selecting previously unselected package domoticz-zigate-plugin.
Preparing to unpack .../5-domoticz-zigate-plugin_4.4.9~beta1-2~bpo9+1_all.deb ...
Unpacking domoticz-zigate-plugin (4.4.9~beta1-2~bpo9+1) ...
Selecting previously unselected package jennic-module-programmer.
Preparing to unpack .../6-jennic-module-programmer_0.6-1~bpo9+1_armhf.deb ...
Unpacking jennic-module-programmer (0.6-1~bpo9+1) ...
Setting up jennic-module-programmer (0.6-1~bpo9+1) ...
Setting up libopenzwave1.6 (1.6+ds-1~bpo9+1) ...
Setting up libpython3.5-dev:armhf (3.5.3-1+deb9u1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up libpython3-dev:armhf (3.5.3-1) ...
Setting up openzwave (1.6+ds-1~bpo9+1) ...
Setting up domoticz (4.11020-2~bpo9+1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/domoticz.service → /lib/systemd/system/domoticz.service.
Setting up domoticz-zigate-plugin (4.4.9~beta1-2~bpo9+1) ...
Adding user `domoticz' to group `dialout' ...
Adding user domoticz to group dialout
Done.
pi@pi5:~ $ 

Kafka destination improved with template support in syslog-ng

Posted by Peter Czanik on January 12, 2021 11:46 AM

The C implementation of the Kafka destination in syslog-ng has been improved in version 3.30. Support for templates in topic names was added as a result of a Google Summer of Code (GSoC) project. The advantage of the new template support feature is that you no longer have to use a static topic name. For example, you can include the name of your host or the application sending the log in the topic name.

From this blog you can learn about a minimal Kafka setup, configuring syslog-ng and testing syslog-ng with Kafka.

Before you begin

Template support in the C implementation of the Kafka destination first appeared in syslog-ng version 3.30. This version is not yet available in most Linux distributions, and even where it is available, Kafka support is not enabled. You can either compile syslog-ng 3.30 or later yourself, or use 3rd-party package repositories to install it. You can learn more about them at https://www.syslog-ng.com/3rd-party-binaries. In most cases, Kafka support is not part of the base syslog-ng package, but available as a sub-package. For example, in (open)SUSE and Fedora/RHEL packages it is available in the syslog-ng-kafka package.

Kafka might be available for your Linux distribution of choice, but for simplicity’s sake, I use the binary distribution from the Kafka website. At the time of writing, the latest available version is kafka_2.13-2.6.0.tgz and it should work equally well on any Linux host with a recent enough (that is, 1.8+) Java. If you use a local Kafka installation, you might need to modify some of the example command lines.

Downloading and starting Kafka

A proper Kafka installation is outside of the scope of my blog. Here we follow relevant parts of the Kafka Quickstart documentation. We download the archive containing Kafka, extract it, and start its components. You will need network access and four terminal windows.

First, download the latest Kafka release and extract it. The exact version might already be different:

wget https://downloads.apache.org/kafka/2.6.0/kafka_2.13-2.6.0.tgz
tar xvf kafka_2.13-2.6.0.tgz

At the end, you will see a new directory: kafka_2.13-2.6.0

From now on, you will need the three extra terminal windows: first we start two separate daemons in the foreground to see their messages, and then two more windows are needed to send messages to Kafka and to receive them.

First, start zookeeper in one of the terminal windows. Change to the freshly created directory and start the application:

cd kafka_2.13-2.6.0/
bin/zookeeper-server-start.sh config/zookeeper.properties

Now you can start the Kafka server in a different terminal window:

cd kafka_2.13-2.6.0/
bin/kafka-server-start.sh config/server.properties

Both applications print lots of data on screen. Normally, the flood of debug information stops after a few seconds and the applications are ready to be used. If there is a problem, you will get back the command line. In this case, you have to browse through the debug messages and resolve the problem.

Now you can do some minimal functional testing, without syslog-ng involved yet. This way you can make sure that access to Kafka is not blocked by a firewall or other software.

Open yet another terminal window, change to the Kafka directory and start a script to collect messages from a Kafka topic. You can safely ignore the warning message, as it appears because the topic does not exist yet.

cd kafka_2.13-2.6.0/
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytest
[2020-12-15 14:41:09,172] WARN [Consumer clientId=consumer-console-consumer-31493-1, groupId=console-consumer-31493] Error while fetching metadata with correlation id 2 : {mytest=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Now you can start a fourth terminal window to send some test messages. Just enter something after the “>” sign and hit Enter. Moments later, you should see what you just entered in the third terminal window:

bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic mytest
>bla
>blabla
>blablabla
>

You can exit with ^D.

Configuring syslog-ng

Now that we have checked that we can send messages to Kafka and pull those messages with another application, it is time to configure syslog-ng. As a first step, we just send logs to Kafka and check the results.

If syslog-ng on your system is configured to include .conf files from the /etc/syslog-ng/conf.d/ directory, create a new configuration file there. Otherwise, append the configuration below to syslog-ng.conf.

destination d_kafka {
  kafka-c(config(metadata.broker.list("localhost:9092")
                   queue.buffering.max.ms("1000"))
        topic("mytest")
        message("$(format-json --scope rfc5424 --scope nv-pairs)"));
};

log {
  source(src);
  destination(d_kafka);
};

Note that the source name for local logs might be different in your syslog-ng.conf, so check before reloading the configuration. The name “src” is used on openSUSE/SLES. As a safety measure, check your configuration with:

syslog-ng -s

While it cannot check if you spelled the source name correctly, a quick syntax check will ensure that all necessary syslog-ng modules are installed. If you see a message about JSON or Kafka, install the missing modules.
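For example, on Fedora/RHEL the Kafka module lives in the sub-package mentioned earlier, so resolving a missing Kafka module would look something like this:

sudo dnf install syslog-ng-kafka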

Once you have reloaded syslog-ng, you should see logs arriving on your third terminal window in JSON format, similar to these:

blablabla
{"SOURCE":"src","PROGRAM":"syslog-ng","PRIORITY":"notice","PID":"5841","MESSAGE":"syslog-ng starting up; version='3.30.1.25.g793d7e4'","HOST_FROM":"leap152","HOST":"leap152","FACILITY":"syslog","DATE":"Dec 15 15:04:58"}
{"SOURCE":"src","PROGRAM":"systemd","PRIORITY":"info","PID":"1","MESSAGE":"Stopping System Logging Service...","HOST_FROM":"leap152","HOST":"leap152","FACILITY":"daemon","DATE":"Dec 15 15:04:57"}

Using a template in the topic name

To use a template in the topic name, the syslog-ng configuration needs two modifications. First of all, you need to modify the topic(). But you also need to provide an additional parameter: fallback-topic(). Note that topic names can only contain numbers and letters from the English alphabet. Special characters or letters with accent marks are rejected. This is why you need a fallback-topic: if a topic name cannot be used, the related message is saved to the topic named in the fallback-topic(). You can find the modified configuration below:

destination d_kafka {
  kafka-c(config(metadata.broker.list("localhost:9092")
                   queue.buffering.max.ms("1000"))
        topic("mytest_$PROGRAM")
        fallback-topic("mytest")
        message("$(format-json --scope rfc5424 --scope nv-pairs)"));
};

log {
  source(src);
  destination(d_kafka);
};

Using this configuration, the name of the application sending the log is also included in the topic name. Once you reload syslog-ng, you will receive far fewer logs on the “mytest” topic. But, for example, postfix logs will still arrive there, as they include a slash in the application name. Alternatively, you can send a log with accent marks yourself. Being Hungarian is an advantage here, but German also has its own share of accented characters. For example, you can use “logger” to send logs, and its “-t” option sets the application name:

logger -t öt_szép_szűzleány_őrült_írót_nyúz bla

You will see the related message in the “mytest” topic:

{"SOURCE":"src","PROGRAM":"öt_szép_szűzleány_őrült_írót_nyúz","PRIORITY":"notice","PID":"6177","MESSAGE":"bla","HOST_FROM":"leap152","HOST":"leap152","FACILITY":"user","DATE":"Dec 15 16:21:01"}

By now, you should have logs from a couple of applications. Stop the application pulling logs from Kafka on the third terminal window, and list the available topics. You should see something similar:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
__consumer_offsets
mytest
mytest_syslog-ng
mytest_systemd

When you start the collector script again with mytest_systemd as a topic, you will most likely not see any input for several minutes. The reason is that, by default, the script only collects new messages. Check the built-in help to see how you can read earlier messages.
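For example, the console consumer can replay a topic from the beginning with its --from-beginning flag, which is a quick way to see the messages stored before you attached:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytest_systemd --from-beginning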

What is next?

This blog is enough to get you started and learn the basic concepts of Kafka and the syslog-ng Kafka destination. On the other hand, it is far from anything production-ready. For that, you need a proper Kafka installation and most likely the syslog-ng configuration also needs additional settings.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @Pczanik.

Turing Pi 1

Posted by Richard W.M. Jones on January 11, 2021 03:27 PM

Finally getting down to assembling and installing this 7 node cluster.

Community Outreach Revamp Objective

Posted by Fedora Community Blog on January 11, 2021 02:45 PM

In June of 2020 the Community Outreach Revamp was developed. The initial plan was drafted by Marie Nordin and then edited and approved by the Mindshare Committee. Implementation began soon after by co-leads Mariana Balla and Sumantro Mukherjee. It quickly became evident that the Revamp is of monumental proportions and will take a lot of […]

The post Community Outreach Revamp Objective appeared first on Fedora Community Blog.

fwupd 1.5.5

Posted by Richard Hughes on January 11, 2021 10:39 AM

I’ve just released fwupd 1.5.5 with the following new features:

  • Add a plugin to update PixArt RF devices; we'll hopefully announce the hardware this enables in a few weeks
  • Add new hardware to use the elantp (for TouchPads) and rts54hid (for USB Hubs) plugins
  • Allow specifying more than one VendorID for a device, which allows ATA devices to use the OUI-assigned vendor if set
  • Detect the AMD TSME encryption state for HSI-4 — use fwupdmgr security --force to help test
  • Detect the AMI PK test key is not installed for HSI-1 — a failure here is very serious
As usual, this release fixes quite a few bugs too:

  • Fix flashing a fingerprint reader that is in use; in theory the window to hit this is vanishingly small, but on some hardware we ask the user to authorise the request using the very device that we’re trying to update…
  • Fix several critical warnings when parsing invalid firmware, found using honggfuzz, warming my office on these cold winter days
  • Fix updating DFU devices that use DNLOAD_BUSY, which fixes fwupd support for some future hardware
  • Ignore the legacy UEFI OVMF dummy GUID so that we can test the dbx updates using qemu on older releases like RHEL
  • Make libfwupd more thread safe to fix a crash in gnome-software — many thanks to Philip Withnall for explaining a lot of the GMainContext threading complexities to me
  • We now never show unprintable chars from invalid firmware in the logs — as a result of fuzzing insane things the logs would often be full of gobbledygook, but no longer
  • I’m now building 1.5.5 into Fedora 33 and Fedora 32, packages should appear soon.

Shut up, auditd!

Posted by Maxim Burgerhout on January 11, 2021 10:24 AM

Audit logging

(Happy 2021, may it be better than 2020!)

On my tiny, Raspberry Pi based Fedora systems, I have a lot of audit messages in my journal. And I mean a lot: over 50,000 over the course of 9 days. That's over 5,500 per day. Or, to put it plainly: too many.[1]

Jan 02 12:20:31 system.domain.lan audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=someunit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? te [...]
Jan 02 12:20:51 system.domain.lan audit[471147]: CRYPTO_KEY_USER pid=471147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=destroy kind=server fp=SHA256:77:39:df:9f:00:00:00:0 [...]
Jan 02 12:20:51 system.domain.lan audit[471147]: USER_LOGIN pid=471147 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct="(unknown)" exe="/usr/sbin/sshd" hostname=? ad [...]

These logs take up space and consume resources that are scarce on a Raspberry Pi. Therefore, I set off to disable them.

/etc/systemd/journald.conf

The first thing I tried was setting Audit=no in /etc/systemd/journald.conf, but that didn't work. As man 5 journald.conf says:

Note that this option does not control whether systemd-journald collects generated audit
records, it just controls whether it tells the kernel to generate them. This means if
another tool turns on auditing even if systemd-journald left it off, it will still
collect the generated messages.

The audit socket

The audit messages originate from the Linux kernel audit framework. In order to quiet them down, you can pass audit=0 to the kernel at boot time. This stops the kernel from generating audit records and prevents the systemd-journald-audit.socket unit from starting up. That means no more audit log messages in my journal.

Configuring grub

Just open /etc/default/grub and append audit=0 to the line that starts with GRUB_CMDLINE_LINUX.
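As a rough sketch, the line might end up looking like this (the other options will differ on your system; keep whatever is already there and only add audit=0):

GRUB_CMDLINE_LINUX="rhgb quiet audit=0"

Then regenerate your grub.cfg with the following command: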

WARNING: You really need to verify (a) that your grub.cfg is in the same location as mine[2], and (b) that this command doesn't do anything funky on your system. A broken grub.cfg will render your system unbootable! Make sure you at least have a backup of the original grub.cfg.

sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

Now reboot your system. That should do it.
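After the reboot, a quick sanity check (this only prints the kernel command line; nothing here is specific to this setup):

cat /proc/cmdline

The output should now include audit=0.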

[1] I do understand there is a purpose to these messages, and that not everyone should disable them. I also understand that I might have been artificially increasing the number of log messages by using monit to check on running services.

[2] It should be, if you are on a UEFI system, but check anyway.

USB Audio on a Boombox

Posted by James Just James on January 11, 2021 09:00 AM

In an earlier post, I got USB audio working on my used, manual, 2013 Honda Civic EX. Today I'll do the same for a Sony ZS-RS60BT boombox. Here it is in all its used glory after a little cleaning.

Background: I decided I should get a small portable boombox to practice my dope dance moves. I currently still suck because I think I spend more time computering than out on the floor.

Using DNSSEC with (Free) IPA

Posted by Luc de Louw on January 11, 2021 07:38 AM

The DNS infrastructure contains a growing amount of critical information, such as service records pointing to authentication services, TLSA records, SSH fingerprints and the like. DNSSEC signs this information, so the client can trust the information DNS sends. It protects against forged information through cache poisoning. This article shows how to achieve a DNSSEC-protected DNS environment with the help of FreeIPA. This article took some time to write, as I wanted to see how it behaves in the long ... Read More

The post Using DNSSEC with (Free) IPA appeared first on Luc de Louw's Blog.

Episode 253 – Defenders only need to be right once

Posted by Josh Bressers on January 11, 2021 12:01 AM

Josh and Kurt talk about this idea that seems to exist in security that “attackers only need to be right once”, which is silly. The reality is attackers have to get everything right; defenders really only need to get it right once. But “defenders only need to be right once” isn't going to sell any products.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_253_Defenders_only_need_to_be_right_once.mp3

Show Notes