Fedora People

Remote build of the Linux Kernel via Ironic

Posted by Adam Young on October 22, 2021 07:04 PM

Ampere Computing chips run the ARM64 instruction set. My laptop is a Dell running x86_64. In order to edit locally, but build remotely, I make use of servers in our datacenter. These are the steps I am taking.

Prep the machine

First, I create a new server, using Ironic. The following commands are the kind of calls I can make to create the server instance on a baremetal machine and then add a floating IP address to it.

openstack server create --flavor bm.falcon --image fedora-server-34-aarch64-qcow2 --network baremetal-dataplane --key-name ayoung  f34-kernel-test
openstack server add floating ip f34-kernel-test 10.76.97.231

Obviously, I have a lot of pre-canned knowledge of the cluster in order to run this command. I know that the flavor bm.falcon will grab one of our falcon instances, and that the image I want for a Fedora 34 server is fedora-server-34-aarch64-qcow2. The following commands let me look up this kind of information:

openstack floating ip list
openstack flavor list
openstack image list
openstack network list
openstack keypair list

The second command adds a floating IP to the server instance. A Floating IP is routable outside of the rack. If I don’t add a floating IP, the server will only have link local IP addresses, and the VMs will only be able to talk to each other, and then only if on the same network.

Assuming I am on a machine with the private key that matches the public key identified by the --key-name parameter, I can ssh into the VM as the fedora user.

ssh fedora@10.76.97.231
The authenticity of host '10.76.97.231 (10.76.97.231)' can't be established.
ED25519 key fingerprint is SHA256:sCiBSBv0bmNyBgkprz/TwyY5SI6LmciITNGfUm78wsU.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.76.97.231' (ED25519) to the list of known hosts.
[systemd]
Failed Units: 1
  NetworkManager-wait-online.service
[fedora@f34-kernel-test ~]$ exit

Once I can get into the machine interactively, I can use the ssh command to run remote commands. A simple smoke test is to run pwd:

$ ssh fedora@10.76.97.231 pwd
/home/fedora

Or hostname

$ ssh fedora@10.76.97.231 hostname
f34-kernel-test.novalocal

Send the code

I do not need the entire git repository on the remote machine in order to test it. However, I do want to make sure I check in all my code to some branch to make sure I don’t lose it. If I use the git archive command, I can send the source tree over in a stream, and expand it on the remote side.

git archive --format=tar HEAD | ssh fedora@10.76.97.231 "mkdir  -p devel/linux && cd devel/linux && tar -x "

The && makes sure that the previous command succeeds before attempting the following command. That keeps you from expanding the contents of the linux directory into your home directory.

Yes, I made that mistake. It is a mess and a pain to clean.

Remote Compile and Install

Here is a pretty clean write up of the set of steps to follow in order to build, install, and run your own kernel.

We can use ssh in order to install the prerequisite packages needed to build the kernel…

ssh fedora@10.76.97.231 "sudo dnf install -y kernel-devel kernel-headers bc xz cpio openssl dwarves"
ssh fedora@10.76.97.231 sudo dnf group install -y \"Development Tools\"

and to execute the build processes…

ssh fedora@10.76.97.231 "cd devel/linux  && make -j 32"
ssh fedora@10.76.97.231 "cd devel/linux  && make modules -j 32"

…and to install the kernel…

ssh fedora@10.76.97.231 "cd devel/linux  && sudo make install "
ssh fedora@10.76.97.231 "cd devel/linux  && sudo make modules_install "

…and to configure the system to boot the kernel.

ssh fedora@10.76.97.231 "cd devel/linux  && sudo  grubby --set-default /boot/vmlinuz-5.15.0-rc4"

Confirm that the kernel we want is configured and default:

$ ssh fedora@10.76.97.231 "sudo grubby --default-kernel"
/boot/vmlinuz-5.15.0-rc4

Reboot the remote machine and wait for it to come back up.

ssh fedora@10.76.97.231 "sudo reboot"
ping 10.76.97.231

And check the remote status. Note that a change of the Linux Kernel might change the ssh key fingerprint. I had to remove a couple of entries from ~/.ssh/known_hosts.
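Rather than editing ~/.ssh/known_hosts by hand, ssh-keygen can remove the stale entries for a host; a minimal sketch, using this post's example address:

ssh-keygen -R 10.76.97.231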

$ ssh fedora@10.76.97.231 "uname -a"
Linux f34-kernel-test.novalocal 5.15.0-rc4 #1 SMP Thu Oct 21 20:07:43 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

These are just my notes for a working path, not a recommendation of how to do it. Obviously, the repeated ssh command and parameters should be scripted somehow. Ansible is a likely target. Some of these commands could be put into a script and executed together as well. What direction we go will in part be directed by the technology we need to perform other tasks, such as queueing and scheduling. Bash is a good starting point, as it is easy to translate to the other technologies.
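As a starting point, here is a minimal bash sketch (reusing this post's example address and directory) that factors out the repeated ssh invocation:

#!/bin/bash
# Run a command on the remote build machine, inside the source tree.
REMOTE=fedora@10.76.97.231
BUILD_DIR=devel/linux

remote_build() {
    ssh "$REMOTE" "cd $BUILD_DIR && $*"
}

remote_build make -j 32
remote_build make modules -j 32
remote_build sudo make install
remote_build sudo make modules_install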

Shell variable expansion across ssh sessions

Posted by Adam Young on October 22, 2021 01:40 PM

ssh allows you to run a command on a remote machine. You may want to use a shell variable in a remote command. You have to be aware of when that variable gets evaluated.

This session started with me interactively logged in on a remote system called f34-kernel-test. We’ll call this the server, and the system from which I logged in we’ll call the client. My goal is to type the command on the client, hit enter, and see the hostname of the remote system.

[fedora@f34-kernel-test ~]$ echo $HOSTNAME
f34-kernel-test.novalocal
[fedora@f34-kernel-test ~]$ exit
logout
Connection to 10.76.97.231 closed.
[ayoung@ayoung-home linux]$ echo $HOSTNAME
ayoung-home.amperecomputing.com
[ayoung@ayoung-home linux]$ ssh fedora@10.76.97.231   echo $HOSTNAME
ayoung-home.amperecomputing.com
[ayoung@ayoung-home linux]$ ssh fedora@10.76.97.231   "echo $HOSTNAME"
ayoung-home.amperecomputing.com
[ayoung@ayoung-home linux]$ ssh fedora@10.76.97.231   "echo '$HOSTNAME'"
ayoung-home.amperecomputing.com
[ayoung@ayoung-home linux]$ ssh fedora@10.76.97.231   "echo \$HOSTNAME"
f34-kernel-test.novalocal

Note that all of the quoted options failed to prevent the variable expansion from happening on the client system. The only approach which worked here was to escape the dollar sign with a backslash.
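Another option that works, for the same reason: wrap the whole remote command in single quotes on the client, which defers all expansion to the server:

ssh fedora@10.76.97.231 'echo $HOSTNAME'

The earlier attempts failed because the single quotes were nested inside double quotes, so the client shell still expanded $HOSTNAME before ssh ever ran.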

PSA: gnome-settings-daemon's MediaKeys API is going away

Posted by Bastien Nocera on October 20, 2021 12:12 PM

In 2007, Jan Arne Petersen added a D-Bus API to what was still pretty much an import into gnome-control-center of the "acme" utility I wrote to have all the keys on my iBook working.

It switched the code away from remapping keyboard keys to "XF86Audio*", to expecting players to contact the D-Bus daemon and ask to be forwarded key events.

 

Multimedia keys circa 2003

In 2013, we added support for controlling media players using MPRIS, as another interface. Fast-forward to 2021, and MPRIS support is ubiquitous, whether in free software, proprietary applications or even browsers. So we'll be parting with the "org.gnome.SettingsDaemon.MediaKeys" D-Bus API. If your application still wants to work with older versions of GNOME, it is recommended to at least handle the MediaKeys API's unavailability quietly.

 

Multimedia keys in 2021
 

TL;DR: Remove code that relies on gnome-settings-daemon's MediaKeys API, make sure to add MPRIS support to your app.
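For a quick way to see MPRIS in action from a terminal, here is a sketch using dbus-send; the bus name suffix (vlc here) depends on the player:

dbus-send --print-reply --dest=org.mpris.MediaPlayer2.vlc \
  /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause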

LDAP query from Python

Posted by Pablo Iranzo Gómez on October 19, 2021 08:34 PM

Recently, some colleagues commented about validating if users in a Telegram group were or not employees anymore, so that the process could be automated without having to chase down the users that left the company.

One of the fields that can be configured by each user, is the link to other platforms (Github, LinkedIn, Twitter, Telegram, etc), so querying an LDAP server could suffice to get the list of users.

First, we need to get some data required, in our case, we do anonymous binding to our LDAP server and the field to search for containing the ‘other platform’ links.

We can do a simple query like this in Python:

import ldap

myldap = ldap.initialize("ldap://myldapserver:389")
binddn = ""
pw = ""
basedn = "ou=users,dc=example,dc=com"
searchAttribute = ["SocialURL"]
searchFilter = "(SocialURL=*)"

# this will scope the entire subtree under UserUnits
searchScope = ldap.SCOPE_SUBTREE

# Bind to the server
myldap.protocol_version = ldap.VERSION3
myldap.simple_bind_s(binddn, pw)  # myldap.simple_bind_s() if anonymous binding is desired

# Perform the search
ldap_result_id = myldap.search(basedn, searchScope, searchFilter, searchAttribute)
result_set = []
while True:
    result_type, result_data = myldap.result(ldap_result_id, 0)
    if result_data == []:
        break
    else:
        if result_type == ldap.RES_SEARCH_ENTRY:
            result_set.append(result_data)

# Unbind from server
myldap.unbind_s()

At this point, the variable result_set will contain the values we want to filter: for example, the URL containing the username in the form https://t.me/USERNAME, and the login id.

This can then be acted on accordingly, kicking users that are no longer in the LDAP directory (or haven't configured a Telegram username).
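For a quick one-off check without Python, roughly the same query can be run with the ldapsearch command line tool (a sketch reusing the example server, base DN, and attribute from the script above):

ldapsearch -x -H ldap://myldapserver:389 -b "ou=users,dc=example,dc=com" "(SocialURL=*)" SocialURL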

Enjoy!

Apple M1 Max – Finally Replacing My Last Intel Machine

Posted by Jon Chiappetta on October 18, 2021 07:24 PM

So I’ve placed an order for my first ARM laptop to finally replace my five year old 2017 Macbook Air (the last Intel processor). I’m going all-in on ARM!

Machine Specs:

  • Macbook Pro: 14-inch Liquid Retina XDR Display
  • Apple M1 Max: 10-core CPU & 16-core Neural Engine
  • Systems on a Chip: 32-GB Unified RAM & 32-core Integrated GPU
  • IO: Three Thunderbolt 4 ports, SDXC card slot, HDMI, Full-Fn-Backlit Keyboard
  • 1TB SSD Storage
  • Touch ID & MagSafe!
  • The end of the x86 era!!

Colorizing kubectl output with kubecolor

Posted by Fedora fans on October 18, 2021 07:30 AM
kubernetes

There are various tools and applications for interacting with Kubernetes, one of which is kubectl, a command line tool. Now, using kubecolor, you can see the output of kubectl commands in color.

The way it works is that kubecolor internally calls kubectl and displays its output in color, and does nothing else!

 

Installing kubecolor:

To install kubecolor, just go to its releases page on GitHub and download the latest version for your operating system:

https://github.com/dty1er/kubecolor/releases

For example, to download the x86-64 version for Linux, run the following command:

$ wget -c https://github.com/dty1er/kubecolor/releases/download/v0.0.20/kubecolor_0.0.20_Linux_x86_64.tar.gz

Then extract it:

$ tar -xzvf kubecolor_0.0.20_Linux_x86_64.tar.gz

Now move the kubecolor binary to the following location as the root user:

# mv kubecolor /usr/local/bin

To apply it automatically, open your ~/.bashrc file:

$ vi ~/.bashrc

Then add the following lines to it:

alias kubectl=kubecolor

### autocomplete for kubecolor
complete -o default -F __start_kubectl kubecolor

From now on, when you use the kubectl command you will see its output in nice colors. Below are some screenshots of kubecolor in action:

kubecolor

kubecolor

For more information about kubecolor, check out its project on GitHub:

https://github.com/dty1er/kubecolor

 

The post Colorizing kubectl output with kubecolor first appeared on Fedora Fans.

Setup OpenWRT on BPi-R2

Posted by Zamir SUN on October 16, 2021 03:05 PM

It’s pretty easy to get OpenWRT up and running on BPi-R2. However, I realized that I needed to extend the root filesystem to the whole disk, which is where the struggle started.

Setup OpenWRT on BPi-R2

Just download the bpi_bananapi-r2-squashfs-img.gz from openwrt.org and dd it to the TF card like playing with any SBC. Replace sdX with the device name of your card.

$ wget https://downloads.openwrt.org/releases/21.02.0/targets/mediatek/mt7623/openwrt-21.02.0-mediatek-mt7623-bpi_bananapi-r2-squashfs-img.gz
$ zcat openwrt-21.02.0-mediatek-mt7623-bpi_bananapi-r2-squashfs-img.gz | sudo dd of=/dev/sdX status=progress

Then plug the card into BPi-R2. Make sure the power supply is firmly connected to BPi-R2 (it’s a 12v power supply, by the way). Hold the PWR button until the red LED near the power cable lights up, and only the red LED on that area is on. It usually takes 10~15 seconds for me. Release the PWR button now. Soon you’ll see the red LED near the power cable goes off and then only the red LED near the GPIO head is on. Now OpenWRT is up and running.

Expand the root partition

This is where the confusion starts. Googling for a while I did not find anything useful. All of a sudden this answer caught my eye, and I believe it is the way to do it. In short, the OpenWRT image for BPi-R2 contains 3 partitions. The last partition is not a single partition but rather a combination of the squashfs and the f2fs overlay. In order to extend the root partition, we need to extend the f2fs overlay, which requires us to extend the 3rd partition. Here is how I finished it.

  • Install losetup in OpenWRT
root@OpenWrt:~# opkg update
root@OpenWrt:~# opkg install losetup
  • Check the current mount table:
/dev/root /rom squashfs ro,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,noatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,noatime 0 0
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev,noatime 0 0
/dev/loop0 /overlay f2fs rw,lazytime,noatime,background_gc=on,discard,no_heap,user_xattr,inline_xattr,inline_data,inline_dentry,flush_merge,extent_cache,mode=adaptive,active_logs=6,alloc_mode=reuse,fsync_mode=posix 0 0
overlayfs:/overlay / overlay rw,noatime,lowerdir=/,upperdir=/overlay/upper,workdir=/overlay/work 0 0
tmpfs /dev tmpfs rw,nosuid,relatime,size=512k,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,mode=600,ptmxmode=000 0 0
debugfs /sys/kernel/debug debugfs rw,noatime 0 0
none /sys/fs/bpf bpf rw,nosuid,nodev,noexec,noatime,mode=700 0 0

Then check the offset of the f2fs partition

root@OpenWrt:~# losetup
NAME       SIZELIMIT  OFFSET AUTOCLEAR RO BACK-FILE  DIO LOG-SEC
/dev/loop0         0 2883584         1  0 /mmcblk1p3   0     512

Now we see the offset is 2883584 in my case. Remember the number. Power off your BPi-R2 and take out your card. Now we need to connect the card to a running Linux and do the expand.

First let’s expand the 3rd partition on the card. I use cfdisk to expand it. You can use any partition tools you are familiar with.

After extending the partition, let’s expand the f2fs ‘partition’ inside. I am using Fedora so I need to install f2fs-tools first. Here we need to use the offset 2883584 we got from the running OpenWRT just now.

$ sudo losetup --show -o 2883584 -f -P /dev/sde3 
/dev/loop0

Now the partition has been attached to the system as /dev/loop0. Let’s try to extend it with resize.f2fs /dev/loop0. In my case, it fails with [f2fs_do_mount:3526] Mount unclean image to replay log first which I think is caused by removing the card when BPi-R2 is still running. Remounting it did not solve my issue. So I just created a brand new f2fs filesystem with

$ sudo mkfs.f2fs -f /dev/loop0

        F2FS-tools: mkfs.f2fs Ver: 1.14.0 (2020-08-24)

Info: Disable heap-based policy
Info: Debug level = 0
Info: Trim is enabled
Info: Segments per section = 1
Info: Sections per zone = 1
Info: sector size = 512
Info: total sectors = 15189504 (7416 MB)
Info: zone aligned segment0 blkaddr: 512
Info: format version with
  "Linux version 5.14.9-200.fc34.x86_64 (mockbuild@bkernel02.iad2.fedoraproject.org) (gcc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1), GNU ld version 2.35.2-5.fc34) #1 SMP Thu Sep 30 11:55:35 UTC 2021"
Info: [/dev/loop0] Discarding device
Info: This device doesn't support BLKSECDISCARD
Info: This device doesn't support BLKDISCARD
Info: Overprovision ratio = 2.330%
Info: Overprovision segments = 176 (GC reserved = 93)
Info: format successful

Detach the loopback device with sudo losetup -D and now it’s time to plug the card into BPi-R2 and run again. You’ll see your root filesystem is around the size you set by cfdisk.
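Back on the BPi-R2, a quick sanity check (df is part of BusyBox on OpenWRT) should confirm the overlay now spans the enlarged partition:

root@OpenWrt:~# df -h /overlay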

The KDE desktop turns 25

Posted by Fedora fans on October 15, 2021 12:31 PM
kde-desktop-25

One of the notable features of the GNU/Linux world is the variety of desktop environments available, which users can choose and use according to their needs and taste.

kde-desktop-25

One of these desktops is KDE, which is now celebrating its 25th birthday. Thanks to the development team of this beautiful desktop for making working with the computer more enjoyable for users.

Which desktop do you use? Share your experience and opinions with us.

The post The KDE desktop turns 25 first appeared on Fedora Fans.

Ironic Clean PXE failure

Posted by Adam Young on October 15, 2021 12:13 PM

One of our Ironic baremetal nodes was suffering a cleaning failure. Fixing it was easy…once we knew the cause.

Cleaning is a process by which ironic prepares a node for use. It removes old data and configuration from a node. In order to do that, it has to run a simple image. We use a Debian based image, known as the IPA image, as it runs the Ironic Python Agent. This image is installed via PXE boot. So, if the PXE setup is broken, the node can’t be cleaned.

I watched the node boot messages via the IPMI Serial over LAN (SOL) console. What I saw indicated “no response from PXE.”

The message is specific to the hardware you run.

The PXE server in Ironic matches the node to the MAC address via the baremetal port.

To find out what the port is for a given node, use a command like:

openstack baremetal port list  --node ac5bf47b-7185-4db5-ab24-7a396deeaf33

Which shows output like this:

+--------------------------------------+-------------------+
| UUID                                 | Address           |
+--------------------------------------+-------------------+
| 268926f8-eab5-4bdf-8b63-7337da43dd52 | 1c:34:da:5a:c7:b0 |
+--------------------------------------+-------------------+

Then, compare the address returned with the MAC address reported by PXE. In my case, they did not match. I created a new port with:

 openstack baremetal port create  --node ccad9fe2-1f04-45f1-a4cb-1a993f2a3b69  0c:42:a1:49:ee:c4 

And now port list will show both ports. Delete the old one.
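Deleting the stale port is one more command; for example, using the UUID from the listing above:

openstack baremetal port delete 268926f8-eab5-4bdf-8b63-7337da43dd52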

If the node is in the clean wait stage, you can use the following command to get it ready for cleaning:

openstack baremetal node abort ccad9fe2-1f04-45f1-a4cb-1a993f2a3b69

And then restart the cleaning process.

A new conceptual model for Fedora

Posted by Máirín Duffy on October 14, 2021 09:14 PM
Screenshot of the current getfedora.org website

Fedora’s web presence today

It’s no news now that Fedora has a new logo, and what you may not realize is that we do not have a new website – when we began the new logo rollout process, we simply updated the logo in-place on our pre-existing website.

The thing is – and this is regardless of the underlying code or framework under-girding the website, which I have no issues with – the messaging and content on the current getfedora.org website has not kept pace with the developments, goals, and general narrative of the Fedora project. We have a lot of different initiatives, developments, and collaborations happening at what I find at times is a dizzying pace that is challenging to keep up with. The number of different fronts that Fedora development takes place on and the low, technical level they occur at makes it difficult to understand the big picture of what exactly Fedora is, and why and how would one want to use it.

As part of the Fedora rebranding project, I’ve been worrying for a while how we will evolve our web site and our overall web presence. If we’re honest, I’ve been worrying about it quite a bit longer, and some plans we had at making a wildly interactive and participatory community-focused website kind of fell apart some time back and had me feeling somewhat defeated about Fedora’s web presence, particularly for contributors. I think some of the recent, rather exciting developments around contributor-focused Fedora assets such as our upcoming new Matrix-based chat server and Discourse-based discussion board (open source platforms!!) have sort of risen from the ashes of that failed initiative and have got me excited to think about Fedora’s web presence again.

But what *is* Fedora, exactly? What do I do with it?

This question of what is it and why/how should I use it? is a key message a software project website should have. So in setting out to rethink our website, I set out to answer this question for Fedora in 2021.

Through various conversations with folks around the project over the past few months I discovered that our labyrinthine technical developments and initiatives do in fact feed into a somewhat coherent singular story.

The problem is that our website currently does not tell that story.

In order to tell the story, we need the story. What is it?

Introducing the Fedora “F” Model

 

A diagram in the shape of an F. We start at the bottom, with a green ball labeled "Desktop User." As we move up the stem of the F (labeled "Development"), there are two branches: (1) an orange "Web/App Developer" branch, and (2) An "IoT Developer" branch. The Web/App developer branch has 3 connected nodes. The first node is labeled, "Local container-based development." The second node is labeled "Container-based development with remote Fedora CoreOS. The third node is labeled "Container-based development on K8S-based runtime." For the IoT developer branch, there are two nodes: "Fedora IoT on device, local container-based development" and "IoT devices at scale" are the labels for those two nodes. To the left of the F-shaped diagram is a circle labeled "quay.io registry" with arrows pointing from the two branches to it (the paths of containers, perhaps.)

Somehow, this diagram oddly turned out to be in the shape of an “F” for Fedora, yay! (It started out as a weird upside-down “L.”)

Anyhow, this diagram is meant to represent “the story” of Fedora and how you would use it, and serve as a model from which we will build the narrative for Fedora and its web presence. The core idea here is that there are three different ways of using Fedora, and the hope is that all of these ways (if not currently the default) will someday be the default using container-oriented options. Let’s walk through this diagram together and make some sense of it:

Desktop User

We start at the bottom of the “F”, at the green node labeled “Desktop User.” This is where most people come to Fedora today, and honestly, they need not go anywhere else if this is what they need and what serves them. They come to Fedora looking for a great Linux-based desktop operating system. Ideally, by default, this would be the container-based Silverblue version of Fedora’s Desktop – but one thing at a time, I suppose!

This desktop user can just hang out here and have this be their Fedora experience, and that’s totally fine. However, if they are interested in building software, or are a developer and looking to migrate to a Linux-based desktop / platform for their development work, they can bridge out from their basic desktop usage, up the spine of the letter “F” and venture into the “Fedora for Development” branches of the F.

Web/App Developer

I struggled to come up with a name for this branch: perhaps you have a better one? The idea here, is these are developers who are writing web apps mostly, kind of a “traditional” web-focused developer who is not developing for specific hardware or IoT style deployment targets (the IoT branch, which we will cover next, is for that.)

We do have users of Fedora today who use Fedora as their main development workstation. Linux as a workstation for developers is an obvious easy sell, since the apps they write are being deployed to UNIX-like environments. What is kind of compelling about Fedora – and sure maybe we could even be better at it with the focus of a narrative like this? – is that we have a lot of built-in tooling to do this type of development in a container-oriented way.

The idea here then is:

  • Get yourself set up with Fedora as a development workstation, and maybe we have affordances in Fedora itself (maybe right now they are web pages on our web site with information, later could be default apps or configurations, etc.) so that you can easily start developing using containers on your local Fedora workstation system right away.
  • As your app gets more sophisticated, and you need to use remote computing resources to get things running or get your app launched to be publicly accessible, your next step (moving right along the bottom branch of the “F” shape) would be to deploy Fedora CoreOS as a container host at your favorite cloud provider, and push your containers to that.
  • Finally, the end game and final stop on our orange web/app developer branch here is to deploy your even more sophisticated container-based app at scale, via a Kubernetes platform.

IoT Developer

I am not sure if the name for this branch is great either, but it’s basically the “Edge Computing” branch… here we have developers using Fedora who intend to deploy to specific hardware of the kind supported by the Fedora IoT Edition.

Here the story starts the same as the previous two – you begin by using Fedora as a Desktop / Workstation. Then you start developing your app using local containers. In this branch, we develop via containers locally on our Fedora Workstation, and in order to test out the code we are writing, we deploy Fedora IoT to the target device and deploy our locally-constructed containers over to Fedora IoT as a container host.

The next step to parallel the Web/App developer branch is to do this IoT, container-based development at scale, deploying to 100’s or 1000’s+ systems – we don’t really have a story for that today, but it’s a future end point worth thinking about so I left it in the model.

Containers, Containers, Containers!
An image of Jan from the Brady Bunch

If all of this is being done via the medium of containers – ok, great! Where do those containers live? Where do they go?

I don’t know the answer. I drew a “quay.io Registry” bit into the diagram with the idea that anyone can get a free account there and use it to push containers to and pull containers from, as I understand it. I don’t know that Fedora wants to be in the business of maintaining its own open container registry (Oh! TIL – we do have one.) But certainly, having a registry somewhere in this narrative would be helpful. So I drew that one in. 🙂

Um, so what does this have to do with the website?

Well, the next step from here is to share this model with all of you fine folks in the Fedora project to see if it makes sense, if anything is missing or needs correction, and to just generally suss out if this is the right story we want to tell about what Fedora is and what you can do with it.

If that works out (my initial bugging a few Fedora folks with this idea seems to indicate it may well work out), then the next step is to construct the content and structure of the Fedora website around this model to make sure the website reflects this model, and to make sure we include all the resources necessary for users of Fedora in order to use it in the way prescribed by this model.

Practically, by way of example, this could mean mention of and support for the usage of the Containers initiative suite of open-source container tooling that we ship in Fedora by default – podman, buildah, cri-o, skopeo, and friends on the Fedora website.

Feedback wanted!

Does this make sense? Is this in need of a lot of work? Does this excite you, or terrify you? Are there corrections to be made, or new ideas you’d like to add?

Your feedback, comments, questions are wholeheartedly encouraged! Let me know in the comments here, or catch me on matrix.org (@duffy:fedora.im).

USB-C – Design by Committee

Posted by Jon Chiappetta on October 13, 2021 02:59 AM

This is what happens! 😂 ¯\_(ツ)_/¯

(Embedded video: https://www.youtube.com/watch?v=8mbaNUm4UD4)

Having Fun With: DNS Records + Signed Certificates + Cryptographic Algorithms!

Posted by Jon Chiappetta on October 12, 2021 04:53 AM

So I was experimenting and if you can get signed certs from let’s-encrypt and dns records from cloud-flare, then you could store your public signed certificate as a set of split txt entries which anyone could verify with a set of trusted root certificates. You can then use the private key to sign an encryption key (stored as another txt record) along with the encrypted message (also another txt record).

This would allow you to sign, store, and send short messages (in a single direction) with confidentiality, integrity, and authenticity all through a plain text protocol (as long as the root certs exist)!


Verification Chain:

  • The message data is hashed into -> The encryption key
  • The encryption key is signed with -> The private key
  • The private signature is decrypted with -> The public key
  • The public key is embedded into -> The signed certificate
  • The signed certificate is verified with -> The root certificates
  • The end verify -> The root certificates + The domain name + The expiry time
# ./dns_fun.sh fossjon.com d

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:b3:64:ec:80:70:47:42:2a:8a:ef:b4:11:60:03:9d:23:78
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Let's Encrypt, CN=R3
        Validity
            Not Before: Oct  9 18:35:11 2021 GMT
            Not After : Jan  7 18:35:10 2022 GMT
        Subject: CN=*.fossjon.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c8:1c:f6:86:b7:b5:45:63:68:7b:e4:34:10:6e:
                    0c:51:da:73:5b:65:d4:f7:fd:c8:c7:2e:d8:8b:01:
                    2d:c5:67:a4:0e:7a:b6:57:bf:fe:2a:c9:52:4d:38:
                    51:56:a6:08:bb:5a:8f:85:32:88:c0:3c:9b:3e:ad:
                    f9:1a:aa:21:fb:b6:2f:d1:7c:bc:c1:6e:ae:d8:b4:
                    c9:87:a5:69:a4:c5:f9:1d:3a:1f:49:68:94:75:b9:
                    ab:f6:12:9f:56:4c:7f:26:f1:6e:85:9e:1e:66:be:
                    74:e0:01:91:6d:59:cb:0d:34:01:10:5c:b9:43:44:
                    52:07:6a:ca:3f:83:1e:41:6c:51:5a:a6:fa:20:8f:
                    33:40:76:90:ab:4d:04:6f:33:70:f0:09:c7:38:25:
                    26:15:70:d6:f9:f4:e6:b6:71:11:e0:7a:c6:04:86:
                    30:c9:56:f0:14:6e:9e:66:60:b3:7d:2b:42:a4:b9:
                    fb:6d:73:4f:26:2e:17:aa:a8:64:72:e2:f5:a8:b1:
                    17:8d:f4:db:a8:10:fc:70:ff:1b:cc:78:6f:04:84:
                    e0:fc:1d:15:72:de:41:bd:14:c5:26:72:3e:56:2a:
                    aa:1d:9f:1a:3c:17:40:91:21:7e:2a:b4:8a:c2:ab:
                    79:0f:dd:21:13:a1:2e:da:6a:a3:92:49:e7:f1:58:
                    36:bf
                Exponent: 65537 (0x10001)

Secret_key_123-77cf362ab2442e3fa3062d06adb571f0ea92647f0ba137300a272767d6ea0834

this is just a test of the emergency broadcast system, this is not the real thing!
#!/bin/bash

d="$1" ; z="$2" ; m="$3" ; k="$4"
echo

if [ "$z" == "e" ] ; then
  h=$(echo "$m" | openssl dgst -sha256 -r | awk '{ print $1 }' | tr -d '\t\r\n')
  t="${k}-${h}"

  e=$(echo "$m" | openssl enc -aes-256-cbc -e -k "$t" -S 00 | base64 -b 255)
  echo "$e" ; echo

  s=$(echo -n "$t" | openssl rsautl -sign -inkey privkey.pem | base64 -b 255)
  echo "$s" ; echo

  p=$(cat crt.pem | grep -iv '^---' | base64 -d | base64 -b 255)
  echo "$p" ; echo
fi

if [ "$z" == "d" ] ; then
  c=$(dig "z.pubcrt.$d" txt +short | tr -d ' "\n' | base64 -d | base64 -b 64)
  ( echo '-----BEGIN CERTIFICATE-----' ; echo "$c" ; echo '-----END CERTIFICATE-----' ) > /tmp/crt.pem

  t=$(openssl x509 -text -noout -in /tmp/crt.pem | grep -i 'exponent' -B 64)
  echo "$t" ; echo

  v=$(dig "z.pubkey.$d" txt +short | tr -d ' "\n' | base64 -d | openssl rsautl -verify -certin -inkey /tmp/crt.pem)
  echo "$v" ; echo

  o=$(dig "z.pubmsg.$d" txt +short | tr -d ' "\n' | base64 -d | openssl enc -aes-256-cbc -d -k "$v")
  echo "$o" ; echo
fi

Using Cloudflare NS Records For Better Web Proxying & DNS Service

Posted by Jon Chiappetta on October 10, 2021 05:55 PM

So I decided to switch the nameserver records on my fossjon.com domain over to Cloudflare’s service for two different reasons: they offer more advanced https reverse proxying tech, along with a better dns management interface as well! I still have the domain registered with Google Domains as they also offer pretty good mx record email forwarding via gmail.


Cloudflare won’t let you directly rewrite the HTTP HOST header field anymore, however, they will let you setup a more advanced HTTP JavaScript worker process. This process can handle the incoming web proxy requests along with the outgoing responses and perform some modifications on them. This is an extremely powerful framework and it behaves more like a proper reverse proxy server would!

/* worker */
addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
});

async function handleRequest(request) {
  /* request */
  var url = new URL(request.url);
  url.hostname = "fossjon.wordpress.com";
  //url.pathname = url.pathname.replace(/\/*$/mig, "") + "/";
  var repHost = new RegExp(url.hostname, "mig");
  var reqHost = request.headers.get("host");
  const response = await fetch(url, request);

  /* response */
  const newHead = new Response(response.body, response);
  for (var keyVal of newHead.headers.entries()) {
    if (keyVal[0].toLowerCase().includes("location")) {
      var newHost = keyVal[1].replace(repHost, reqHost);
      newHead.headers.set(keyVal[0], newHost);
    }
  }
  const resText = await response.text();
  const newBody = resText.replace(repHost, reqHost);

  /* return */
  return new Response(newBody, newHead);
}

Note: It seems like CF offers a better DNS API service, however, I couldn’t yet find a DNS backup button to help save all my records locally (hacky webarchive file but at least it’s sorted nicely).

javascript:(function(){
var t = document.getElementsByTagName("table")[0];
var s = t.getElementsByTagName("tr");
var m = {}, l = [], z = 1;
for (var i=0; i<s.length; ++i) {
  var d = s[i].getElementsByTagName("td");
  if (d.length > 4) {
    var k = d[2].innerText.trim();
    if (k.endsWith(".com")) { k = "@"; }
    if (!(k in m)) { m[k] = {"r":[]}; l.push(k); }
    var r = d[1].innerText.trim();
    if (!(r in m[k])) { m[k][r] = []; m[k]["r"].push(r); }
    m[k][r].push([d[3].innerText.trim(), d[4].innerText.trim()]);
  }
}
var b = "style='border: 1px solid black;padding: 4px;white-space: nowrap;'";
var o = "<table style='padding: 8px;'><tr><th "+b+">No.</th><th "+b+">Time</th><th "+b+">Record</th><th "+b+">Type</th><th "+b+">Value</th></tr>";
l.sort();
for (var i in l) {
  var k = l[i];
  m[k]["r"].sort();
  for (var j in m[k]["r"]) {
    var r = m[k]["r"][j];
    m[k][r].sort();
    for (var d in m[k][r]) {
      o += ("<tr><td "+b+">"+z+"</td><td "+b+">"+m[k][r][d][1]+"</td><td "+b+">"+k+"</td><td "+b+">"+r+"</td><td "+b+">"+m[k][r][d][0]+"</td></tr>"); ++z;
    }
  }
}
o += "</table>";
document.head.innerHTML = document.head.innerHTML.replace(/script/ig, "xscript");
document.body.innerHTML = o;
})();

Star Cert via Let’s Encrypt via DNS TXT via Docker Container (manual process)

Posted by Jon Chiappetta on October 09, 2021 09:10 PM

Source Code: https://github.com/stoops/dockerssl

If you want to get a wild-card certificate with let’s-encrypt then you’ll have to use the DNS verification method. I made an example Docker file and script that can quickly and easily spin up a Debian container to install and run the certbot application. You can then connect to the container via a local URL (http://127.0.0.1:8080/) and interact with the process to setup the TXT record and then verify the DNS entry and then download the signed cert chain + key pem files!

Note: I do wish Google had API access to their Domains service which would allow for automated TXT records!

$ c=fullchain.pem ; k=privkey.pem ; openssl x509 -noout -modulus -in $c | md5 ; openssl rsa -noout -modulus -in $k | md5

ca7e9eba4cde42a000038aa7dae8680b
ca7e9eba4cde42a000038aa7dae8680b
$ openssl x509 -text -noout -in fullchain.pem

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:b3:64:ec:80:70:47:42:2a:8a:ef:b4:11:60:03:9d:23:78
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Let's Encrypt, CN=R3
        Validity
            Not Before: Oct  9 18:35:11 2021 GMT
            Not After : Jan  7 18:35:10 2022 GMT
        Subject: CN=*.fossjon.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:c8:1c:f6:86:b7:b5:45:63:68:7b:e4:34:10:6e:
                    .....
                    79:0f:dd:21:13:a1:2e:da:6a:a3:92:49:e7:f1:58:
                    36:bf
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                34:94:9E:5B:B9:3C:11:0C:F3:33:3E:A1:C4:41:DA:61:64:ED:1D:97
            X509v3 Authority Key Identifier:
                keyid:14:2E:B3:17:B7:58:56:CB:AE:50:09:40:E6:1F:AF:9D:8B:14:C2:C6
            Authority Information Access:
                OCSP - URI:http://r3.o.lencr.org
                CA Issuers - URI:http://r3.i.lencr.org/
            X509v3 Subject Alternative Name:
                DNS:*.fossjon.com
            X509v3 Certificate Policies:
                Policy: 2.23.140.1.2.1
                Policy: 1.3.6.1.4.1.44947.1.1.1
                  CPS: http://cps.letsencrypt.org
    Signature Algorithm: sha256WithRSAEncryption
         48:aa:26:6c:2e:fe:ed:a8:14:3e:80:12:c3:0b:c5:f5:95:5c:
         .....
         f2:0f:4c:9d:4e:d5:df:18:4a:cd:b3:a2:be:3e:57:2f:fc:d0:
         8e:c2:03:3e
$ openssl s_client -connect lo.fossjon.com:8443

CONNECTED(00000003)
---
Certificate chain
 0 s:/CN=*.fossjon.com
   i:/C=US/O=Let's Encrypt/CN=R3
 1 s:/C=US/O=Let's Encrypt/CN=R3
   i:/C=US/O=Internet Security Research Group/CN=ISRG Root X1
 2 s:/C=US/O=Internet Security Research Group/CN=ISRG Root X1
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
---
Server certificate
subject=/CN=*.fossjon.com
issuer=/C=US/O=Let's Encrypt/CN=R3
---
No client certificate CA names sent
Server Temp Key: ECDH, X25519, 253 bits
---
SSL handshake has read 4628 bytes and written 289 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID: 5A9CA7F699D780CFD9FAFBC197FDBA14FC4307F225CE6C90E55CE0658E3055F8
    Session-ID-ctx:
    Master-Key: C84D32162158587663310FB67F482AE63CA9F964158B74C1E40806D8915E1B25AFB3DC2F22E15D58450F7CFCA0FAA8B4
    TLS session ticket lifetime hint: 7200 (seconds)
    TLS session ticket:
    0000 - 48 82 2b be 43 84 b1 13-11 7a e5 bf 39 97 89 55   H.+.C....z..9..U
    0010 - 43 41 ce 61 42 f8 16 e7-89 28 67 af 8d 73 6d 5c   CA.aB....(g..sm\
    0020 - 60 c0 13 20 cc e9 77 0d-5a 34 73 50 85 23 57 b0   `.. ..w.Z4sP.#W.
    0030 - 10 fd 8e c7 6b d4 37 8b-59 4e f4 30 b3 46 b4 d7   ....k.7.YN.0.F..
    0040 - aa c6 79 ff c0 f9 50 c2-54 f0 8e ca 64 3e 49 15   ..y...P.T...d>I.
    0050 - f5 42 fa 29 12 73 a6 f2-92 b0 a8 e0 9f 13 fa 89   .B.).s..........
    0060 - d1 8c c0 93 19 bf ea 81-32 0c 86 e7 37 42 f8 20   ........2...7B.
    0070 - f6 9d 94 d3 38 d8 c9 38-07 9f b6 99 79 b5 43 6a   ....8..8....y.Cj
    0080 - c5 11 fd a1 30 3a d6 e0-74 d3 ba b6 6f 35 47 f4   ....0:..t...o5G.
    0090 - eb c9 af c3 0f 69 95 9f-d1 4c f2 21 80 cc b5 db   .....i...L.!....
    Start Time: 1633812734
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
---
^C

Time’s Cover Page -!- I Think It’s About Time

Posted by Jon Chiappetta on October 08, 2021 08:13 PM

DNS Server: fossjon.wp.com/2021/03/24/…dns-server-blocker-forwarder/

Edit: This would be a good time to plug another amazing blog site written by Cory Doctorow (a member of the EFF) called Pluralistic!


Reddit Refresher Javascript Bookmark

Posted by Jon Chiappetta on October 08, 2021 03:51 AM

Note: This is only compatible with the good old.reddit.com website!

javascript:(function(){
function w() {
  document.title = (document.title+" . [*]");
  var y = Math.random(), x = new XMLHttpRequest();
  x.onload = function() {
    var c = 0, d = document.createElement("div");
    d.innerHTML = this.responseText;
    var l = d.getElementsByClassName("thing");
    for (var i=(l.length-1); i>=0; --i) {
      var p = (l[i].getAttribute("data-permalink") + "");
      if (!p.startsWith("/r/")) { continue; }
      var f = 0, z = [null, null]; c = 0;
      var m = document.getElementsByClassName("thing");
      for (var j=0; j<m.length; ++j) {
        var q = (m[j].getAttribute("data-permalink") + "");
        if (!q.startsWith("/r/")) { continue; }
        if (q == p) { f = 1; }
        if (!z[0]) { z[0] = m[j]; }
        c += 1; z[1] = m[j];
      }
      if (c > (36-(parseInt(y*5)+1))) {
        z[1].parentNode.removeChild(z[1]);
      }
      if (f == 0) {
        console.log(p);
        z[0].parentNode.insertBefore(l[i], z[0]);
      }
    }
    var a = (new Date() + "").replace(/[ ]*[^ ]*-.*$/, "");
    var h = ("<b><font color='red'>" + a + "</font></b> &nbsp; &nbsp; pop in: ");
    document.getElementsByClassName("menuarea")[0].getElementsByTagName("span")[0].innerHTML = h;
    document.title = ("("+c+") . reddit");
  };
  var s="?", r=location.href;
  if (r.includes("?")) { s="&"; }
  x.open("GET", r+s+"r="+y);
  x.send();
}
w(); setInterval(w, ((9 * 60) + 1) * 1000);
})();

[Short Tip] Accessing tabular nushell output for non-nushell commands

Posted by Roland Wolters on October 07, 2021 03:58 PM

After I learned how subshells can be executed within nushell, I was confident that I could handle that part. But a few minutes ago I ran into an error I didn’t really understand:

❯ rpm -qf (which dwebp)
error: Type Error
   ┌─ shell:24:16
   │
24 │ rpm -qf (which dwebp)
   │                ^^^^^ Expected string, found row

I thought the parameter was provided somehow in the wrong way, and put it into quotes: "dwebp". But it didn’t help. I tested around more with sub-shells, some of them worked while others didn’t. The error message was misleading for me, letting me think that there is a difference in how the argument is interpreted:

❯ rpm -qi (echo rpm)
Name        : rpm
Version     : 4.16.1.3
Release     : 1.fc34
[...]

❯ echo (which dwebp)
───┬───────┬────────────────┬─────────
 # │  arg  │      path      │ builtin 
───┼───────┼────────────────┼─────────
 0 │ dwebp │ /usr/bin/dwebp │ false   
───┴───────┴────────────────┴─────────

It took me a while until I understood what I was looking at – and until the error message made sense: the builtin nushell command which can give back multiple results, thus returning a table. The builtin nushell command echo returns a string!

Thus the right way to execute my query is to get the content of the cell of the table I am looking at via get:

❯ rpm -qf (which dwebp|get path|nth 0)
libwebp-tools-1.2.1-1.fc34.x86_64

Note that nth 0 is not strictly necessary here since there is only one item in the table anyway. But it might help as a reference for future examples.

You don’t have to use a pipe, by the way; there is an even shorter way available:

❯ rpm -qf (which dwebp|nth 0).path
libwebp-tools-1.2.1-1.fc34.x86_64

Decided to purchase a domain for 2021

Posted by Jon Chiappetta on October 06, 2021 09:20 PM

So I thought I’d try something new/different these days and buy a domain again! I’ve purchased some in the past but never made much use of them personally speaking…

Domain Example: fossjon.com/2021/10/06/…a-domain-for-2021/

I chose Google Domains because they offered some extra valuable & useful features:

  • easy record management
    • whois info privacy
    • backup yaml file
  • email forwarding inbound
    • including mx-records
    • including star-matching
  • web redirection proxying
    • including sub-domains
    • including preserve-paths
    • including https-certificates
  • outbound email records (spf+dmarc)

The only thing I can’t do is send email outbound with a domain address via gmail itself (without an extra smtp server setup) because Google removed the ability to modify the FROM field in the message headers directly with an alias email (it now requires a persistent external smtp auth login)!

Edit: If you can find a third-party smtp mail provider that allows you to add & verify email aliases more easily, you can instruct gmail to connect to that external smtp server with your other account and then you’ll be able to send email from an alias address via gmail directly!

Got a score of 85 which is not-too-bad for a super-simple domain-name email address!
 

$ dig fossjon.com txt | grep -i spf

fossjon.com.		550	IN	TXT	
"v=spf1 include:_spf.google.com include:_spf.mail.yahoo.com ~all"

# dig fossjon.com txt +short | grep '!' | tr -d '"' | base64 -d

A history of cell phone ownership…

Posted by Jon Chiappetta on October 06, 2021 01:21 AM

I wanted to make a historical list of phones that I’ve owned over the years and the reasons why I purchased them in particular. I generally buy phones on the ‘S’ year (tick-tock cycle) when the small improvements have been made to it over time versus the major redesign years!

2007: An important day to remember in the history books…

(Embedded video: https://www.youtube.com/watch?v=MnrJzXM7a6o)

<=2009: A long long time ago, we used to have flip phones, and at this point I had a BlackBerry Pearl !


2010: One of the first affordable smart phones I owned was the HTC Desire which ran an early version of Android OS and it was very mod-able/customize-able at the time vs the first iPhones back in those days!


2011: After some time of using an Android phone I remember my main complaint being that the battery life barely got me through the day. I then received a hand-me-down phone called the Apple iPhone 4S which had a beautifully-solid-all-glass design, much better battery life, and my first intro to iOS which felt much more put together but more limited in terms of what it allowed me to do with it!


2012: I then switched back to vanilla Android with the Google Nexus 4 because it offered a bigger screen size, a clean OS / unlocked phone, and it was a very low price compared to the competitors. Battery life again was so-so but the back glass looked amazing and sparkly!


2013: I switched back again to the Apple side with the Apple iPhone 5S even though it had a smaller screen size compared to the Android phones. I liked the square edge design in the Gold color and mainly because it was the first phone in the whole market to offer a fingerprint unlock. I grew tired of entering in the long PIN codes by hand with the Nexus and I also got the good battery life back again!


2015: I kept the 5S for a bit and then upgraded to the Apple iPhone 6S for the same reasons as before except this time it offered the bigger screen size and greater battery life. However, it was still an LCD panel compared to the Android phones which were leading the way with their more advanced screen technology!


2017: I had been waiting since the 6S for Apple to release a bigger-sized-but-less-than-6-inches, edge-to-edge OLED screen and they never did for quite some time. So I purchased the Samsung Galaxy S8, which offered the best quality screen on the market in addition to a headphone jack, a fingerprint reader, and a modern version of Android OS. My main issue with this phone was not so much the phone part but the Samsung-as-a-company part, where they only provided us with 2 years worth of OS updates for a thousand dollar phone… We also couldn’t unlock the boot-loader very easily (to upgrade the OS manually) or remove their forced apps (press F in the chat for Bixby) and I was getting worried about the security of the device over time!


2020: After nearly 5 years of waiting, Apple finally released a phone that had a smaller-than-6-inches + edge-to-edge OLED display. It was called the iPhone 12 Mini and I immediately purchased it and retired my old Samsung phone. I really appreciated the form factor of this phone and what it had to offer. Even though I do miss the headphone jack and fingerprint unlocker, it is very hard to find a small-sized, full-screen phone these days for those of us with smaller hands!


2021: And now, back to today, with the Apple iPhone 13 Mini Blue — it’s the ‘S’ year again! 🙂

A simple HTML table for comma separated value files.

Posted by Adam Young on October 01, 2021 04:02 PM

Enough of us are looking at the cluster that I want an easy to read snapshot of the node state. So, I convert the csv output of the openstack command into a simple HTML table.

Yes, there are many other better ways to do this.

This one steps over the line for a simple tool. Ideally, it would just perform the csv to table part, with no style or HTML body. But, for my purposes, it makes it much easier to visualize if I get some formatting.

#!/usr/bin/python
import sys
import csv

first = True

head = '''
<html><head>
<style>
table, td {
  border: 1px solid black;
  border-collapse: collapse;
}
</style>
</head><body>
'''.strip()

print(head)
print("<table>")

data = sys.stdin.readlines()

for line in csv.reader(data):
    # Header cells for the first row, data cells after that.
    if first:
        tag = "th"
    else:
        tag = "td"
    print("<tr>")
    for token in line:
        print("<%s>%s</%s>" % (tag, token.replace('"', ''), tag))
    print("</tr>")
    first = False

print("</table>")
print("</body></html>")
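A usage sketch, assuming the script above is saved as csv2html.py (a hypothetical name):

openstack baremetal node list -f csv | ./csv2html.py > nodes.html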

Here’s what it looks like:

(Screenshot of the rendered HTML table.)

Select only the Jades

Posted by Adam Young on October 01, 2021 03:42 PM

Some custom jq for RegEx selection of OpenStack Ironic baremetal nodes. Our server types show up in their names. I want to be able to build lists of only the Mt. Jade servers, which have names that look like this:

jade09-r097

openstack baremetal node list  --sort provision_state:asc   -c UUID -c Name -f json | jq '.[] | select(.Name | test("jade."))'

Firefox Wayland development in 2021

Posted by Martin Stransky on October 01, 2021 08:45 AM
I swear, no more crashes on Wayland!

It’s been a long time since my last update about Firefox news on Linux, and I’ve finally got some time to sum up what we’ve been working on for the last year and what’s coming. No new exciting features (from the Linux perspective) were introduced over the last year, but rather hidden yet important changes.

From the Linux desktop developers’ perspective, 2021 is the year of Wayland. KDE has been shipping a decent Wayland compositor, which became the default for Fedora 34. It’s actually pretty fast and gives you the smooth feeling of “good old times” with X11/Gtk2/name-your-favorite environment, where any graphics change was just instant, without lags or slow transitions. I must mention Robert Mader, who created a new Firefox Wayland SW backend for KDE.

As a major desktop distro (Ubuntu) is slowly moving to Wayland, we’re getting more and more Wayland Firefox users. Even the notorious troublemaker (NVIDIA) decided to step in and support it.

What’s done for next releases?

It’s good that Wayland’s market share is rising, but we also need to make sure that Firefox is ready to run there without any major issues and matches its X11 variant. There are two major areas where Firefox is behind its X11 counterpart – clipboard and popup handling. That’s due to some Wayland protocol features where we can’t simply duplicate the X11 code.

Clipboard on Wayland is similar to the X11 one, but we need to translate the asynchronous Wayland clipboard to Firefox’s/the Web’s synced one. I tried various approaches, but the best one seems to be to use the asynchronous Wayland clipboard as is and implement some kind of abstraction over it. That was implemented in Firefox 93 and it’s going to be shipped by default in Firefox 94.

On the other hand, popups are the most annoying thing we have to implement on Wayland. Firefox just expects that any popup can be created at any time without its parent (or with the main window as a parent), but Wayland requires a strict popup hierarchy. It means every window can have only one child popup. When more than one popup is opened, it has to be attached to the previously opened popup, which becomes its parent. And when any popup in the chain is closed, the popups must be rearranged to keep the chain connected. This involves all kinds of popups, like context menus, tooltips, extension popups, permission popups and so on. Plus there are some interesting bugs in the Wayland protocol or Gtk, so excitement/frustration is guaranteed, and a basic popup implementation becomes an extraordinary challenge where small changes can introduce various regressions. Despite the ‘fun with popups’, the popup tracker is almost clear and we’ll ship it in Firefox 94.

One of the main Wayland features is support for monitors with various DPI/scale factors together. Fedora’s default compositor, Mutter, shows a creative approach here and reports screen sizes differently than other compositors. As we really want to know the screen sizes, Firefox tracks monitor changes from Wayland directly and finds the correct screen by matching the screen’s top left corner point – which fortunately stays the same for all compositors. We also stop painting the Firefox window when the screen scale changes, so you should enjoy a seamless experience on systems with various screen sizes with Firefox 93.

Future plans for Firefox 95

The Firefox 95 development cycle begins next week, and I’m going to look at drag and drop, which has been partially broken for a long, long time. Some Wayland specific fixes are already in Firefox 94, but we need to rework some parts to correctly copy files from remote destinations (like an inbox) to local filesystems, fix the names of dropped files, and do tab previews of moved tabs. There are also new interesting compositor bugs, as usual 🙂

Future plans for Firefox 96

The Firefox Wayland port is generally done, and there isn’t any big difference between the X11 and Wayland variants, at least on GNOME, which Fedora uses as its default environment. We’re fixing minor bugs and keeping an eye on user reports.

For the next quarter I’d like to look at a GPU process for Wayland. The GPU process runs tasks related to graphics hardware and shields the browser from HW driver crashes. It’s also the place where VAAPI video decoding should run, properly sandboxed (right now VAAPI runs in the content process alongside general Firefox code; it’s restricted by the content sandbox, which leads to various VAAPI crashes and failures).

Upgrade of Copr servers

Posted by Fedora Infrastructure Status on October 01, 2021 07:00 AM

We're updating the copr packages to new versions, which will bring new features and bugfixes. This outage impacted the copr-frontend and the copr-backend.

What Nodes are broken?

Posted by Adam Young on October 01, 2021 01:55 AM

While I tend to think about the nodes in OpenStack terms, the people who physically move the servers around are more familiar with their IPMI addresses. We have several nodes that are not responding to IPMI requests. Some have been put into the manageable state; some are in error.

Here’s the query I used to list them.

for node in `openstack baremetal node list -f json | jq -r '.[] | select(."Provisioning State"=="manageable"  or  ."Provisioning State"=="error"   )  | .UUID' ` ; 
do openstack baremetal node show  $node -f json | jq -r  '.uuid + " " + .name  + " " + ( .driver_info | .ipmi_address |tostring)  ' ;
done



fab1bcf7-a7fc-4c19-9d1d-fc4dbc4b2281 falcon05-r097 10.76.97.171
8470e638-0085-470c-9e51-b2ed016569e1 jade02-r097 10.76.97.174
44960bfb-9ce5-4f42-9bd2-3f585ea69a85 jade08-r097 10.76.97.180
9860787a-fa63-45ab-a92f-39f97b44991e jade09-r097 10.76.97.181
43123529-e839-40ce-95f8-38ba7485667c jade10-r097 10.76.97.182
a5a14515-32b3-4f31-8490-c0aa21560449 jade11-r097 10.76.97.183

Legible Error traces from openstack server show

Posted by Adam Young on September 30, 2021 08:13 PM

If an OpenStack server (Ironic or Nova) has an error, it shows up in a nested field. That field is hard to read in its normal layout, due to JSON formatting. Using jq to strip the formatting helps a bunch.

The nested field is fault.details.

The -r option strips off the quotes.

[ayoung@ayoung-home scratch]$ openstack server show oracle-server-84-aarch64-vm-small -f json | jq -r '.fault | .details'
Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/compute/manager.py", line 2437, in _build_and_run_instance
    block_device_info=block_device_info)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 3458, in spawn
    block_device_info=block_device_info)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 3831, in _create_image
    fallback_from_host)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 3922, in _create_and_inject_local_root
    instance, size, fallback_from_host)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/driver.py", line 9243, in _try_fetch_image_cache
    trusted_certs=instance.trusted_certs)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/imagebackend.py", line 275, in cache
    *args, **kwargs)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/imagebackend.py", line 642, in create_image
    self.verify_base_size(base, size)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/virt/libvirt/imagebackend.py", line 331, in verify_base_size
    flavor_size=size, image_size=base_size)
nova.exception.FlavorDiskSmallerThanImage: Flavor's disk is too small for requested image. Flavor disk is 21474836480 bytes, image is 34359738368 bytes.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/compute/manager.py", line 2161, in _do_build_and_run_instance
    filter_properties, request_spec)
  File "/var/lib/kolla/venv/lib/python3.7/site-packages/nova/compute/manager.py", line 2525, in _build_and_run_instance
    reason=e.format_message())
nova.exception.BuildAbortException: Build of instance 5281b93a-0c3c-4d38-965d-568d79abb530 aborted: Flavor's disk is too small for requested image. Flavor disk is 21474836480 bytes, image is 34359738368 bytes.

Debugging a Clean Failure in Ironic

Posted by Adam Young on September 30, 2021 04:30 PM

My team is running a small OpenStack cluster with responsibility for providing bare metal nodes via Ironic. Currently, we have a handful of nodes that are not usable. They show up as “Cleaning failed.” I’m learning how to debug this process.

Tools

The following ipmitool commands allow us to set the machine to PXE boot, remotely power cycle the machine, and view what happens during the boot process.
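
These examples assume shell variables for the BMC address and credentials, for instance (illustrative values matching those used later in this post):

H=10.76.97.171   # BMC (IPMI) address of the node
U=admin          # IPMI user
P=admin          # IPMI password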

Power stuff:

ipmitool -H $H -U $U -I lanplus -P $P chassis power status
ipmitool -H $H -U $U -I lanplus -P $P chassis power on
ipmitool -H $H -U $U -I lanplus -P $P chassis power off
ipmitool -H $H -U $U -I lanplus -P $P chassis power cycle

Serial over LAN (SOL)

ipmitool -H $H -U $U -I lanplus -P $P sol activate

PXE Boot

ipmitool -H $H -U $U -I lanplus -P $P chassis bootdev pxe
#Set Boot Device to pxe

Conductor Log

To tail the log and only see entries relevant to the UUID of the node I am cleaning:

tail -f /var/log/kolla/ironic/ironic-conductor.log | grep $UUID

OpenStack baremetal node commands

What is the IPMI address for a node?

openstack baremetal node show fab1bcf7-a7fc-4c19-9d1d-fc4dbc4b2281 -f json | jq '.driver_info | .ipmi_address'
"10.76.97.171"

Cleaning Commands

We have a script that prepares the PXE server to accept a cleaning request from a node. It performs the following three actions (don’t do these yet):

 
 openstack baremetal node maintenance unset ${i}
 openstack baremetal node manage ${i}
 openstack baremetal node provide ${i}
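
To apply those three steps to every node stuck in the “clean failed” state, you can wrap them in a loop (a minimal sketch combining the commands above with the listing query used later in this post; only run it once the PXE server is ready to accept cleaning requests):

for i in `openstack baremetal node list --provision-state "clean failed" -f value -c UUID`; do
   openstack baremetal node maintenance unset ${i}
   openstack baremetal node manage ${i}
   openstack baremetal node provide ${i}
done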

Getting ipmi addresses for nodes

To look at the IPM power status (and confirm that IPMI is set up right for the nodes)

for node in `openstack baremetal node list -f json | jq -r '.[] | select(."Provisioning State"=="clean failed")  | .UUID' ` ; 
do    
echo $node ; 
METAL_IP=`openstack baremetal node show  $node -f json | jq -r  '.driver_info | .ipmi_address' ` ; 
echo $METAL_IP  ; 
ipmitool -I lanplus -H  $METAL_IP  -L ADMINISTRATOR -U admin -R 12 -N 5 -P admin chassis power status   ; 
done 

Yes, I did that all on one line, hence the semicolons.

A couple of other one-liners. This one selects all active nodes and gives you their node ID and IPMI IP address.

for node in `openstack baremetal node list -f json | jq -r '.[] | select(."Provisioning State"=="active")  | .UUID' ` ; do echo $node ;  openstack baremetal node show  $node -f json | jq -r  '.driver_info | .ipmi_address' ;done

And you can swap out active for other values. For example, if you want to see what nodes are in either the error or manageable states:

openstack baremetal node list -f json | jq -r '.[] | select(."Provisioning State"=="error" or ."Provisioning State"=="manageable")  | .UUID'

Troubleshooting

PXE outside of openstack

If I want to ensure I can PXE boot, outside of the OpenStack operations, I can track the state in a console. I like to have this running in a dedicated terminal: open the SOL.

ipmitool -H 10.76.97.176 -U ADMIN -I lanplus -P ADMIN sol activate

and in another, set the machine to PXE boot, then power cycle it:

ipmitool -H 10.76.97.176 -U ADMIN -I lanplus -P ADMIN chassis bootdev pxe
Set Boot Device to pxe
[ayoung@ayoung-home keystone]$ ipmitool -H 10.76.97.176 -U ADMIN -I lanplus -P ADMIN chassis power cycle
Chassis Power Control: Cycle

If the Ironic server is not ready to accept the PXE request, your server will let you know with a message like this one:

>>Checking Media Presence......
>>Media Present......
>>Start PXE over IPv4 on MAC: 1C-34-DA-51-D6-C0.
PXE-E18: Server response timeout.

ERROR: Boot option loading failed

PXE inside of a clean

openstack baremetal node list --provision-state "clean failed"  -f value -c UUID

Produces output like this:

8470e638-0085-470c-9e51-b2ed016569e1
5411e7e8-8113-42d6-a966-8cacd1554039
08c14110-88aa-4e45-b5c1-4054ac49115a
3f5f510c-a313-4e40-943a-366917ec9e44

Clean wait log entries

I’ll track what is going on in the log for a specific node by running tail -f and grepping for the uuid of the node:

tail -f /var/log/kolla/ironic/ironic-conductor.log | grep 5411e7e8-8113-42d6-a966-8cacd1554039

If you run the three commands I showed above, the Ironic server should be prepared for cleaning and will accept the PXE request. I can execute these one at a time and track the state in the conductor log. If I kick off a clean, eventually, I see entries like this in the conductor log (I’m removing the time stamps and request ids for readability):

ERROR ironic.conductor.task_manager [] Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "clean failed" from state "clean wait"; target provision state is "available"
INFO ironic.conductor.utils [] Successfully set node 5411e7e8-8113-42d6-a966-8cacd1554039 power state to power off by power off.
INFO ironic.drivers.modules.network.flat [] Removing ports from cleaning network for node 5411e7e8-8113-42d6-a966-8cacd1554039
INFO ironic.common.neutron [] Successfully removed node 5411e7e8-8113-42d6-a966-8cacd1554039 neutron ports.

Manual abort

And I can trigger this manually if a run is taking too long by running:

openstack baremetal node abort  $UUID

Kick off clean process

The command to kick off the clean process is

openstack baremetal node provide $UUID

In the conductor log, that should show messages like this (again, edited for readability)

Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "cleaning" from state "manageable"; target provision state is "available"
Adding cleaning network to node 5411e7e8-8113-42d6-a966-8cacd1554039
For node 5411e7e8-8113-42d6-a966-8cacd1554039 in network de931fcc-32a0-468e-8691-ffcb43bf9f2e, successfully created ports (ironic ID: neutron ID): {'94306ff5-5cd4-4fdd-a33e-a0202c34d3d0': 'd9eeb64d-468d-4a9a-82a6-e70d54b73e62'}.
Successfully set node 5411e7e8-8113-42d6-a966-8cacd1554039 power state to power on by rebooting.
Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "clean wait" from state "cleaning"; target provision state is "available"

PXE during a clean

At this point, the most interesting thing is to see what is happening on the node. ipmitool sol activate provides a running log. If you are lucky, the PXE process kicks off and a Debian-based kernel should start booting. My company has a specific login set for the machines:

debian login: ampere
Password:
Linux debian 5.10.0-6-arm64 #1 SMP Debian 5.10.28-1 (2021-04-09) aarch64

Debugging on the Node

After this, I use sudo -i to run as root.

$ sudo -i 
...
# ps -ef | grep ironic
root        2369       1  1 14:26 ?        00:00:02 /opt/ironic-python-agent/bin/python3 /usr/local/bin/ironic-python-agent --config-dir /etc/ironic-python-agent.d/

Looking for logs:

ls /var/log/
btmp	ibacm.log  opensm.0x9a039bfffead6720.log  private
chrony	lastlog    opensm.0x9a039bfffead6721.log  wtmp

No ironic log. Is this thing even on the network?

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0f0np0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 98:03:9b:ad:67:20 brd ff:ff:ff:ff:ff:ff
3: enp1s0f1np1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 98:03:9b:ad:67:21 brd ff:ff:ff:ff:ff:ff
4: enxda90910dd11e: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether da:90:91:0d:d1:1e brd ff:ff:ff:ff:ff:ff

Nope. Ok, lets get it on the network:

# dhclient
[  486.508054] mlx5_core 0000:01:00.1 enp1s0f1np1: Link down
[  486.537116] mlx5_core 0000:01:00.1 enp1s0f1np1: Link up
[  489.371586] mlx5_core 0000:01:00.0 enp1s0f0np0: Link down
[  489.394050] IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready
[  489.400646] mlx5_core 0000:01:00.0 enp1s0f0np0: Link up
[  489.406226] IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready
root@debian:~# [  500.596626] sr 0:0:0:0: [sr0] CDROM not ready.  Make sure there is a disc in the drive.
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:03:9b:ad:67:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.97.178/24 brd 192.168.97.255 scope global dynamic enp1s0f0np0
       valid_lft 86386sec preferred_lft 86386sec
    inet6 fe80::9a03:9bff:fead:6720/64 scope link 
       valid_lft forever preferred_lft forever
3: enp1s0f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 98:03:9b:ad:67:21 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9a03:9bff:fead:6721/64 scope link 
       valid_lft forever preferred_lft forever
4: enxda90910dd11e: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000
    link/ether da:90:91:0d:d1:1e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d890:91ff:fe0d:d11e/64 scope link 
       valid_lft forever preferred_lft forever

And…quite shortly thereafter in the conductor log:

Agent on node 5411e7e8-8113-42d6-a966-8cacd1554039 returned cleaning command success, moving to next clean step
Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "cleaning" from state "clean wait"; target provision state is "available"
Executing cleaning on node 5411e7e8-8113-42d6-a966-8cacd1554039, remaining steps: []
Successfully set node 5411e7e8-8113-42d6-a966-8cacd1554039 power state to power off by power off.
Removing ports from cleaning network for node 5411e7e8-8113-42d6-a966-8cacd1554039
Successfully removed node 5411e7e8-8113-42d6-a966-8cacd1554039 neutron ports.
Node 5411e7e8-8113-42d6-a966-8cacd1554039 cleaning complete
Node 5411e7e8-8113-42d6-a966-8cacd1554039 moved to provision state "available" from state "cleaning"; target provision state is "None"

Cause of Failure

So, in our case, the issue seems to be that the IPA image does not have DHCP enabled.

Nest With Fedora 2021 recordings now available!

Posted by Fedora Community Blog on September 30, 2021 11:37 AM

I am happy to announce the recordings for Nest With Fedora are now up on the Fedora YouTube channel. You can search for the ones you want or watch the whole playlist! There are 56 videos to peruse full of Fedora information and friends! Nest with Fedora 2021 was another huge virtual event success for our community. The event garnered 900+ registrations with an 81% turnout (4% above industry standard). This is almost double our numbers from Nest in 2020. A huge welcome to all the newcomers. We are so glad you are part of the Fedora community!

This year at Nest with Fedora we had eight sponsors and two media partners. Our sponsors were Red Hat, AWS, AlmaLinux, Lenovo, openSUSE, Gitlab, Datto, and Shells.com. Our media partners were It’s FOSS and the Destination Linux Network. Another round of thanks to all of our sponsors & media partners who helped make Nest a success. We also used the WorkAdventure platform to host our custom Fedora Museum, designed by Dhairya Chaudhary. I saw and received so many positive pieces of feedback about the Fedora Museum, and the good news is we are going to keep using it for future events!

The post-event survey for Nest with Fedora 2021 showed some interesting data. From this survey—as well as informal polls during the event—we estimate that about half of the attendees had never participated in a Fedora event before, even if they are occasional contributors. 70.6% of respondents feel they have a better understanding of the Fedora Project after attending Nest With Fedora. The highlight of the survey results is a 4.33/5.00 satisfaction rating for the event overall!

We look forward to 2022, whether we Flock or Nest with Fedora. I can’t wait to get everyone back together. This is under discussion and it isn’t clear yet what our contributor conference will look like next year. I will say the most ideal outcome would be a hybrid event. Look forward to an update in the first couple months of 2022.

On a final note: please enjoy the videos and share them. A big thanks to everyone who participated at Nest with Fedora 2021. Cheers!

The post Nest With Fedora 2021 recordings now available! appeared first on Fedora Community Blog.

Fedora Linux 35 Beta released

Posted by Fedora fans on September 29, 2021 12:11 PM

The Fedora Project has announced the release of Fedora Linux 35 Beta. The final Fedora 35 release is planned for the end of October.

Fedora 35 Beta is available for download now from the Fedora Project website:

Download Fedora 35 Workstation Beta

Download Fedora 35 Server Beta

Download Fedora 35 IoT Beta

Alternatively, you can use one of the other popular Fedora variants, which include desktops such as KDE Plasma, XFCE and others, and are also available for the ARM architecture, for devices such as the Raspberry Pi 2 and 3:

Download Fedora Spins 35 Beta

Download Fedora Labs 35 Beta

Download Fedora Linux 35 Beta for ARM

The most important changes in Fedora 35 Beta are covered below.

Fedora Workstation:

Fedora 35 Workstation Beta includes GNOME 41, the newest release of the GNOME desktop environment. GNOME 41 includes a redesigned Software application that makes it easier to find and install the tools you need.

Also, when you enable third-party repositories during installation, you will have access to flatpaks via Flathub. Fedora 35 Workstation also includes power-profiles-daemon, which lets you choose between optimizing for system performance or battery life.

Other updates:

Fedora Kinoite is released, a new edition based on the KDE Plasma environment and rpm-ostree technology. Like Fedora Silverblue, Kinoite provides features such as atomic updates and an immutable operating system for increased reliability.

Other updates include programming languages such as Python 3.10, Perl 5.34 and PHP 8.0.

As you know, this is a beta release and may contain bugs and issues; to report them you can use the Fedora Project mailing list or the fedora-qa IRC channel.

The post Fedora Linux 35 Beta released first appeared on Fedora Fans (طرفداران فدورا).

Cockpit 254

Posted by Cockpit Project on September 29, 2021 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 254 and cockpit-machines 253:

Overview: Move last login to Health Card

Previously, the last successful or failed logins were shown in a dismissable alert, which took up a lot of space. Login information has been moved to the Health card.

screenshot of move last login to health card

A failed login:

screenshot of move last login to health card

Users: Login history

The user account details page now includes a list of your 15 most recent logins.


Login: Arch Linux Branding

Cockpit now includes branding for the Arch Linux distribution.

screenshot of arch linux branding

Webserver: Restrict frame embedding to same origin

Cockpit’s web server now sets the X-Frame-Options header to explicitly disallow frame embedding within a different origin. Thanks to cookie policy restrictions, this was already prevented in most cases, as embedded frames would always show the login page. With this new option, the browser directly forbids cross-origin embedding and shows an explanatory error page instead. (CVE-2021-3660)
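
You can spot-check the header against a running instance (a quick sketch; 9090 is Cockpit’s default port, and the exact sameorigin value shown is an assumption based on the same-origin behavior described above):

curl -skI https://localhost:9090 | grep -i x-frame-options
# expected, assuming the default configuration:
# X-Frame-Options: sameorigin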


Machines: Support adding and removing host devices

Host devices are physical devices on the machine running virtual machines. This includes devices from USB (mice, cameras, keyboards) and PCI (NICs, GPUs). Users can now assign and detach host devices for each VM.

Please note: When a device is assigned to a VM, it can no longer be used by the host.


Try it out

Cockpit 254 and cockpit-machines 253 are available now.

Ampere

Posted by Adam Young on September 28, 2021 04:45 PM

Time for a change, and a big one at that. As of September 20th, I am now a full-time employee of Ampere Computing. I am on the software development team, working on open source stuff. That means the Linux kernel and OpenStack, among other things.

I’ll post more on the why in the future: why I left Red Hat, and why I specifically chose Ampere. Both deserve a well-formed explanation, as both are very important to me. My head is not there yet; it is in code and machines and processes.

I’m getting Amped.

Trying to live a simpler life

Posted by Jon Chiappetta on September 28, 2021 04:28 PM

Phone History: fossjon.wp.com/2021/10/05/…a-history-of-cell-phone-ownership/

Gear List:

  • Old Macbook Air
  • iPhone 13 Mini
  • Analog mechanical watch
  • Airtag in case I get lost!
  • Basic Networking Components: AP, Switch, Router, Firewall
  • Cable Internet Modem

So I have been waiting and saving up for the future-rumoured M1X Macbook Pro (while still hanging on to my 2017 Macbook Air, for nearly 5 years now). I also have been trying to support the sales of the iPhone Mini because it’s such a great form factor and size and the rumours are saying that Apple may not produce it next year with the iPhone 14! :/

– –

Since I have been working from home during this fall/winter season up north, in the woods, I also setup a mini-network here with a nice: UniFi tri-band-ac POE-UAP, Netgear gigabit-ethernet POE-SWITCH, and the famous TP-Link archer C7-V5 OpenWRT ROUTER+FIREWALL. These all make for a great, stable, and reliable home network configuration when used together! 🙂

How to rebase to Fedora Silverblue 35 Beta

Posted by Fedora Community Blog on September 28, 2021 04:00 PM

Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update to F35 Beta on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert back if anything unforeseen happens.

Prior to the update to Fedora 35 Beta, apply any pending upgrades.

Updating using terminal

Because the Fedora 35 Beta is not available in GNOME Software, the whole upgrade must be done through a terminal.

First, check if the 35 branch is available, which should be true now:

$ ostree remote refs fedora

You should see the following line in the output:

fedora:fedora/35/x86_64/silverblue

Next, rebase your system to the Fedora 35 branch.

$ rpm-ostree rebase fedora:fedora/35/x86_64/silverblue

Finally, the last thing to do is restart your computer and boot to Fedora Silverblue 35 Beta.

How to revert

If anything bad happens — for instance, if you can’t boot to Fedora Silverblue 35 Beta at all — it’s easy to go back. Pick the previous entry in the GRUB boot menu, and your system will start in its previous state. To make this change permanent, use the following command:

$ rpm-ostree rollback
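
To see which deployments exist before or after rolling back, you can list them (a quick check; rpm-ostree status is a standard command, though not part of the original article):

$ rpm-ostree status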

That’s it. Now you know how to rebase to Fedora Silverblue 35 Beta and back. So why not do it today?

The post How to rebase to Fedora Silverblue 35 Beta appeared first on Fedora Community Blog.

How to stake on NYM Validator 🐳🐳🐳

Posted by Pablo Iranzo Gómez on September 28, 2021 03:20 PM

As described in the article about mixnodes and validators, NYM is a technology that aims to provide privacy for communications.

Once you get some tokens (PUNK at this time), you can use the web wallet to check your account balance and delegate it to mixnodes or gateways… but using the binaries, you can additionally delegate to validators.

To do this, we first need the nymd binary on our system. Follow the compilation procedure from the validator documentation at https://nymtech.net/docs/run-nym-nodes/validators/, but skip the remaining parts.

Specifically, the binaries we’re interested in are:

  • libwasmvm.so
  • nymd

Restoring the wallet

When you created your wallet at https://testnet-milhon-wallet.nymtech.net/ you got a mnemonic phrase that can be used to access or restore it… we need that phrase now (keep it private and do not share it with anyone).

So… first things first, we need to restore the wallet with:

nymd --keyring-backend=os keys add youruser --recover

Just make sure to use the proper keyring-backend, like os or file, to store the key, and pick the name youruser for holding and referring to it in later commands.

Once the key is created (restored), it will output the punkADDRESS we need to use later on.

Set the variables we’re going to use:

OPERADDRESS=punkvaloper1875deee8zecl6smhl2zg42ulpgsjn80cj8vq3x
WALLET=$YOURpunkADDRESS  # address from the step above
BACKEND=file             # the backend you use for your keyring
FROM=$YOURACCOUNT        # the name of the account you restored in the previous step (youruser in the example above)

Staking against a validator

We need to know the address of the validator, so go to the explorer at https://testnet-milhon-blocks.nymtech.net/validators, decide which one we want to stake on, and, on its details page, get the address, for example punkvaloper1xq1kABCDEqumupju86ljzlj6q2lqhdz2ne76gv; let’s store it in a variable named VALIDATOR.
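
For example, using the illustrative address above:

VALIDATOR=punkvaloper1xq1kABCDEqumupju86ljzlj6q2lqhdz2ne76gv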

To stake, we also need to know our current balance, but since we are not running nymd ourselves and only use it as a client, we need to point it at a validator with the 26657 port open:

nymd query bank balances ${WALLET}  --node "tcp://testnet-milhon-validator1.nymtech.net:26657"

Once we know the balance (a value expressed in upunk), we should account for the network fee (5000upunk) and store the amount to stake, as a number, in a variable named BALANCE.
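
For example, if the balance query reports 1000000upunk (an illustrative number), leave room for the fee:

BALANCE=995000  # 1000000 minus the 5000upunk network fee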

Let’s stake with this command:

nymd tx staking delegate --node "tcp://testnet-milhon-validator1.nymtech.net:26657" -y ${VALIDATOR} ${BALANCE} --from ${FROM} --keyring-backend=${BACKEND} --chain-id "testnet-milhon" --gas="auto" --gas-adjustment=1.15 --fees 5000upunk

This will add the delegation and will start appearing on the explorer for the chosen validator.

Claiming rewards

After we have staked for a while, we may be able to claim rewards; note that this still requires ‘gas’ in the form of upunk.

First, let’s check our balance again with:

nymd query bank balances ${WALLET}  --node "tcp://testnet-milhon-validator1.nymtech.net:26657"

Let’s claim the rewards:

~/.nymd/nymd --node "tcp://testnet-milhon-validator1.nymtech.net:26657" tx distribution withdraw-rewards -y ${VALIDATOR} --from ${FROM} --keyring-backend=${BACKEND} --chain-id='testnet-milhon' --gas='auto' --gas-adjustment=1.15 --fees 5000upunk

And let’s check the balance again with

nymd query bank balances ${WALLET}  --node "tcp://testnet-milhon-validator1.nymtech.net:26657"

If everything went fine, and we kept the tokens there for a while, we should see a growing number of tokens coming back, which… can be delegated again.

Warning

Each transaction requires paying a fee, so do not rush things until you have estimated how your staking rewards are accruing.

Enjoy!

Announcing the release of Fedora Linux 35 Beta

Posted by Fedora Magazine on September 28, 2021 02:35 PM

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 35 Beta, the next step towards our planned Fedora Linux 35 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

Fedora Workstation

Fedora 35 Workstation Beta includes GNOME 41, the newest release of the GNOME desktop environment. GNOME 41 includes a redesigned Software application that makes it easier to find and install the tools you need. And now when you enable third-party repositories during installation, you’ll get access to flatpaks from Flathub to supplement the Fedora repository. Fedora 35 Workstation also includes power-profiles-daemon, which allows you to choose between optimizing for system performance or battery life.

Other updates

New in this release is Fedora Kinoite—a KDE Plasma environment based on rpm-ostree technology. Like Fedora Silverblue, Kinoite provides atomic updates and an immutable operating system for increased reliability.

Fedora Linux 35 builds on the switch to PipeWire for managing audio by introducing WirePlumber as the default session manager. WirePlumber enables customization of rules for routing streams to and from devices.

Of course, there’s the usual update of programming languages and libraries: Python 3.10, Perl 5.34, PHP 8.0 and more!

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the test mailing list or in the #fedora-qa channel on Libera.chat. As testing progresses, common issues are tracked on the Common F35 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

More information

For more detailed information about what’s new on Fedora Linux 35 Beta release, you can consult the Fedora Linux 35 Change set. It contains more technical information about the new packages and improvements shipped with this release.

Fedora Linux 35 Beta released

Posted by Charles-Antoine Couret on September 28, 2021 01:30 PM

This Tuesday, September 28, the Fedora Project community is pleased to learn of the availability of the Beta version of Fedora Linux 35.

Despite the stability risks of a Beta version, it is important to test it! By reporting bugs now, you will discover the new features before everyone else, while improving the quality of Fedora Linux 35 and at the same time reducing the risk of delays. Development versions lack the testers and feedback needed to meet their goals.

The final release is currently scheduled for October 19 or 26.

User experience

  • Switch to GNOME 41;
  • In line with GNOME's new power-management feature, Fedora installs the power-profiles-daemon package by default to control the system's power policy (performance, balanced, or power saver) via DBus;
  • GNOME Software and GNOME Initial Setup offer the user an option to enable third-party repositories;
  • A third-party repository named fedora-flathub-filter is added, exposing Flatpak applications from Flathub selected by Fedora. The usual Flathub setup is still required to access all of its applications;
  • WirePlumber will now manage PipeWire sessions for audio, rather than what PipeWire used internally;
  • Fedora Kinoite becomes an official variant. It is the equivalent of Fedora Silverblue with KDE Plasma as the default desktop environment.

Hardware support

  • The Fedora Cloud image supports hybrid BIOS and UEFI boot;
  • LUKS-encrypted partitions will have their sector size set automatically based on the underlying hardware, to improve performance. Until now the size was fixed at 512 bytes per sector; it should now be 4096 bytes per sector in most cases.

Internationalization

  • IBus is updated to version 1.5.25;
  • The default input method for Indo-Aryan languages changes from Inscript to Enhanced Inscript keymaps.

System administration

  • The Fedora base image no longer ships the sssd-client and util-linux packages, to reduce the size of Fedora-based containers;
  • The Anaconda installer supports profile files rather than product configuration files, making it more generic;
  • systemd-resolved supports DNS over TLS (DoT) if the DNS server configured by the user supports this feature. This adds a cryptographic layer to DNS queries;
  • The Fedora Cloud image uses the btrfs filesystem by default;
  • User passwords in /etc/shadow are hashed with yescrypt by default;
  • Updating a package that ships a user-level systemd service will restart that service at the end of the update. Previously this was only done for systemd as PID 1 at the system level;
  • The libvirt virtualization manager now has one daemon per module, for more flexibility and reliability. The libvirtd.service unit is removed in favor of virtqemud.service, virtxend.service, virtlxcd.service, virtinterfaced.socket, virtnetworkd.socket, virtnodedevd.socket, virtnwfilterd.socket, virtproxyd.socket, virtsecretd.socket and virtstoraged.socket;
  • The Cyrus SASL library moves from Berkeley DB to GDBM for database management. Affected packages will have their databases converted automatically via the command:
cyrusbdb2current <sasldb path> <new_path>
  • The SSSD cache for local users can be enabled or disabled on the fly, and it is no longer started by default;
  • The firewalld dynamic firewall is updated to version 1.1.0;
  • The authselect-compat package is removed; as a result the authconfig tool disappears in favor of authselect, which has been the default since Fedora 28;
  • The libusb package is renamed to libusb-compat-0.1, and libusbx to libusb1;
  • RPM is updated to version 4.17.

Development

  • The binutils tool collection moves to version 2.37;
  • The GNU toolchain is updated with GCC 11, glibc 2.34 and GDB 10.2;
  • Likewise, the LLVM suite moves to its 13th version;
  • Boost, the general-purpose C++ library, steps on the gas up to version 1.76;
  • Node.js 16 is offered by default. Versions 14 and 12 remain available as optional modules;
  • Python 3.10 is deployed, while Python 3.5 is removed entirely;
  • Sphinx, the famous Python documentation generator, reaches its 4th version;
  • The Perl language moves to version 5.34;
  • The functional, concurrent programming language Erlang 24 is available;
  • Its neighbor Haskell gets the GHC 8.10 compiler and version 18 of its Stackage distribution;
  • The PHP 8.0 language makes its appearance;
  • MinGW, the toolchain for building Windows binaries, is updated;
  • The SDL 2.0 graphics library will provide compatibility with version 1.2, rather than having that old version installed;
  • The libmemcached package uses the code of libmemcached-awesome instead of the original project, which has been unmaintained for 7 years. It remains API- and ABI-compatible;
  • Debuginfod is used by default to obtain source code and other debugging data when needed, rather than installing the corresponding debug packages.

Fedora Project

  • The /etc/os-release file reports the system name as Fedora Linux rather than Fedora. This highlights the distinction between the Fedora project and the system itself, which is now called Fedora Linux;
  • The policy on choosing a compiler to build a package evolves to give packagers more latitude. The packager can choose GCC or Clang/LLVM whether or not GCC is fully supported by the upstream project. Previously only GCC could be used, unless the project officially supported only Clang;
  • The Python packaging policy has been updated to encourage working together with upstream Python and the other distributions;
  • Additionally, fewer Python packages will depend on python3-setuptools;
  • A new glibc-gconv-extra package is added to handle encodings other than UTF-*, unicode, ISO-8859-1, ISO8859-15, CP1252 and ANSI_X3.110. To save 8 MiB on a minimal image, only those encodings are provided by default with glibc;
  • Packages will be built without -ffat-lto-objects by default; packages that need it will have to add it themselves;
  • The OpenSUSE macro for defining the minimum memory required per build process during parallel builds is imported:
%limit_build -m 8192

This prevents large projects such as chromium from failing for lack of memory.

  • When building an RPM package, the RPATH is checked and can cause the build to fail if it does not follow the Fedora project guidelines;
  • The Release and changelog fields of an RPM package can be auto-generated by rpmautospec.

Testing

During the development of a new Fedora Linux release, such as this Beta, the project offers test days almost every week. The goal is to spend a day testing a specific feature such as the kernel, Fedora Silverblue, upgrades, GNOME, internationalization, etc. The QA team designs and offers a series of tests that are generally simple to run. You just follow them and report whether the result is as expected. If not, a bug report should be filed so a fix can be developed.

It is very simple to follow and often takes little time (15 minutes to an hour at most) if you have a usable Beta at hand.

The tests to run and the reports are handled via the following page. I regularly announce on my blog when a test day is planned.

If the adventure interests you, the images are available via Torrent or from the official site.

If you already have Fedora Linux 34 or 33 on your machine, you can upgrade to the Beta. It amounts to one big update; your applications and data are preserved.

In either case, we recommend backing up your data beforehand.

If you hit a bug, don't forget to re-read the documentation on reporting issues on Bugzilla, or to contribute to translation on Weblate. Remember to check the already-known bugs for Fedora 35.

Happy testing, everyone!

Installing Fedora 34 on my Turing Pi 7 node cluster

Posted by Richard W.M. Jones on September 28, 2021 12:06 PM

I now have Fedora 34 running on all 7 nodes of my Turing Pi 1 cluster. It was tedious to install, so these are just my notes.

Compile https://github.com/raspberrypi/usbboot

Insert the compute module to be programmed, set the jumper to flash mode, connect the cable, power it on, and run ./rpiboot.

/dev/sdX will appear once the CM is in the right mode. Following the instructions here:

arm-image-installer --image=$HOME/Fedora-Server-34-1.2.aarch64.raw.xz --target=rpi3 --media=/dev/sdb --resizefs --relabel --addkey $HOME/.ssh/id_rsa.pub

This takes quite a long time to run. You also need to kill initial-setup:

virt-customize -a /dev/sdb --link /dev/null:/etc/systemd/system/initial-setup.service --selinux-relabel

If it’s successful, reboot the Turing Pi with the jumper in the boot setting and check that the first CM comes up (I used an HDMI monitor).

Repeat with the other 6 boards(!)

Edit: I couldn’t work out how to get stable MAC addresses out of the box. To set a stable MAC address manually:

nmcli c modify "Wired connection 1" 802-3-ethernet.cloned-mac-address aa:bb:cc:dd:ee:ff

Why people think that I am an IBM Power Champion?

Posted by Peter Czanik on September 28, 2021 11:50 AM

Whenever I talked to people about POWER, someone would ask if I am an IBM Power Champion. My response was that I do not even know what that is, and that I am not affiliated with IBM in any way. Recently I came across a blog post by Torbjörn Appehl which describes what an IBM Power Champion is and lists the European champions: https://builtonpower.com/2021/09/the-2021-ibm-power-champions-in-europe/.

Finally I know what an IBM Power Champion is, and I feel honored to be mistaken for one of them :-) Normally I do not care much about titles: I have seen too many empty people with good-sounding titles, and fantastic people without any titles. Even with this background I’d be proud to wear the IBM Power Champion badge. However, being a loud POWER advocate does not mean that I feel I am active enough to warrant this badge.

I must admit that, knowing what an IBM Power Champion is, I am not surprised at all that I was mistaken for one. I am a long-time POWER user. I started with RS6000 boxes almost 30 years ago. I helped install the largest POWER server in Hungary at the turn of the century. I supported Linux on the Genesi Pegasos, a PowerPC workstation, for many years. I was an active contributor and moderator on the power-developer forums and on power.org. And recently I have supported syslog-ng on POWER. POWER9 provided the best syslog-ng performance for years, and I have a strong suspicion that after a short break the release of POWER10 gives the performance crown back to the POWER architecture. Read the article I wrote based on my OpenPOWER conference talk last year to see my history in detail, and why I am not that active recently: I’m a POWER user

So why do people have the impression that I actively work on POWER technologies? I guess it’s because of my job. If I am enthusiastic about a technology, I talk about it loud and clear, even if it is not part of my job. And my enthusiasm is contagious. I am a technology evangelist, and by definition that means I advocate technologies and help them in many possible ways. In my job I work with sudo and syslog-ng; however, if I like something, it receives the same treatment – in my free time. You can learn more about being an open source evangelist from my article on opensource.com: What is an open source evangelist?

Of course, I have plans related to POWER, even if I’m not too active. I’d love to test syslog-ng on a POWER10 server. However, I’m patient. No matter how much I love syslog-ng, an IBM Power E1080 server is overkill for syslog-ng both in price and performance. Especially since syslog-ng has an upper limit of 64 threads, and this server has more cores than that :-) But once POWER10 is more widespread and smaller boxes are available as well, I’d love to verify my assumption that POWER10 is the best currently available CPU for syslog-ng :-)

And what can I say to POWER and POWER users? Live long and prosper!

Bash Shell Scripting for beginners (Part 1)

Posted by Fedora Magazine on September 27, 2021 08:00 AM

As the title implies, this article will cover Bash shell scripting at a beginner level. I’m not going to review the history of Bash, but there are many resources to fill you in, or you can visit the GNU project at https://www.gnu.org/software/bash/. We will start out by understanding some very basic concepts and then start to put things together.

Creating a script file

The first thing to do is create a script file. First make sure the home directory is the current directory.

cd ~

In the home directory, create the example file. This can be named anything but learnToScript.sh will be used in this article.

touch learnToScript.sh

From this point there will be a file called learnToScript.sh in your home directory. Verify it exists, and also notice that the privileges for that file are -rw-rw-r--, by typing the following.

ls -l
[zexcon@trinity ~]$ ls -l
total 7
drwxr-xr-x. 1 zexcon zexcon   90 Aug 30 13:08 Desktop
drwxr-xr-x. 1 zexcon zexcon   80 Sep 16 08:53 Documents
drwxr-xr-x. 1 zexcon zexcon 1222 Sep 16 08:53 Downloads
-rw-rw-r--. 1 zexcon zexcon   70 Sep 17 10:10 learnToScript.sh
drwxr-xr-x. 1 zexcon zexcon    0 Jul  7 16:04 Music
drwxr-xr-x. 1 zexcon zexcon  318 Sep 15 13:53 Pictures
drwxr-xr-x. 1 zexcon zexcon    0 Jul  7 16:04 Public
drwxr-xr-x. 1 zexcon zexcon    0 Jul  7 16:04 Videos
[zexcon@trinity ~]$ 

There is one more thing that needs to be done to get started. Let’s try and execute the script with nothing written in it. Type the following:

./learnToScript.sh
[zexcon ~]$ ./learnToScript.sh
bash: ./learnToScript.sh: Permission denied

You get permission denied because there are no execute permissions on the file. You need to change the permissions of the file to be able to execute the script. If you are not familiar with permissions I would recommend reading the Fedora Magazine articles written by Paul W. Frields:

Command line quick tips: Permissions

Command line quick tips: More about permissions

At this point you’ve brushed up on permissions, so back to the terminal and let’s change the learnToScript.sh file so it will execute. Type in the following to allow execution of the file.

chmod 755 learnToScript.sh
[zexcon@trinity ~]$ ls -l
total 7
drwxr-xr-x. 1 zexcon zexcon   90 Aug 30 13:08 Desktop
drwxr-xr-x. 1 zexcon zexcon   80 Sep 16 08:53 Documents
drwxr-xr-x. 1 zexcon zexcon 1222 Sep 16 08:53 Downloads
-rwxr-xr-x. 1 zexcon zexcon   70 Sep 17 10:10 learnToScript.sh
drwxr-xr-x. 1 zexcon zexcon    0 Jul  7 16:04 Music
drwxr-xr-x. 1 zexcon zexcon  318 Sep 15 13:53 Pictures
drwxr-xr-x. 1 zexcon zexcon    0 Jul  7 16:04 Public
drwxr-xr-x. 1 zexcon zexcon    0 Jul  7 16:04 Videos
[zexcon@trinity ~]$ 

Okay, now you’re ready; you have read, write and execute permissions (-rwxr-xr-x) on the learnToScript.sh file.

Editing a script file

Take a moment and make certain you are familiar with vim or any text editor. Throughout this article I will be utilizing vim. At the command prompt type the following:

vim learnToScript.sh

This will bring you to an empty text file with a bunch of tildes in it. Type i on your keyboard and this will move you into — INSERT — mode. You can see it’s in this mode by looking at the bottom left of the terminal window. (Note that an alternative editor is the nano editor.)

From here you need to make sure that the file is recognized by the correct interpreter. So enter the shebang ( #! ) followed by the path to bash, /bin/bash:

#!/bin/bash

One last thing that you will use throughout this article is saving the document. Hit Esc to leave insert mode, then Shift + Colon. At the colon, enter wq. This will write (w) the file and quit (q) vim once you hit Enter.

A few things to remember while using vim: anytime you want to write in a document you need to enter i, and you will see --INSERT-- at the bottom. Anytime you want to save, hit Esc to leave insert mode, then Shift+: and enter w to write the file, or Esc then Shift+: and q to quit without saving. Or combine both as wq to write and close. Esc by itself will exit INSERT mode. You can find much more about vim at its website or this getting-started site.

Let’s start scripting…

echo

The echo command is used to return something to the terminal. You will notice that you can use single quotes, double quotes or no quotes. So let’s take a look at it with a traditional Hello World!

#!/bin/bash

echo Hello World!
echo 'Hello World!'
echo "Hello World!"
[zexcon ~]$ ./learnToScript.sh
Hello World!
Hello World!
Hello World!
[zexcon ~]$ 

Notice that you get the same result with all three options. This is not always the case, but in this basic script it is. In some circumstances the type of quotes will make a difference (see the example below). By the way, congratulations, you have written your first Bash script. Let’s look at a few things that you will want to know as you continue to create more scripts and let your mind run wild.
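
For instance, double quotes expand variables while single quotes keep the text literal (a small preview of variable expansion, which this article has not introduced yet):

#!/bin/bash

name=World
echo "Hello $name!"
echo 'Hello $name!'
[zexcon ~]$ ./learnToScript.sh
Hello World!
Hello $name!
[zexcon ~]$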

Command Substitution $( ) or ` `

Command substitution allows you to take the result of a command you might execute at the command line and assign it to a variable. For example, if you type ls at the command prompt you will get a listing of the current working directory. So let’s put this into practice. You have two options for command substitution. Note that the first option uses a backtick, found above the Tab key on the left side of the keyboard and paired with the tilde ~ key. The second option wraps the command in $( ).

#!/bin/bash

command1=`ls`
echo $command1

command2=$(ls)
echo $command2
[zexcon ~]$ ./learnToScript.sh 
Desktop Documents Downloads learnToScript.sh Music Pictures Public snap Videos
Desktop Documents Downloads learnToScript.sh Music Pictures Public snap Videos
[zexcon ~]$ 

Notice there is no space between the variable, the equal sign, and the start of the command. You get the exact same result with both options. Note that variables need to be prefixed with a dollar sign when you use them. If you forget and echo the command variable without the dollar sign, you will just see the name of the variable, as shown in the next example.

#!/bin/bash

command1=`ls`
echo command1

command2=$(ls)
echo command2
[zexcon ~]$ ./learnToScript.sh 
command1
command2
[zexcon ~]$

Double Parentheses (())

So what are double parentheses for? Double parentheses are simple: they are for mathematical equations.

#!/bin/bash

echo $((5+3)) 
echo $((5-3)) 
echo $((5*3)) 
echo $((5/3)) 
[zexcon ~]$ ./learnToScript.sh 
8
2
15
1
[zexcon ~]$
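
Note that $(( )) performs integer arithmetic only, which is why 5/3 prints 1 with the remainder discarded. If you need fractional results, one common workaround (not covered in this article) is the bc calculator:

echo "scale=2; 5/3" | bc
[zexcon ~]$ echo "scale=2; 5/3" | bc
1.66
[zexcon ~]$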

Conclusion

At this point we have created our first script. We have an idea of how to take several commands, place them in a script, and run it to get results. We will continue this discussion in the next article and look at redirection of input and output, the pipe operator, using double brackets, or maybe just adding some comments.

Episode 290 – The security of the Matrix

Posted by Josh Bressers on September 27, 2021 12:01 AM

Josh and Kurt talk about the security of the Matrix movie series. There was a new Matrix trailer that made us want to discuss some of the security themes. We talk about how the movie is very focused on computing in the 90s. How Neo probably ran Linux and they used a real ssh exploit. How a lot of the plot is a bit silly. It’s a really fun episode.

https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_290_The_security_of_the_Matrix.mp3

Show Notes

Growth of Fedora Distribution over Time

Posted by Stephen Smoogen on September 25, 2021 05:18 PM

Growth of the Fedora Distribution over time

There was a conversation on IRC (libera.chat, #fedora-admin) about the amount of disk space that Fedora uses over time. It used to grow astronomically, but there was an idea that growth might be slowing down… and then the realization that no one had graphed it. Taking this challenge in hand, I decided to look at it. Doing a complete mirror of the data would require a very long time frame and 100+ TB of disk space, but luckily for me the Fedora mirror system does a du every night and outputs the data to a file: https://dl.fedoraproject.org/pub/DIRECTORY_SIZES.txt

The file covers all the directories on the main download servers, including the archive trees, which are where old releases go to live. It also puts the sizes in a ‘human-readable’ format like

egrep '/rawhide$|/releases/[0-9]*$|/updates/[0-9]*$|/updates/testing/[0-9]*$' DIRECTORY_SIZES.txt | egrep -v '^8.0K|^12K|^4.0K|/pub/epel|/pub/alt' > /tmp/dirs
$ grep '/7' /tmp/dirs
71G /pub/archive/fedora/linux/releases/7
55G /pub/archive/fedora/linux/updates/7
1.5G /pub/archive/fedora/linux/updates/testing/7

The above takes all the directories we want to examine and avoids /pub/alt, which is a wild west of directories and data. I also want to avoid /pub/epel so I don’t get a mix of EPEL-7 and Fedora Linux 7. Saving that entire long grep into a file means I don’t have to repeat it for every subsequent data manipulation, which is:

# Thanks to https://gist.github.com/fsteffenhagen/e09b827430956d7f1de35140111e14c4
grep '/7' /tmp/dirs | numfmt --from=iec | awk 'BEGIN{sum=0} {sum=sum+$1} END{num=split($0,a,"/"); print sum,a[num]}' | numfmt --to=iec
128G 7

This uses the numfmt command, which I wish I had known about years earlier, as I have ‘replicated’ it poorly in awk and python many times. The first numfmt converts the human-readable sizes to plain integers and feeds them to awk, which sums them; the final numfmt converts the total back to human-readable form. The conversion is lossy but OK for a quick blog post.
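
A quick illustration of what numfmt does with a single value (an illustrative one-liner, not from the original data run):

$ echo 71G | numfmt --from=iec
76235669504
$ echo 76235669504 | numfmt --to=iec
71G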

$ cat foobaz.sh 
#!/bin/bash

for i in $( seq 7 35 ); do
grep "/${i}$" /tmp/dirs | numfmt --from=iec | awk 'BEGIN{sum=0} {sum=sum+$1} END{num=split($0,a,"/"); print sum,a[num]}' | numfmt --to=iec | awk '{print $2","$1}'
done
$ bash foobaz.sh
7,128G
8,153G
9,286G
10,207G
11,266G
12,267G
13,202G
14,229G
15,371G
16,388G
17,594G
18,669G
19,600G
20,639G
21,730G
22,804G
23,865G
24,816G
25,821G
26,971G
27,1.1T
28,1.2T
29,1.1T
30,1.3T
31,1.1T
32,1.2T
33,1.3T
34,1.3T
35,200G

This first run found a problem, because 35 should be greater than 200G. However, only /pub/fedora/linux/updates/35 and /pub/fedora/linux/updates/testing/ are publicly readable. With some data from root, we correct this: 35 has 917G. Plotting this in OpenOffice with some magic, we get:

That is OK as a textual map, but how about a graph? For this we remove the conversion to human-readable data (aka M, G, T) and put the data into OpenOffice for some simple bar graphs. And so here is our data:


 

After this we can also look at how someone mirroring the distributions over time need more disk space:


The total growth looks to have moved from exponential to linear over time. If you wanted to break it out into smaller archives, you could put releases 1 to 25 on one 10 TB drive and 26 to 32 on another 10 TB drive, as the releases after 26 usually end up around 1.4 TB in size at the end of their release cycle.
 

Friday’s Fedora Facts: 2021-38

Posted by Fedora Community Blog on September 24, 2021 08:11 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Fedora Linux 35 Beta is GO!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference         Location            Date       CfP
Ohio Linux Fest    Columbus, OH, US    early Dec  closes 1 Oct
DevConf.CZ         Brno, CZ & virtual  28–29 Jan  closes 24 Oct
SCaLE              Pasadena, CA, US    5–8 Mar    closes 30 Nov

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Upcoming meetings

Releases

Release            open bugs
F33                5187
F34                4972
F35 (pre-release)  1104
F36 (rawhide)      6196

Fedora Linux 35

Schedule

  • 2021-09-28 — Beta release
  • 2021-10-05 — Final freeze begins
  • 2021-10-19 — Early Final target date
  • 2021-10-22 — Final target date #1

For the full schedule, see the schedule website.

Changes

See the ChangeSet page or Bugzilla for information on approved Changes.

BZ Status  Count
ASSIGNED   1
MODIFIED   1
ON_QA      42

Blockers

Bug ID   Component           Bug Status  Blocker Status
1989726  mesa                NEW         Accepted (Final)
1997315  abrt                ASSIGNED    Accepted (Final)
1991075  xdg-desktop-portal  NEW         Accepted (Final)
2006028  cockpit             POST        Proposed (Final)
2007602  gedit               NEW         Proposed (Final)
2006746  spice-vdagent       NEW         Proposed (Final)
2006632  libreport           ASSIGNED    Proposed (Final)
2001837  selinux-policy      ASSIGNED    Proposed (Final)
2006393  systemd             NEW         Proposed (Final)
2006624  webkit2gtk3         MODIFIED    Proposed (Final)

Fedora Linux 36

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal                                                  Type         Status
Enable exclude_from_weak_autodetect by default in LIBDNF  System-Wide  FESCo #2667

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2021-38 appeared first on Fedora Community Blog.

Fedora Workstation: Our Vision for Linux Desktop

Posted by Christian F.K. Schaller on September 24, 2021 05:40 PM

So I have spoken about our vision for Fedora Workstation quite a few times before, but I feel it is often useful to get back to it as we progress with our overall effort. If you have read some of my blog posts about Fedora Workstation over the last 5 years, be aware that there is probably little new in here for you. If you haven’t read them, however, this is hopefully a useful primer on what we are trying to achieve with Fedora Workstation.

In the first few years after we launched Fedora Workstation in 2014, we focused a lot on establishing a good culture around what we were doing with Fedora, making sure that it was a good day-to-day desktop driver for people, and not just a great place to develop the operating system itself. I think it was Fedora Project Leader Matthew Miller who phrased it very well when he said that we want to be Leading Edge, not Bleeding Edge. We also took a good look at the operating system from an overall stance, tried to map out where Linux tended to fall short as a desktop operating system, and asked ourselves what our core audience would and should be. We refocused our efforts on being a great operating system for all kinds of developers, but I think it is fair to say that we decided that was too narrow a wording, as our efforts are truly aimed at makers of all kinds, like graphics artists and musicians, in addition to coders. So I thought I would go through our key pillar efforts and talk about where they are at and where they are going.

Flatpak

One of the first things we concluded was that our story for people who wanted to deploy applications on our platform was really bad. The main challenge was that the platform was moving very fast, and it was a big overhead for application developers to keep on top of the changes. In addition, since the Linux desktop is so fragmented, application developers had to deal with the fact that there were 20 different variants of this platform, all moving at different paces. The way Linux applications were packaged, with each dependency packaged independently of the application, created pains on both sides: for the application developer it meant the world kept moving underneath them with limited control, and for the distributions it meant packaging pains, as different applications that all depended on the same library might work or fail with different versions of it. So we concluded we needed a system that decoupled the application from the host OS, letting application developers update their platform at a pace of their own choosing, while at the same time unifying the platform, in the sense that the application should be able to run without problems on the latest Fedora release, the latest RHEL release, or the latest version of any other distribution out there. As we looked at it we realized there were some security downsides compared to the existing model, since the OS vendor would no longer be in charge of keeping all libraries up to date and secure, so sandboxing the applications ended up a critical requirement. At the time Alexander Larsson was working on bringing Docker to RHEL and Fedora, so we tasked him with designing the new application model. The initial idea was to see if we could adjust Docker containers to the desktop use case, but Docker containers as they stood at that time were very unsuited for hosting desktop applications, and our experience working with the Docker upstream was that they were not very welcoming to our contributions. So in light of how major the changes we would need to implement were, and the unlikelihood of getting them accepted upstream, Alex started on what would become Flatpak. Another major technology coincidentally being developed at the same time was OSTree, by Colin Walters. To this day I think the best description of OSTree is that it functions as a git for binaries, meaning it gives you a simple way to maintain and update your binary applications with minimally sized updates. It also provides some disk deduplication, which we felt was important due to the duplication of libraries and so on that containers bring with them. Finally, another major design decision Alex made was that the runtime/base image should be hosted outside the container, making it possible to update the runtime independently of the application with relevant security updates etc.

Today there is a thriving community around Flatpaks, with the center of activity being Flathub, the Flatpak application repository. In Fedora Workstation 35 you should start seeing Flatpaks from Flathub being offered, as long as you have third-party repositories enabled. Also underway is an effort, led by Owen Taylor, to integrate Flatpak building into the internal tools we use at Red Hat for putting RHEL together, with the goal of switching to Flatpaks as our primary delivery method for desktop applications in RHEL and of helping us bridge the Fedora and RHEL application ecosystems.

You can follow the latest news from Flatpak through the official Flatpak twitter account.

Silverblue

Another major issue we decided needed improvement was OS upgrades (as opposed to application updates). The model pursued by Linux distros since their inception is one of shipping the OS as a large collection of independently packaged libraries. This setup is inherently fragile and requires a lot of quality engineering and testing to avoid problems, but even then things sometimes fail, especially in a fast-moving OS like Fedora. A lot of configuration changes and updates have traditionally been done through scripts and similar, making rollback to an older version very challenging in cases where there is a problem. Adventurous developers could also have made changes to their own copy of the OS that would break the upgrade later on. So thanks to all the great efforts to test and verify upgrades, they usually go well for most users, but we wanted something even more sturdy. The idea came up to move to an image-based OS model, similar to what people had gotten used to on their phones. And OSTree once again became the technology we chose to do this, especially considering it was being used in Red Hat’s first foray into image-based operating systems for servers (that server effort later got rolled into CoreOS as part of Red Hat acquiring CoreOS). The idea is that you ship the core operating system as a single image, and to upgrade you just replace that image with a new one, so the risk of problems is greatly reduced. On top of that, each of those images can be tested and verified as a whole by your QE and test teams. Of course we realized that a subset of people would still want to be able to tweak their OS, but once again OSTree came to our rescue, as it allows developers to layer further RPMs on top of the OS image, including replacing current system libraries with, for instance, newer ones. The great thing about OSTree layering is that once you are done testing or using the layered RPMs, you can drop them again with a very simple command and go back to the upstream image. Combined with applications shipped as Flatpaks, this creates an OS that is a lot more sturdy, secure and simple to update, with a much lower chance of an OS update breaking any of your applications. On top of that, OSTree allows us to do easy OS rollbacks, so if the latest update somehow doesn’t work for you, you can quickly roll back while waiting for the issue you are having to be fixed upstream. And hence Fedora Silverblue was born as the vehicle for us to develop and evolve an image-based desktop operating system.
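On Silverblue, that layering and rollback workflow looks roughly like this:

# layer an extra package on top of the immutable OS image
$ rpm-ostree install htop
# drop all layered packages and return to the pristine base image
$ rpm-ostree reset
# or boot back into the previous deployment after a bad update
$ rpm-ostree rollback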

You can follow our efforts around Silverblue through the official Silverblue twitter account.

Toolbx

Toolbox pet container with RHEL UBI

So Flatpak helped us address a lot of the gaps in making a better desktop OS on the application side, and Silverblue was the vehicle for our vision on the OS side, but we realized that we also needed some way for all kinds of developers to easily take advantage of the great resource that is the Fedora RPM package universe and the wider tools universe out there. We needed something that provided people with a great terminal experience. We had already been working on various smaller improvements to the terminal for a while, but we realized we needed something a lot more substantial. Accessing an immutable OS like Silverblue through a terminal window tends to be quite limiting, so it is usually not what you want to do, and you also don’t want to rely on OSTree layering for running all your development tools, as that is going to be potentially painful when you upgrade your OS.
Luckily the container revolution happening in the Linux world pointed us to the solution here too: as containers were rolled out, the concept of ‘pet containers’ was also born. The idea of a pet container is that, unlike general containers (sometimes referred to as cattle containers), pet containers are containers that you care about on an individual level, like your personal development environment. In fact, pet containers even improve on how we used to do things, as they allow you to very easily maintain different environments for different projects. For instance, if you have two projects hosted in two separate pet containers, and the two projects depend on two different versions of Python, containers make that simple by ensuring there is no risk of one of your projects ‘contaminating’ the others with its dependencies, while at the same time allowing you to grab RPMs or other kinds of packages from upstream resources and install them in your container. In fact, while inside your pet container the world feels a lot like it always has on the Linux command line. Thanks to the great effort of Dan Walsh and his team we had a growing number of easy-to-use container tools available to us, like podman. Podman is developed with the primary use case of running and deploying your containers at scale, managed by OpenShift and Kubernetes, but it also gave us the foundation we needed for Debarshi Ray to kick off the Toolbx project and ensure that we had an easy-to-use tool for creating and managing pet containers. As a bonus, Toolbx allows us to achieve another important goal: letting Fedora Workstation users develop applications against RHEL in a simple and straightforward manner, because Toolbx makes it just as easy to create RHEL containers as Fedora containers.
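A rough sketch of that per-project workflow (the container names here are just examples):

$ toolbox create project-a
$ toolbox create project-b
$ toolbox enter project-a
# now inside the project-a container; its packages stay isolated
$ sudo dnf install python3-devel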

You can follow our efforts around Toolbox on the official Toolbox twitter account

Wayland

Ok, so between Flatpak, Silverblue and Toolbx we had a clear vision for how to create a robust OS with a great story for application developers to maintain and deliver applications for it, with Toolbx providing a great developer story on top of this OS. But we also looked at the technical state of the Linux desktop and realized that there were some serious deficits we needed to address. One of the first ones we saw was the state of graphics: X.org had served us well for many decades, but its age was showing and adding new features as they came in was becoming more and more painful. Kristian Høgsberg had started work on an alternative to X, called Wayland, while still at Red Hat, an effort he and a team of engineers were pushing forward at Intel. There was general agreement in the wider community that Wayland was the way forward, but apart from Intel there was little serious development effort being put into moving it forward. On top of that, Canonical at the time had decided to go off on their own and develop their own alternative architecture in competition with X.org and Wayland. So as we saw a lot of things on the horizon in the graphics space, like HiDPI, and were also getting requests to come up with a way to make Linux desktops more secure, we decided to team up with Intel and get Wayland into a truly usable state on the desktop. We put many of our top developers, like Olivier Fourdan, Adam Jackson and Jonas Ådahl, to work on maturing Wayland as quickly as possible.
As things would have it, we also ended up getting a lot of collaboration and development help from the embedded sector, where companies such as Collabora were helping to deploy systems with Wayland onto various kinds of embedded devices and contributing fixes and improvements back up to Wayland (and Weston). To be honest, I have to admit we did not fully appreciate what a herculean task getting Wayland production-ready for the desktop would end up being, and it took us quite a few Fedora releases before we decided it was ready to go. As you might imagine, dealing with 30 years of technical debt is no easy thing to pay down, and while we kept moving forward at a steady pace there always seemed to be a new batch of issues to resolve. But we managed to do so, not just by maturing Wayland, but also by porting major applications, such as Martin Stransky porting Firefox and Caolan McNamara porting LibreOffice over to Wayland. At the end of the day, I think what saw us through to success was the incredible collaboration happening upstream between a large host of individual contributors and companies, and having the support of the X.org community. And even when we had the whole thing put together there were still practical issues to overcome, like how we had to keep defaulting to X.org in Fedora when people installed the binary NVidia driver, because that driver did not work with XWayland, the X backwards-compatibility layer in Wayland. Luckily that is now in the process of becoming a thing of the past, with the latest NVidia driver updates supporting XWayland, and with us working closely with NVidia to ensure the driver and windowing stack work well together.

PipeWire

Example of PipeWire running

So now we had a clear vision for the OS and a much-improved and much more secure graphics stack in the form of Wayland, but we realized that all the new security features brought in by Flatpak and Wayland also made certain things, like desktop capturing/remoting and web camera access, a lot harder. Security is great and critical, but just like the old joke about the most secure computer being the one that is turned off, we realized that we needed to make sure these things kept working, just in a more secure and better manner. Thankfully we have GStreamer co-creator Wim Taymans on the team, and he thought he could come up with a PulseAudio equivalent for video that would allow us to offer screen capture and webcam access in a convenient and secure manner.
As Wim was prototyping what we called PulseVideo at the time, we also started discussing the state of audio on Linux. Wim had contributed to PulseAudio to add a security layer to it, to make it harder, for instance, for a rogue application to eavesdrop on you using your microphone, but since that was not part of the original design it wasn’t a great solution. At the same time we talked about how our vision for Fedora Workstation was to make it the natural home for all kinds of makers, which included musicians, but also how the separateness of the pro-audio community was getting in the way of that, especially due to the uneasy coexistence of PulseAudio on the consumer side and JACK on the pro-audio side. As part of his development effort, Wim came to the conclusion that he could make the core logic of his new project so fast and versatile that it should be able to deal with the low-latency requirements of the pro-audio community while also serving its purpose well on the consumer audio and video side. Having audio and video in one shared system would also be an improvement for us in terms of dealing with combined audio and video sources, as guaranteeing audio/video sync, for instance, had often been a challenge in the past. So Wim’s effort evolved into what we today call PipeWire, which I am going to be brave enough to say has been one of the most successful launches of a major new Linux system component we have ever done. Replacing two old sound servers while at the same time adding video support is no small feat, but Wim is working very hard on fixing bugs as quickly as they come in and ensuring users have a great experience with PipeWire. And at the same time we are very happy that PipeWire now gives us the ability to offer musicians and sound engineers a new home in Fedora Workstation.

You can follow our efforts on PipeWire on the PipeWire twitter account.

Hardware support and firmware

In parallel with everything mentioned above, we were looking at the hardware landscape surrounding desktop Linux. One of the first things we realized was horribly broken was firmware support under Linux. More and more of the hardware smarts were being found in the firmware, yet firmware access under Linux and the firmware update story were basically non-existent. As we were discussing this problem internally, Peter Jones, who is our representative on the UEFI standards committee, pointed out that we were probably better poised than ever to actually do something about this problem, since UEFI was causing the firmware update process on most laptops and workstations to become standardized. So we teamed Peter up with Richard Hughes, and out of that collaboration fwupd and LVFS were born. In the years since we launched them, we have gone from having next to no firmware available on Linux (and the little we had was only available through painful processes like burning bootable CDs) to a lot of hardware having firmware update support, with more getting added almost on a weekly basis.
For the latest and greatest news around LVFS, the best source of information is Richard Hughes’ Twitter account.

In parallel to this, Adam Jackson worked on glvnd, which provided us with a way to have multiple OpenGL implementations on the same system. Those who have been using Linux for a while will surely remember the pain of the NVidia driver and Mesa fighting over who provided OpenGL on your system, as it was all tied to a specific .so name. There were a lot of hacks of varying degrees of fragility in use out there to deal with that situation, but with the advent of glvnd nobody has to care about that problem anymore.

We also decided that we needed a part of the team dedicated to looking at what was happening in the market and working on covering important gaps. And by gaps I mean fixing the things that keep hardware vendors from being able to properly support Linux, not writing drivers for them. Instead we have been working closely with Dell and Lenovo to ensure that their suppliers provide drivers for their hardware, and where needed we provide a framework for them to plug their hardware into. This has led to a series of small but important improvements, like getting the fingerprint reader stack on Linux to a state where hardware vendors can actually support it, bringing Thunderbolt support to Linux through Bolt, supporting high-definition and gaming mice through the libratbag project, supporting the new laptop privacy-screen feature in the Linux kernel, improving power management through the power-profiles daemon, and now recently hiring a dedicated engineer to get HDR support fully in place in Linux.

Summary

So, to summarize: we are of course not over the finish line with our vision yet. Silverblue is a fantastic project, but we are not yet ready to declare it the official version of Fedora Workstation, mostly because we want to give the community more time to embrace the Flatpak application model and developers more time to embrace the pet container model. Applications like IDEs especially, which cross the boundary between their own Flatpak sandbox, the things in your pet container, and system tools like gdb, need more work, but Christian Hergert has already done great work solving the problem in GNOME Builder, while Owen Taylor has put together support for using Visual Studio Code with pet containers. Hopefully the wider universe of IDEs will follow suit; in the meantime one would need to call them from the command line from inside the pet container.

The good thing here is that Flatpaks and Toolbox also work great on traditional Fedora Workstation. You can get the full benefit of both technologies even on a traditional distribution, which allows for a soft and easy transition.

So for anyone who made it this far: apologies for this becoming a little novel; that was not my intention when I started writing it :)

Feel free to follow my personal twitter account for more general news and updates on what we are doing around Fedora Workstation.

Retrace server down

Posted by Fedora Infrastructure Status on September 24, 2021 12:00 PM

The retrace server https://retrace.fedoraproject.org/faf/ is currently unreachable. We are investigating this issue; please see the ticket below: https://pagure.io/fedora-infrastructure/issue/10238. Sorry for any trouble.

There was a misconfiguration in the network config that was fixed, bringing the box …

PowerShell on Linux? A primer on Object-Shells

Posted by Fedora Magazine on September 24, 2021 08:00 AM

In the previous post, Install PowerShell on Fedora Linux, we went through different ways to install PowerShell on Fedora Linux and explained the basics of PowerShell. This post gives you an overview of PowerShell and a comparison to POSIX-compliant shells.


Differences at first glance — Usability

One of the very first differences to take note of when using PowerShell for the first time is semantic clarity.

Most commands in traditional POSIX shells, like the Bourne Again Shell (BASH), are heavily abbreviated and often require memorizing.

Commands like awk, ps, top or even ls do not communicate what they do with their names. Only when one already knows what they do do the names start to make sense: once I know that ls lists files, the abbreviation makes sense.

In PowerShell on the other hand, commands are perfectly self-descriptive. They accomplish this by following a strict naming convention.

Commands in PowerShell are called “cmdlets” (pronounced commandlets). These always follow the scheme of Verb-Noun.

One example: To get all files or child-items in a directory I tell PowerShell like this:

PS > Get-ChildItem

    Directory: /home/Ozymandias42

Mode                 LastWriteTime         Length Name
----                 -------------         ------ ----
d----          14/04/2021    08:11                Folder1
d----          13/04/2021    11:55                Folder2

An Aside:
The cmdlet name is Get-ChildItem, not Get-ChildItems. This is in acknowledgement of set theory. Each of the standard cmdlets returns a list, or a set, of results. The number of items in a set (mathematicians call this the set’s cardinality) can be 0, 1 or any arbitrary natural number, meaning the set can be empty, contain exactly one result, or many results. The reason for this, and why I stress it here, is that the standard cmdlets also implicitly implement a ForEach loop for any results they return. More about this later.
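To see this implicit looping in action, here is a small illustration (safe to run, since -WhatIf only previews the action): Get-Process returns a set of process objects, and piping that set into Stop-Process acts on every element without any explicit loop.

PS > Get-Process | Stop-Process -WhatIf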

Speed and efficiency

Aliases

You might have noticed that standard cmdlet names are long and can therefore be time-consuming to type when writing scripts. However, many cmdlets are aliased, and names are not case-sensitive, which mitigates this problem.

Let’s write a script with unaliased cmdlets as an example:

PS > Get-Process | ForEach-Object {Write-Host $_.Name -ForegroundColor Cyan}

This lists the name of running processes in cyan. As you can see, many characters are in upper case and cmdlets names are relatively long. Let’s shorten them and replace upper case letters to make the script easier to type:

PS > gps | foreach {write-host $_.name -foregroundcolor cyan}

This is the same script but with greatly simplified input.

To see the full list of aliased cmdlets, type Get-Alias.

Custom aliases

Just like any other shell, PowerShell also lets you set your own aliases by using the Set-Alias cmdlet. Let’s alias Write-Host to something simpler so we can make the same script even easier to type:

PS > Set-Alias -Name wh -Value Write-Host

Here, we aliased wh to Write-Host to make it quicker to type. When setting an alias, -Name indicates what you want the alias to be and -Value indicates what you want it to point to.

Let’s see how it looks now:

PS > gps | foreach {wh $_.name -foregroundcolor cyan}

You can see that we already made the script easier to type. If we wanted, we could also alias ForEach-Object to fe, but you get the gist.

If you want to see the properties of an alias, you can type Get-Alias. Let’s check the properties of the alias wh using the Get-Alias cmdlet:

PS > Get-Alias wh

CommandType     Name                  Version    Source
-----------     ----                  -------    ------
Alias           wh -> Write-Host                       

Autocompletion and suggestions

PowerShell suggests cmdlets or flags when you press the Tab key twice, by default. If there is nothing to suggest, PowerShell automatically completes to the cmdlet.

Differences between POSIX Shells — Char-stream vs. Object-stream

Anyone scripting will eventually string commands together via the pipe | and soon come to notice a few key differences.

In bash what is moved from one command to the next through a pipe is just a string of characters. However, in PowerShell this is not the case.

In PowerShell, every cmdlet is aware of data structures and objects. For example, a structure like this:

{
  firstAuthor=Ozy,
  secondAuthor=Skelly
}

This data is kept as-is even if a command, used alone, would have presented this data as follows:

AuthorNr.  AuthorName
1          Ozy
2          Skelly

In bash, on the other hand, that formatted output would need to be created by parsing with helper tools like awk or cut first, to be usable with a different command.

PowerShell does not require this parsing, since a pipe sends the underlying structure rather than the formatted output shown without one. So a command like authorObject | doThingsWithSingleAuthor firstAuthor is possible.
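As a quick illustration (the object and the property names are made up for the example), such a structure can be built directly and piped on, with the receiving cmdlet seeing real properties rather than text:

PS > $authorObject = [pscustomobject]@{ firstAuthor = 'Ozy'; secondAuthor = 'Skelly' }
PS > $authorObject | Select-Object firstAuthor

firstAuthor
-----------
Ozy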

The following examples shall further illustrate this.

Beware: This will get fairly technical and verbose. Skip if satisfied already.

A few of the most often used constructs to illustrate the advantage of PowerShell over bash, when using pipes, are to:

  • filter for something
  • format output
  • sort output

When implementing these in bash there are a few things that will re-occur time and time again.
The following sections illustrate these constructs and their variants in bash and contrast them with their PowerShell equivalents.

To filter for something

Let’s say you want to see all processes matching the name ssh-agent.
In human thinking terms you know what you want.

  1. Get all processes
  2. Filter for all processes that match our criteria
  3. Print those processes

To apply this in bash we could do it in two ways.

The first one, which most people who are comfortable with bash might use is this one:

$ ps -p $(pgrep ssh-agent)

At first glance this is straightforward: ps gets all processes and the -p flag tells it to filter for a given list of pids.
What the veteran bash user might forget here, however, is that while the command reads this way, it is not actually run as such. There is a tiny but important little thing called the order of evaluation.

$() is a subshell. A subshell is run, or evaluated, first. This means the list of pids to filter against is produced first, and the result is then returned in place of the subshell for the waiting outer command ps to use.

This means it is written as:

  1. Print processes
  2. Filter Processes

but evaluated the other way around. It also implicitly combines the original steps 2. and 3.

A less often used variant that more closely matches the human thought pattern and evaluation order is:

$ pgrep ssh-agent | xargs ps

The second variant still combines two steps, steps 1 and 2, but follows the evaluation logic a human would think of.

The reason this variant is less used is that ominous xargs command. What it basically does is append all lines of output from the previous command as a single long line of arguments to the command that follows it, in this case ps.

This is necessary because pgrep produces output like this:

$ pgrep bash
14514
15308

When used in conjunction with a subshell, ps might not care about this, but when using pipes to approximate the human evaluation order it becomes a problem.

What xargs does, is to reduce the following construct to a single command:

$ for i in $(pgrep ssh-agent); do ps $i ; done

Okay. Now we have talked a LOT about evaluation order and how to do it in bash in different ways with different evaluation orders of the three basic steps we outlined.

So with this much preparation, how does PowerShell handle it?

PS > Get-Process | Where-Object Name -Match ssh-agent

Completely self-descriptive and follows the evaluation order of the steps we outlined perfectly. Also do take note of the absence of xargs or any explicit for-loop.

As mentioned in our aside a few hundred words back, the standard cmdlets all implement ForEach internally and apply it implicitly when piped input in list form.

Output formatting

This is where PowerShell really shines. Consider a simple example to see how it’s done in bash first. Say we want to list all files in a directory sorted by size from the biggest to the smallest, and listed as a table with filename, size and creation date. Also, let’s say we have some files with long filenames in there and want to make sure we get the full filename no matter how big our terminal is.

Field separators, column-counting and sorting

Now the first obvious step is to run ls with the -l flag to get a list with not just the filenames but the creation date and the file sizes we need to sort against too.

We will get a more verbose output than we need. Like this one:

$ ls -l
total 148692
-rwxr-xr-x 1 root root      51984 May 16  2020 [
-rwxr-xr-x 1 root root     283728 May  7 18:13 appdata2solv
lrwxrwxrwx 1 root root          6 May 16  2020 apropos -> whatis
-rwxr-xr-x 1 root root      35608 May 16  2020 arch
-rwxr-xr-x 1 root root      14784 May 16  2020 asn1Coding
-rwxr-xr-x 1 root root      18928 May 16  2020 asn1Decoding
[not needed] [not needed]

What is apparent is that, to get the kind of output we want, we have to get rid of the fields marked [not needed] in the above example. But that’s not the only thing needing work: we also need to sort the output so that the biggest file is first in the list, meaning a reverse sort…

This, of course, can be done in multiple ways but it only shows again, how convoluted bash scripts can get.

We can either sort with the ls tool directly, using the -r flag for reverse sort and the --sort=size flag to sort by size, or we can pipe the whole thing to sort and supply that with the -n flag for numeric sort and the -k 5 flag to sort by the fifth column.

Wait, fifth? Yes. Because this too we would have to know: sort, by default, uses spaces as field separators, meaning in the tabular output of ls -l the number representing the size is the fifth field.
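Putting the two flags together on the listing from above, the sort step might look like this:

$ ls -l | sort -rn -k5 | head -n 3
-rwxr-xr-x 1 root root     283728 May  7 18:13 appdata2solv
-rwxr-xr-x 1 root root      51984 May 16  2020 [
-rwxr-xr-x 1 root root      35608 May 16  2020 arch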

Getting rid of fields and formatting a nice table

To get rid of the remaining fields, we once again have multiple options. The most straightforward option, and the most likely to be known, is probably cut. This is one of the few UNIX commands that is self-descriptive, even if it’s just because of the natural brevity of its associated verb. So we pipe our results, up to now, into cut and tell it which columns we want and how they are separated from each other.

cut -f5- -d" " will output from the fifth field to the end. This gets rid of the first four fields.

   283728 May  7 18:13 appdata2solv
    51984 May 16  2020 [
    35608 May 16  2020 arch
    14784 May 16  2020 asn1Coding
        6 May 16  2020 apropos -> whatis

This is still far from how we wanted it. First of all the filename is in the last column, and the filesize is in the human-unfriendly format of raw bytes instead of KB, MB, GB and so on. Of course we could fix that too, in various ways at various points in our already long pipeline, for example:
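One possible fix, as a sketch: drop the unneeded fields with awk instead and let GNU numfmt humanize the size column. Note this simplistic version loses symlink targets:

$ ls -l | tail -n +2 | sort -rn -k5 | awk '{print $5, $6, $7, $8, $9}' | numfmt --field=1 --to=iec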

All of this makes it clear that transforming the output of traditional UNIX commands is quite complicated and can often be done at multiple points in the pipeline.

How it’s done in PowerShell

PS > Get-ChildItem  
| Sort-Object Length -Descending 
| Format-Table -AutoSize 
    Name, 
    @{Name="Size"; Expression=
        {[math]::Round($_.Length/1MB,2).toString()+" MB"} 
    },
    CreationTime
#Reformatted over multiple lines for better readability.

The only actual output transformation being done here is the conversion and rounding of bytes to megabytes for better human readability. That is also one of the few real weaknesses of PowerShell: it lacks a simple mechanism to get human-readable filesizes.

That part aside, it’s clear that Format-Table allows you to simply list the wanted columns by name, in the order you want them.

This works because of the aforementioned object-nature of piped data-streams in PowerShell. There is no need to cut apart strings by delimiters.

Remote Administration with PowerShell — PowerShell-Sessions on Linux!?

Background

Remote administration via PowerShell on Windows has traditionally always been done via Windows Remoting, using the WinRM protocol.

With the release of Windows 10, Microsoft has also offered a Windows native OpenSSH Server and Client.

Using the SSH Server alone on Windows provides the user a CMD prompt unless the default system Shell is changed via a registry key.

A more elegant option is to make use of the Subsystem facility in sshd_config. This makes it possible to configure arbitrary binaries as remote-callable subsystems instead of the globally configured default shell.

By default there is usually one already there: the sftp subsystem.

To make PowerShell available as a Subsystem, one simply needs to add it like so:

Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile

This works (with the correct paths, of course) on every OS that PowerShell Core is available for, which means Windows, Linux, and macOS.
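On a Fedora host, for example, the change could be applied and activated like this (assuming pwsh lives at /usr/bin/pwsh):

$ echo 'Subsystem powershell /usr/bin/pwsh -sshs -NoLogo -NoProfile' | sudo tee -a /etc/ssh/sshd_config
$ sudo systemctl restart sshd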

What this is good for

It is now possible to open a PowerShell (Remote) Session to a properly configured SSH-enabled Server by doing this:

PS > Enter-PSSession 
    -HostName <target-HostName-or-IP> 
    -User <targetUser> 
    -IdentityFilePath <path-to-id_rsa-file>
    ...
    <-SSHTransport>

What this does is to register and enter an interactive PSSession with the Remote-Host. By itself this has no functional difference from a normal SSH-session. It does, however, allow for things like running scripts from a local host on remote machines via other cmdlets that utilise the same subsystem.

One such example is the Invoke-Command cmdlet. This becomes especially useful, given that Invoke-Command has the -AsJob flag.

What this enables is running local scripts as batchjobs on multiple remote servers while using the local Job-manager to get feedback about when the jobs have finished on the remote machines.

While it is possible to run local scripts via ssh on remote hosts, it is not as straightforward to view their progress, and it gets outright hacky to run local scripts remotely. We refrain from giving examples here, for brevity’s sake.

With PowerShell, however, this can be as easy as this:

$listOfRemoteHosts | ForEach-Object {
    Invoke-Command -HostName $_ `
        -FilePath /home/Ozymandias42/Script2Run-Remotely.ps1 `
        -AsJob
}

Overview of the running tasks is available by doing this:

PS > Get-Job

Id     Name            PSJobTypeName   State         HasMoreData     Location             Command
--     ----            -------------   -----         -----------     --------             -------
1      Job1            BackgroundJob   Running       True            localhost            Microsoft.PowerShe…

Jobs can then be attached to again, should they require manual intervention, by doing Receive-Job <JobName-or-JobNumber>.
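For example, to read the output of the first job while keeping it available for later reads:

PS > Receive-Job -Id 1 -Keep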

Conclusion

In conclusion, PowerShell follows a fundamentally different philosophy in its syntax compared to standard POSIX shells like bash; bash’s, of course, is historically rooted in the limitations of the original UNIX. PowerShell provides better semantic clarity with its cmdlets and outputs, which means better understandability for humans and makes it easier to use and learn. PowerShell also provides aliases for cmdlets whose full names are too long to type comfortably. The main difference, though, is that PowerShell is object-oriented, which eliminates most input/output parsing and allows PowerShell scripts to be more concise.

PHP version 7.3.31, 7.4.25 and 8.0.11

Posted by Remi Collet on September 24, 2021 03:36 AM

RPMs of PHP version 8.0.11 are available in remi repository for Fedora 35 and remi-php80 repository for Fedora 33-34 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.4.25 are available in remi repository for Fedora 33-34 and remi-php74 repository for Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 7.3.31 are available in remi-php73 repository for Enterprise Linux (RHEL, CentOS).

PHP version 7.2 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as modules for Fedora 33-35 and EL-8.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.
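To check which module streams are available before picking one (on Fedora and EL-8), something like this works:

dnf module list php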

Replacement of default PHP by version 8.0 installation (simplest):

yum-config-manager --enable remi-php80
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

Replacement of default PHP by version 7.3 installation (simplest):

yum-config-manager --enable remi-php73
yum update php\*

or, the modular way (Fedora and EL 8):

dnf module reset php
dnf module enable php:remi-7.3
dnf update php\*

Parallel installation of version 7.3 as Software Collection

yum install php73

And soon in the official updates:

To be noticed:

  • EL-8 RPMs are built using RHEL-8.4
  • EL-7 RPMs are built using RHEL-7.9
  • EL-7 builds now use libicu65 (version 65.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.3
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php73 / php74 / php80)

Rust, reproducibility and shadow-rs

Posted by Kushal Das on September 24, 2021 03:11 AM

Generally, all of our Rust code is reproducible. If you build it in a fixed path and also use the SOURCE_DATE_EPOCH environment variable, the final library or executable will be reproducible. This is really helpful: for example, while building the cryptography Python wheel, I can keep building it in a reproducible way even with the Rust dependencies.

A few days ago I saw shadow-rs, which can provide a lot of build-time information. For example, khata now has a way to tell if I am running a custom build, and to identify which one. I was a bit worried, as shadow-rs allows storing the build time too, but later found that the community had already put in the patches so that it follows SOURCE_DATE_EPOCH.

So I can still have reproducibility if I use the same build toolchain and environment while depending on shadow-rs. Here is some example code for you to play around with.
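A minimal sketch of the usual wiring, based on shadow-rs’s documented usage at the time of writing (double-check the exact API of the version you pin; the crate goes in both [dependencies] and [build-dependencies]):

// build.rs: let shadow-rs collect build-time information at compile time
fn main() -> shadow_rs::SdResult<()> {
    shadow_rs::new()
}

// src/main.rs
use shadow_rs::shadow;

shadow!(build); // generates a `build` module with the collected constants

fn main() {
    // enough to identify exactly which build a binary came from
    println!("branch: {}", build::BRANCH);
    println!("commit: {}", build::COMMIT_HASH);
    println!("built:  {}", build::BUILD_TIME); // honors SOURCE_DATE_EPOCH
}

Building with a pinned epoch, for example SOURCE_DATE_EPOCH=1632441600 cargo build --release, should then keep the embedded timestamp, and with it the binary, reproducible.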

Reducing the effectiveness of a safety feature

Posted by Richard Hughes on September 23, 2021 03:37 PM

We just purchased a 2021 KIA eNIRO to use as our family car. As is typical with EVs, it has to produce a fake engine noise to avoid squashing the hard of hearing (or animals) not expecting two tons of metal to be silently moving. When we test drove a 2020 car last year, there was a physical button on the dash to turn this off, on the premise that the noise sometimes isn’t required or appropriate, but it always defaulted to “on” for every start. As the car gets faster the noise also increases in volume, until fading to nothing after about 30km/h. In reverse the same thing happens, with some additional beeps. When we got our 2021 car this year, the button was no longer present and the less-than-affectionately-known VESS module cannot be muted. I can guess why: someone probably turned it off and squashed something or someone, and someone in the UK/US/EU government understandably freaked out. KIA also removed the wire from the wiring loom and won’t sell the 2020 button cluster, so you can’t even retrofit a new car to act like the old one.

To be super clear: I don’t have a problem with the VESS noise, but because the “speaker” is in the front bumper, the solution for going backwards is “turn up the volume”. Living in London means that houses are pretty close together, and me reversing into the drive at 2mph shouldn’t subject the house opposite to a noise several times louder than a huge garbage truck. The solution in the various KIA owner forums seems to be “just unplug the VESS module”, but this seems at best unethical and probably borderline illegal, given it’s a device with the express purpose of trying to avoid hurting someone with your 2 ton lump of metal.

VESS is, as you might expect, just another device on the CAN bus, and people have reverse engineered the command stream, so you can actually just plug the VESS module into a USB device (with a CAN converter) and play with it yourself. My idea would be to make a modchip-like device that plugs into the VESS module using the existing plug and basically MITMs the CAN messages. All messages going from the VESS back to the ECU get allow-listed (even though the ECU doesn’t seem to care if the VESS goes AWOL…) and any speed measurements going forward also get passed straight through. The clever part would be to MITM the speed when the “reverse gear” command has been issued, so that the car thinks it’s going about 20km/h backwards. This makes the VESS still produce the engine and beeping noise, but only about as loud as the VESS is when going forwards, heard from outside the car.
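As a rough sketch of that MITM logic, here is how the filtering could look in Python with python-can on such a USB adapter. Every arbitration ID, byte offset and gear value below is hypothetical; the real layout would come from the reverse-engineered command stream, and only the ECU-to-VESS direction is shown:

#!/usr/bin/env python3
# Hypothetical sketch: clamp the speed the VESS sees while in reverse.
import can

SPEED_ID = 0x100       # hypothetical ID of the speed broadcast frames
GEAR_ID = 0x101        # hypothetical ID of the gear selector frames
REVERSE = 0x02         # hypothetical "reverse gear" value
FAKE_SPEED_KMH = 20    # what the VESS should believe when reversing

ecu_bus = can.interface.Bus(channel="can0", bustype="socketcan")   # ECU side
vess_bus = can.interface.Bus(channel="can1", bustype="socketcan")  # VESS side

in_reverse = False
while True:
    msg = ecu_bus.recv()
    if msg.arbitration_id == GEAR_ID:
        in_reverse = msg.data[0] == REVERSE
    if msg.arbitration_id == SPEED_ID and in_reverse:
        # lie to the VESS: pretend we are doing a gentle 20 km/h
        data = bytearray(msg.data)
        data[0] = FAKE_SPEED_KMH  # hypothetical position of the speed byte
        msg = can.Message(arbitration_id=SPEED_ID, data=bytes(data),
                          is_extended_id=False)
    vess_bus.send(msg)  # everything else passes straight through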

Technically this is quite easy (VESS -> txrxCAN -> MCU -> txrxCAN -> ECU) and you could probably use an inexpensive Microchip reference board for a prototype. My question is more whether this would:

  1. Be ethical
  2. Be legal
  3. Invalidate my insurance
  4. Invalidate the warranty of my nice new shiny car

Feedback very welcome!

EDrawMax no longer works

Posted by Didier Fabert (tartare) on September 23, 2021 10:00 AM


For a bit of background, edrawmax is a cross-platform application that makes it easy to create diagrams from a multitude of templates.

If, like me, you use edrawmax for work to draw your diagrams, and for a while now it has stopped working with the following error:

edrawmax: error while loading shared libraries: libldap-2.4.so.2: cannot open shared object file: No such file or directory

There is a simple way to fix the problem: install the openldap-compat RPM package

sudo dnf install openldap-compat

We then get another error

edrawmax: /opt/EdrawMax-10/lib/libnss3.so: version 'NSS_3.65' not found (required by /usr/lib64/libsmime3.so)

No choice this time: we are going to replace the bundled libraries.
We start by backing them up

sudo mv /opt/EdrawMax-10/lib/libnss3.so /opt/EdrawMax-10/lib/libnss3.so.orig
sudo mv /opt/EdrawMax-10/lib/libnssutil3.so /opt/EdrawMax-10/lib/libnssutil3.so.orig

If, like me, your /opt directory is on the same partition as /usr, you can create a hard link; otherwise you will have to copy the files (and do so again after every update of the nss and nss-util packages)

ln /usr/lib64/libnss3.so /opt/EdrawMax-10/lib/libnss3.so
ln /usr/lib64/libnssutil3.so /opt/EdrawMax-10/lib/libnssutil3.so
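If /opt is on a different filesystem than /usr, copy the libraries instead of hard-linking them (remembering to redo this after every nss and nss-util update):

sudo cp /usr/lib64/libnss3.so /opt/EdrawMax-10/lib/libnss3.so
sudo cp /usr/lib64/libnssutil3.so /opt/EdrawMax-10/lib/libnssutil3.so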

Fedora Linux earns recognition from the Digital Public Goods Alliance as a DPG!

Posted by Fedora Magazine on September 23, 2021 08:00 AM

In the Fedora Project community, we look at open source as not only code that can change how we interact with computers, but also as a way for us to positively influence and shape the future. The more hands that help shape a project, the more ideas, viewpoints and experiences the project represents — that’s truly what the spirit of open source is built from.

But it’s not just the global contributors to the Fedora Project who feel this way. August 2021 saw Fedora Linux recognized as a digital public good by the Digital Public Goods Alliance (DPGA), a significant achievement and a testament to the openness and inclusivity of the project.

We know that digital technologies can save lives, improve the well-being of billions, and contribute to a more sustainable future. We also know that in tackling those challenges, Open Source is uniquely positioned in the world of digital solutions by inherently welcoming different ideas and perspectives critical to lasting success.

But, we also know that many regions and countries around the world do not have access to those technologies. Open Source technologies can be the difference between achieving the Sustainable Development Goals (SDGs) by 2030 or missing the targets. Projects like Fedora Linux, which represent much more than code itself, are the game-changers we need. Already, individuals, organizations, governments, and Open Source communities, including the Fedora Project’s own, are working to make sure the potential of Open Source is realized and equipped to take on the monumental challenges being faced.

The Digital Public Goods Alliance is a multi-stakeholder initiative, endorsed by the United Nations Secretary-General. It works to accelerate the attainment of the SDGs in low- and middle-income countries by facilitating the discovery, development, use of, and investment in digital public goods (DPGs). DPGs are Open Source software, open data, open AI models, open standards, and open content that adhere to privacy and other applicable best practices, and do no harm. This definition, drawn from the UN Secretary-General’s 2020 Roadmap for Digital Cooperation, serves as the foundation of the DPG Registry, an online repository for DPGs. 

The DPG Registry was created to help increase the likelihood of discovery, and therefore use of, DPGs. Today, we are excited to share that Fedora Linux was added to the DPG Registry! Recognition as a DPG increases the visibility, support for, and prominence of open projects that have the potential to tackle global challenges. To become a digital public good, all projects are required to meet the DPG Standard to ensure they truly encapsulate Open Source principles. 

As an Open Source leader, Fedora Linux can make achieving the SDGs a reality through its role as a convener of many Open Source “upstream” communities. In addition to providing a fully-featured desktop, server, cloud, and container operating system, it also acts as a platform where different Open Source software and work come together. Fedora Linux by default only ships its releases with purely Open Source software packages and components. While third-party repositories are available for use with proprietary packages or closed components, Fedora Linux is a complete offering with some of the greatest innovations that Open Source has to offer. Collectively this means Fedora Linux can act as a gateway, empowering the creation of more and better solutions to better tackle the challenges they are trying to address.

The DPG designation also aligns with Fedora’s fundamental foundations:

  • Freedom: Fedora Linux was built as Free and Open Source Software from the beginning. Fedora Linux only ships and distributes Free Software from its default repositories. Fedora Linux already uses widely-accepted Open Source licenses.
  • Friends: Fedora has an international community of hundreds spread across six continents. The Fedora Community is strong and well-positioned to scale as the upstream distribution of the world’s most-widely used enterprise flavor of Linux.
  • Features: Fedora consistently delivers on innovation and features in Open Source. Fedora Linux 34 was a record-breaking release, with 63 newly approved Changes.
  • First: Fedora leverages its unique position and resources in the Free Software world to deliver on innovation. New ideas and features are tried out in the Fedora Community to discover what works, and what doesn’t. We have many stories of both.

For us, recognition as a digital public good brings honor and is a great moment for us, as a community, to reaffirm our commitment to contribute and grow the Open Source ecosystem.

This is a proud moment for each Fedora Community member because we are making a difference. Our work matters and has value in creating an equitable world; this is a fantastic and important feeling.

If you have an interest in learning more about the Digital Public Goods Alliance please reach out to hello@digitalpublicgoods.net.

Outreachy final blog post

Posted by Fedora Community Blog on September 23, 2021 08:00 AM

My three-month internship with Fedora has come to an end. From submitting the application twice to finally making it on the third attempt, the journey has been quite challenging. The goal of my project, “Improve Fedora QA dashboard”, was to make the dashboard more impactful, so that it is simpler for newcomers to understand and contribute to without any complexity.

#FOSDEM keynote - Karen Sandler announcing #OUTREACHY

When I applied to Outreachy I wasn’t even sure I would make it, because I had already submitted two applications and wasn’t selected either time. Still, I didn’t lose hope and made it the third time. When the contribution phase started, I was very nervous: the contribution phase lasts one month, you have to keep contributing, and I was a beginner. I contributed to two projects. The first was Wikipedia, which was Python-based, and the second was Improve Fedora QA dashboard. After 15 days of contributing I tested positive for COVID-19, so I couldn’t complete the 6 tasks for Wikipedia, and made 5-6 contributions to Improve Fedora QA dashboard. I wasn’t expecting to qualify for the internship. Waiting one more month for the results was the longest period I have ever faced, but after all the ups and downs, fortunately, I made it. It feels like yesterday that I got the mail from Outreachy; time flies, literally…

The Outreachy journey was great; I learned lots of things and am no longer a noob. My mentors were Lukas and Josef, and whenever I got stuck on something they were always ready to help me. I didn’t even have the confidence that I would be able to complete the tasks, but eventually I did. I knew JavaScript, basic ReactJS, and vanilla JS; however, the project was not just about those, and I implemented things like react-i18n and Docker for the first time. The beginning of the internship was very smooth and easygoing, but the second task was a bit challenging, as I had to implement the same page twice with two different approaches so that my mentors could choose the better one. What mattered there was my learning: I feel that the more complicated the tasks are, the more you build yourself up while making them easy. I finally learned how to google, learned more about ReactJS, and grew my skills.

If you are a beginner, Outreachy is the best opportunity to start with. It provides you with good mentors and an amazing experience, you can build yourself into a better developer, and as a further perk you might get an internship opportunity or a job offer. I am an open-source enthusiast and always wanted to get into Outreachy. To all the people who want to grab this amazing opportunity: don’t be afraid, even if you are a newbie; keep working hard and you will definitely make it, and you will always cherish it. My journey as an Outreachy ’21 intern on Improve Fedora QA dashboard ended today, but the experience will always remain in my heart.

The post Outreachy final blog post appeared first on Fedora Community Blog.