Fedora People

Datanommer Migration

Posted by Fedora Infrastructure Status on January 17, 2022 11:00 AM

We are making some improvements to the performance of the Datanommer database, including adding the TimescaleDB plugin. Because this involved some breaking changes, a migration to a new database was required. The migration has already taken place, but the required apps will now need to point to the new …

How to verify the server certificate in Poezio

Posted by Casper on January 16, 2022 04:00 PM

Let's look at how Poezio behaves during an SSL/TLS connection to the server, with its default configuration.

If there is no certificate fingerprint stored in the config file, Poezio asks the user to validate the fingerprint of the certificate received at connection time. Poezio then stores that fingerprint in its config file (logically enough). On the next launch, it compares the fingerprint of the certificate received during the SSL/TLS connection with the one it stored, to check whether they are identical.

The user cannot easily verify the fingerprint they are asked to validate. That is awkward for the first launch. For subsequent launches, the only relevant information displayed by Poezio is that the certificate has changed (if it really has changed).

By default, the certificate is not verified against the system CA store.

This way of doing things may seem surprising, yet it is known as Trust On First Use. It is also used by the Gemini protocol, a derivative of HTTP.

Advantages:

  • If the XMPP server uses the same private key for 10 years
  • If the server uses a self-signed certificate
  • If you run your own XMPP server, you can compare the hash manually

Drawbacks:

  • You have to ask the sysadmin for the hash if you are hosted by someone else
  • If the private key changes every 6 months, the hash changes every 6 months
  • No verification against the system CA store
  • If you connect through Tor, this method is unusable
  • If you connect through any untrusted network: mobile line, wifi hotspot, cheap VPN

How to retrieve the fingerprint on the server side

As part of my audit, I had to find a way to check whether the hash displayed by Poezio matched the private key on my server. Since I have access to the server, I was able to determine exactly which hash was being displayed.

The following command works on the private key, or more precisely on the "public" part of the key. It is converted to the DER format, which is a binary format, then piped into the openssl-dgst tool, which computes the SHA256 digest. The result would be identical with a pipeline to the "sha256sum" command:

$ openssl pkey -in file.key -pubout -outform DER | openssl dgst -sha256 -hex -c | awk '{ print toupper ($2) }'
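
As noted above, piping the DER-encoded public key into sha256sum gives the same digest value, only formatted differently (lowercase hex without colons). A minimal equivalent, assuming the same file.key:

$ openssl pkey -in file.key -pubout -outform DER | sha256sum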

This command has several drawbacks: you need root access to the server, or you have to send a mail to the admins (and send them the command). Moreover, if the private key (the .key file) changes once, you have to run the command again. And if the private key changes regularly (every 6 months, for example), you have to run the command again every time. Once again, this solution is hard to rely on over the long term.

The fingerprint at casperlefantom.net

Not really agreeing with the TOFU/TUFU method, I thought I could provide a way for users to perform the verification manually. So I had some fun putting online a static file that contains all the information.

My intuition is that no other administrator will ever do the same. The need is just too niche; see for yourself:

https://dl.casperlefantom.net/pub/ssl/fingerprint-for-poezio-client.txt

How to disable certificate fingerprint verification

In Poezio version 0.13.1, it is possible to disable the comparison method (the TOFU/TUFU method) and go back to a more classic system. Just edit the config file:

ignore_certificate = True

Or run this command inside Poezio:

/set ignore_certificate True

Then, you should enable verification against the system CA store. The path below is valid for Fedora Linux systems. You can also point to your own personal CA, if you have one:

ca_cert_path = /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt

Command inside Poezio:

/set ca_cert_path /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt

Finally, you can clean up the config file by removing the line that holds the certificate fingerprint stored by Poezio:

certificate = 78:2F:71:43:1F:9B...

Référence : TLS in poezio

Configuring Rails system tests for headless and headfull browsers

Posted by Josef Strzibny on January 16, 2022 12:00 AM

Want to run your system tests headless? And why not both ways? Here’s how to extend Rails tasks to run your system tests with the driver of your choice.

Rails 6 came with system tests baked-in, and so if you generate a new Rails app today, you end up with the following setup code:

# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]
end

You’ll need some dependencies for this to work. If you don’t have them, add the following to your Gemfile:

# Gemfile
...
group :test do
  # Use system testing [https://guides.rubyonrails.org/testing.html#system-testing]
  gem "capybara", ">= 3.26"
  gem "selenium-webdriver", ">= 4.0.0"
  gem "webdrivers"
end

A lot of people want to switch the default driver to something else, especially to headless Chrome for faster tests.

It's surprisingly easy. You only need to replace the driver's name in the using parameter:

# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :headless_chrome, screen_size: [1400, 1400]
end

But by making this change, you lose the ability to watch your tests run visually. So why not have both?

Let's set the driver based on a DRIVER environment variable:

# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  DRIVER = if ENV["DRIVER"]
    ENV["DRIVER"].to_sym
  else
    :headless_chrome
  end

  driven_by :selenium, using: DRIVER, screen_size: [1400, 1400]
end
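
The same default can also be written a bit more compactly with ENV.fetch; this is an equivalent variation, purely a matter of taste:

# equivalent to the conditional above
DRIVER = ENV.fetch("DRIVER", "headless_chrome").to_sym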

I kept headless Chrome as the default, since that's what you typically want to run in CI.

To run system tests with a different driver, we just set that variable on the command line:

$ DRIVER=chrome rails test:system

Pretty nice, yet we can do more. We can have a fancy new Rake task to do this job for us:

# lib/tasks/test.rake
namespace :test do
  namespace :system do
    task :with, [:driver] => :environment do |task, args|
      ENV["DRIVER"] = args[:driver]
      Rake::Task["test:system"].invoke
    end
  end
end

This task sets the ENV variable for us and then invokes the regular Rails test:system task. Nothing less, nothing more.

By defining the driver argument, we can now choose the driver nicely on the command line:

$ rails test:system:with[chrome]
$ rails test:system:with[firefox]
$ rails test:system:with[headless_chrome]

If, on the other hand, we want to define exact tasks for particular drivers, we can do this too:

# lib/tasks/test.rake
namespace :test do
  namespace :system do
    task :chrome => :environment do |task, args|
      ENV["DRIVER"] = "chrome"
      Rake::Task["test:system"].invoke
    end
  end
end

Then we can run the test:system:chrome task for the headfull Chrome:

$ rails test:system:chrome
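
If you want dedicated tasks for several drivers, a small loop over the driver names avoids repeating the task body. This is just a sketch along the same lines, not from the original post:

# lib/tasks/test.rake
namespace :test do
  namespace :system do
    %w[chrome firefox headless_chrome headless_firefox].each do |driver|
      desc "Run system tests with #{driver}"
      task driver.to_sym => :environment do
        ENV["DRIVER"] = driver
        Rake::Task["test:system"].invoke
      end
    end
  end
end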

And that’s it! Develop with headless browsers and admire your work once in a while with a full experience!

Maintainable Rails system tests with page objects

Posted by Josef Strzibny on January 15, 2022 12:00 AM

Rails system tests often depend on input and CSS selectors. To make our tests more maintainable, we can isolate layout changes within page objects.

This post is about an idea I had a long time ago and came back to recently. It’s from a similar category as my idea for Rails contexts, so it might not be 100% failproof, and I am looking for feedback.

So what is it about? What’s a page object?

A regular system test might look like this:

require "application_system_test_case"

class RegisterUserTest < ApplicationSystemTestCase
  setup do
    @user = users(:unregistered)
  end

  test "registers an account" do
    visit new_user_registration_path

    fill_in "Email", with: user.email
    fill_in "Password", with: user.password
    fill_in "Password confirmation", with: user.password

    click_on "Sign up"

    assert_selector "h1", text: "Dashboard"
  end
end

It’s nice and tidy for something small. But if we start reusing specific flows and selectors, we would have to update many places whenever we change a particular screen.

This might not be a big deal since we can extract private methods and helpers. But it got me thinking.

What if we isolate actions and assertions of a particular screen in a page object?

An example of the registration and dashboard pages could look like this:

# test/pages/test_page.rb
class TestPage
  include Rails.application.routes.url_helpers

  attr_accessor :test, :page

  def initialize(system_test, page)
    @test = system_test
    @page = page
  end

  def visit
    @test.visit page_path
  end

  def page_path
    raise "Override this method with the page path"
  end
end

# test/pages/registration_page.rb
class RegistrationPage < TestPage
  def register(user)
    @test.fill_in "Email", with: user.email
    @test.fill_in "Password", with: "12345671"
    @test.fill_in "Password confirmation", with: "12345671"

    @test.click_on "Sign up"
  end

  def page_path
    new_user_registration_path
  end
end

# test/pages/dashboard_page.rb
class DashboardPage < TestPage
  def assert_logged_in
    @test.assert_selector "h1", text: "Dashboard"
  end

  def page_path
    dashboard_path
  end
end

The basic idea is that a page under test defines its actions (fill_in_user_email, register) and assertions (assert_logged_in). Whenever the fields change or we have to use a different selector, we have one and only one place to update. Any test that uses such a page wouldn’t have to be changed at all.

When we initialize a new page we have to pass the test and page contexts (here system_test and page) to use the testing API from within these page objects.

Since I want to group these pages, I also have to add the test/pages path to the testing configuration for Zeitwerk to pick up:

# config/environments/test.rb
require "active_support/core_ext/integer/time"

Rails.application.configure do
  ...

  config.autoload_paths << "#{Rails.root}/test/pages"
end

This allows us to write the registration test as:

require "application_system_test_case"

class RegisterUserTest < ApplicationSystemTestCase
  setup do
    @user = users(:unregistered)
  end

  test "registers an account" do
    registration = RegistrationPage.new(self, page)
    registration.register(@user)

    dashboard = DashboardPage.new(self, page)
    dashboard.assert_logged_in
  end
end

I find grouping by pages rather than private methods cleaner, and it makes the tests themselves much shorter.

Let's say that I am now adding internationalization to the pages. Instead of going through all my system tests, I only have to open and edit the relevant pages:

# test/pages/registration_page.rb
class RegistrationPage < TestPage
  def register(user)
    @page.fill_in I18n.t("attributes.user.email"), with: user.email
    @page.fill_in I18n.t("attributes.user.password"), with: user.password
    @page.fill_in I18n.t("attributes.user.password_confirmation"), with: user.password

    @page.click_on I18n.t("buttons.register")
  end

  def page_path
    new_user_registration_path
  end
end

The test itself stayed the same.

However, I also felt there was still some disconnect in going from page to page. So another idea is to introduce a TestFlow object that would keep the whole flow together:

class TestFlow
  attr_accessor :test, :page, :history

  def initialize(system_test, system_page, start_page_class)
    @test = system_test
    @page = start_page_class.new(system_test, system_page)
    @page.visit
    @history = [@page]
  end

  def visit(page_class)
    @page = page_class.new(@test, @page.page) # pass the underlying Capybara page, not the page object
    @page.visit
    @history << @page
  end

  def transition(page_class)
    @page = page_class.new(@test, @page.page)
    assert_transition
    @history << @page
  end

  def assert_transition
    @test.assert_equal @test.current_path, @page.page_path
  end
end

The idea is that we start with one page in the beginning and then change pages with a transition call to ensure we indeed arrived on the page we originally wanted. The @history then remembers the flow and lets us build other features like going back.
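
For example, a simple going-back helper built on that history could look like this (my sketch, not from the original post; it assumes there is a previous page to go back to):

# inside TestFlow
def back
  @history.pop            # drop the current page
  @page = @history.last   # the previous page becomes the current one
  @page.visit
end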

To use it, I’ll make a small helper method in application_system_test_case.rb:

require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]

  def start_flow(start_page)
    TestFlow.new(self, page, start_page)
  end
end

And then use it by starting flow in setup and calling transition in between the screens:

require "application_system_test_case"

class RegisterUserTest < ApplicationSystemTestCase
  setup do
    @user = users(:unregistered)
    @flow = start_flow(RegistrationPage)
  end

  test "registers an account" do
    @flow.page.register(@user)
    @flow.transition(DashboardPage)
    @flow.page.assert_logged_in
  end
end

That’s it. There are no new frameworks or anything like that, just a different take on organizing system tests. Let me know what you think – especially if you think it’s a terrible idea.

Friday’s Fedora Facts: 2022-02

Posted by Fedora Community Blog on January 14, 2022 08:54 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
ConferenceLocationDateCfP
Dutch PHP ConferenceAmsterdam, NL1–2 Julcloses 30 Jan
Open Source Summit NAAustin, TX, US & virtual21–24 Juncloses 14 March
</figure>

Help wanted

Upcoming test days

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

<figure class="wp-block-table">
Bug IDComponentStatus
1955416shimNEW
2032528flatpakNEW
</figure>

Upcoming meetings

Releases

<figure class="wp-block-table">
Releaseopen bugs
F345435
F352629
F36 (rawhide)6983
</figure>

Fedora Linux 36

Schedule

  • 2022-01-18 — Deadline for Self-Contained Change proposals
  • 2022-01-19 — Mass rebuild begins
  • 2022-02-08 — F36 branches from Rawhide; Rawhide begins F37 development

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
ProposalTypeStatus
Users are administrators by default in the installer GUI.Self-ContainedFESCo #2708
Enable fs-verity in RPMSystem-WideFESCo #2711
Make Rescue Mode Work With Locked RootSystem-WideFESCo #2713
GHC compiler parallel version installsSelf-ContainedFESCo #2715
Keylime subpackaging and agent alternativesSelf-ContainedFESCo #2716
Golang 1.18System-WideFESCo #2720
DIGLIMSystem-WideFESCp #2721
LLVM 14System-WideApproved
Ruby 3.1System-WideApproved
%set_build_flags for %build and %checkSystem-WideApproved
Default To Noto FontsSystem-WideFESCo #2729
Hunspell Dictionary dir changeSystem-WideFESCo #2730
Relocate RPM database to /usrSystem-WideFESCo #2731
No ifcfg by defaultSelf-ContainedFESCo #2732
Django 4.0Self-ContainedFESCo #2733
GNU Toolchain UpdateSystem-WideAnnounced
New requirements for akmods binary kernel modules for Silverblue / Kinoite supportSelf-ContainedAnnounced
Malayalam Default Fonts UpdateSelf-ContainedAnnounced
Ibus-table cangjie default for zh_HKSelf-ContainedAnnounced
</figure>

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-02 appeared first on Fedora Community Blog.

CPE Weekly Update – Week of January 10th – 14th

Posted by Fedora Community Blog on January 14, 2022 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat (https://libera.chat/).

We (CPE team) will be joining Fedora Social Hour on Jan 27th. Looking forward to seeing a lot of you! (https://discussion.fedoraproject.org/t/join-us-for-fedora-social-hour-every-week/18869/46)

Highlights of the week

Infrastructure & Release Engineering

Goal of this initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work. It's responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.). The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.

Update

Fedora Infra

  • Mostly quiet holidays with only minor reboots, etc
  • More koji upgrades: all aarch64/armv7 done, s390x kvm and hubs left to do
  • Container builds broken, needs more eyes
  • CentOS cert fetching broken, needs more eyes

CentOS Infra including CentOS CI

  • CentOS Linux 8 EOL plan
  • Hardware issues (storage box, 64 compute nodes for CI infra)
  • Kmods SIG DuD discussion (koji plugin vs external script)
  • CI storage for ocp/openshift migration planning

Release Engineering

  • Rawhide compose issues, but we got a good compose yesterday after a bunch of work
  • Mass rebuild of F36 next week

CentOS Stream

Goal of this initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Finished our January planning, working on:
    • Preparing new version of Content Resolver for production, finishing up stuff around the buildroot integration
    • Exploring things around increasing compose quality
    • Business as usual

Datanommer/Datagrepper V.2

Goal of this initiative

The datanommer and datagrepper stacks are currently relying on fedmsg, which we want to deprecate. These two applications need to be ported off fedmsg to fedora-messaging. As these applications are 'old-timers' in the Fedora infrastructure, we would also like to look at optimizing the database or potentially redesigning it to better suit the current infrastructure needs. For a phase two, we would like to focus on a DB overhaul.

Updates

  • Data is migrated, we need to deploy the new code to production now.

CentOS Duffy CI

Goal of this initiative

Duffy is a system within CentOS CI Infra which allows tenants to provision and access bare metal resources of multiple architectures for the purposes of CI testing. We need to add the ability to checkout VMs in CentOS CI in Duffy. We have OpenNebula hypervisor available, and have started developing playbooks which can be used to create VMs using the OpenNebula API, but due to the current state of how Duffy is deployed, we are blocked with new dev work to add the VM checkout functionality.

Updates

  • Work on backend -> modules to provision vms
  • Legacy API integration

Image builder for Fedora IoT

Goal of this initiative

Integration of Image builder as a service with Fedora infra to allow Fedora IoT to migrate their pipeline to Fedora infra.

Updates

  • Team forming this week. Currently waiting on work from the Image Builder team to wrap to unblock us from moving forward

Bodhi

Goal of this initiative

This initiative is to separate Bodhi into multiple sub packages, fix integration and unit tests in CI, fix dependency management, and automate part of the release process. Read ARC team findings in detail.

Updates

  • Team is forming this week and will officially be launching work next Monday

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • epel9 is growing rapidly:
    • 2589 packages available (355 more in testing)
    • 1158 source rpms (225 more in testing)
  • Positive community response
  • Ongoing documentation improvements

Kindest regards,
CPE Team

The post CPE Weekly Update – Week of January 10th – 14th appeared first on Fedora Community Blog.

Human Interface Guidelines, libadwaita 1.0 edition

Posted by Allan Day on January 13, 2022 03:06 PM

After a lot of hard work, libadwaita 1.0 was released on the last day of 2021. If you haven’t already, check out Alexander’s announcement, which covers a lot of what’s in the new release.

When we rewrote the HIG back in May 2021, the new version expected and recommended libadwaita. However, libadwaita evolved between then and 1.0, so changes were needed to bring the HIG up to date.

Therefore, over the last two or three weeks, I’ve been working on updating the HIG to cover libadwaita 1.0. Hopefully this will mean that developers who are porting to GTK 4 and libadwaita have everything that they need in terms of design documentation but, if anything isn’t clear, do reach out using the usual GNOME design channels.

In the rest of this post, I’ll review what’s changed in the HIG, compared with the previous version.

What’s changed

There’s a bunch of new content in the latest HIG version, which reflects additional capabilities that are present in libadwaita 1.0. This includes material on:

There have also been updates to existing content: all screenshots have been updated to use the latest UI style from libadwaita, and the guidelines on UI styling have been updated, to reflect the flexibility that comes with libadwaita’s new stylesheet.

As you might expect, there have been some general improvements to the HIG, which are unrelated to libadwaita. The page on navigation has been improved, to make it more accessible. A page on selection mode has also been added (we used to have this documented, then dropped the documentation while the pattern was updated). There has also been a large number of small style and structure changes, which should make the HIG an easier read.

If you spot any issues, the HIG issue tracker is open, and you can send merge requests too!

New badge: DevConf.cz 2022 Attendee !

Posted by Fedora Badges on January 12, 2022 04:17 PM
DevConf.cz 2022 Attendee: You attended the 2022 iteration of DevConf.cz, a yearly open source conference in Czechia!

Installing the latest syslog-ng on Ubuntu and other DEB distributions

Posted by Peter Czanik on January 12, 2022 12:16 PM

The syslog-ng application is part of all major Linux distributions, and you can usually install syslog-ng from the official repositories. If you use just the core functionality of syslog-ng, use the package in your distribution repository (apt-get install syslog-ng), and you can stop reading here. However, if you want to use the features of newer syslog-ng versions (for example, send log messages to MQTT or Apache Kafka), you have to either compile syslog-ng from source, or install it from unofficial repositories. This post explains how to do that.

Read the rest of my blog at https://www.syslog-ng.com/community/b/blog/posts/installing-the-latest-syslog-ng-on-ubuntu-and-other-deb-distributions


AdamW's Debugging Adventures: Bootloaders and machine IDs

Posted by Adam Williamson on January 11, 2022 10:08 PM

Hi folks! Well, it looks like I forgot to blog for...checks watch....checks calendar...a year. Wow. Whoops. Sorry about that. I'm still here, though! We released, uh, lots of Fedoras since the last time I wrote about that. Fedora 35 is the current one. It's, uh, mostly great! Go get a copy, why don't you?

And while that's downloading, you can get comfy and listen to another of Crazy Uncle Adam's Debugging Adventures. In this episode, we'll be uncomfortably reminded just how much of the code that causes your system to actually boot at all consists of fragile shell script with no tests, so this'll be fun!

Last month, booting a system installed from Rawhide live images stopped working properly. You could boot the live image fine, run the installation fine, but on rebooting, the system would fail to boot with an error: dracut: FATAL: Don't know how to handle 'root=live:CDLABEL=Fedora-WS-Live-rawh-20211229-n-1'. openQA caught this, and so did one of our QA community members - Ahed Almeleh - who filed a bug. After the end-of-year holidays, I got to figuring out what was going wrong.

As usual, I got a bit of a head start from pre-existing knowledge. I happen to know that error message is referring to kernel arguments that are set in the bootloader configuration of the live image itself. dracut is the tool that handles an early phase of boot where we boot into a temporary environment that's loaded entirely into system memory, set up the real system environment, and boot that. This early environment is contained in the initrd files you can find alongside the kernel on most Linux distributions; that's what they're for. Part of dracut's job is to be run when a kernel is installed to produce this environment, and then other parts of dracut are included in the environment itself to handle initializing things, finding the real system root, preparing it, and then switching to it. The initrd environments on Fedora live images are built to contain a dracut 'module' (called 90dmsquash-live) that knows to interpret root=live:CDLABEL=Fedora-WS-Live-rawh-20211229-n-1 as meaning 'go look for a live system root on the filesystem with that label and boot that'. Installed systems don't contain that module, because, well, they don't need to know how to do that, and you wouldn't really ever want an installed system to try and do that.

So the short version here is: the installed system has the wrong kernel argument for telling dracut where to find the system root. It should look something like root=/dev/mapper/fedora-root (where we're pointing to a system root on an LVM volume that dracut will set up and then switch to). So the obvious next question is: why? Why is our installed system getting this wrong argument? It seemed likely that it 'leaked' from the live system to the installed system somehow, but I needed to figure out how.

From here, I had kinda two possible ways to investigate. The easiest and fastest would probably be if I happened to know exactly how we deal with setting up bootloader configuration when running a live install. Then I'd likely have been able to start poking the most obvious places right away and figure out the problem. But, as it happens, I didn't at the time remember exactly how that works. I just remembered that I wind up having to figure it out every few years, and it's complicated and scary, so I tend to forget again right afterwards. I kinda knew where to start looking, but didn't really want to have to work it all out again from scratch if I could avoid it.

So I went with the other possibility, which is always: figure out when it broke, and figure out what changed between the last time it worked and the first time it broke. This usually makes life much easier because now you know one of the things on that list is the problem. The shorter and simpler the list, the easier life gets.

I looked at the openQA result history and found that the bug was introduced somewhere between 20211215.n.0 and 20211229.n.1 (unfortunately kind of a wide range). The good news is that only a few packages could plausibly be involved in this bug; the most likely are dracut itself, grub2 (the bootloader), grubby (a Red Hat / Fedora-specific grub configuration...thing), anaconda (the Fedora installer, which obviously does some bootloader configuration stuff), the kernel itself, and systemd (which is of course involved in the boot process itself, but also - perhaps less obviously - is where a script called kernel-install, used (on Fedora and many other distros) to 'install' kernels, lives; this was another handy thing I happened to know already, but really - it's always a safe bet to include systemd on the list of potential suspects for anything boot-related).

Looking at what changed between 2021-12-15 and 2021-12-29, we could rule out grub2 and grubby, as they didn't change. There were some kernel builds, but nothing in the scriptlets changed in any way that could be related. dracut got a build with one change, but again it seemed clearly unrelated. So I was down to anaconda and systemd as suspects. On an initial quick check during the vacation, I thought anaconda had not changed, and took a brief look at systemd, but didn't see anything immediately obvious.

When I came back to look at it more thoroughly, I realized anaconda did get a new version (36.12) on 2021-12-15, so that initially interested me quite a lot. I spent some time going through the changes in that version, and there were some that really could have been related - it changed how running things during install inside the installed system worked (which is definitely how we do some bootloader setup stuff during install), and it had interesting commit messages like "Remove the dracut_args attribute" and "Remove upd-kernel". So I spent an afternoon fairly sure it'd turn out to be one of those, reviewed all those changes, mocked up locally how they worked, examined the logs of the actual image composes, and...concluded that none of those seemed to be the problem at all. The installer seemed to still be doing things the same as it always had. There weren't any tell-tale missing or failing bootloader config steps. However, this time wasn't entirely wasted: I was reminded of exactly what anaconda does to configure the bootloader when installing from a live image.

When we install from a live image, we don't do what the 'traditional' installer does and install a bunch of RPM packages using dnf. The live image does not contain any RPM packages. The live image itself was built by installing a bunch of RPM packages, but it is the result of that process. Instead, we essentially set up the filesystems on the drive(s) we're installing to and then just dump the contents of the live image filesystem itself onto them. Then we run a few tweaks to adjust anything that needs adjusting for this now being an installed system, not a live one. One of the things we do is re-generate the initrd file for the installed system, and then re-generate the bootloader configuration. This involves running kernel-install (which places the kernel and initrd files onto the boot partition, and writes some bootloader configuration 'snippet' files), and then running grub2-mkconfig. The main thing grub2-mkconfig does is produce the main bootloader configuration file, but that's not really why we run it at this point. There's a very interesting comment explaining why in the anaconda source:

# Update the bootloader configuration to make sure that the BLS
# entries will have the correct kernel cmdline and not the value
# taken from /proc/cmdline, that is used to boot the live image.

Which is exactly what we were dealing with here. The "BLS entries" we're talking about here are the things I called 'snippet' files above, they live in /boot/loader/entries on Fedora systems. These are where the kernel arguments used at boot are specified, and indeed, that's where the problematic root=live:... arguments were specified in broken installs - in the "BLS entries" in /boot/loader/entries. So it seemed like, somehow, this mechanism just wasn't working right any more - we were expecting this run of grub2-mkconfig in the installed system root after live installation to correct those snippets, but it wasn't. However, as I said, I couldn't establish that any change to anaconda was causing this.

So I eventually shelved anaconda at least temporarily and looked at systemd. And it turned out that systemd had changed too. During the time period in question, we'd gone from systemd 250~rc1 to 250~rc3. (If you check the build history of systemd the dates don't seem to match up - by 2021-12-29 the 250-2 build had happened already, but in fact the 250-1 and 250-2 builds were untagged for causing a different problem, so the 2021-12-29 compose had 250~rc3). By now I was obviously pretty focused on kernel-install as the most likely related part of systemd, so I went to my systemd git checkout and ran:

git log v250-rc1..v250-rc3 src/kernel-install/

which shows all the commits under src/kernel-install between 250-rc1 and 250-rc3. And that gave me another juicy-looking, yet thankfully short, set of commits:

641e2124de6047e6010cd2925ea22fba29b25309 kernel-install: replace 00-entry-directory with K_I_LAYOUT in k-i
357376d0bb525b064f468e0e2af8193b4b90d257 kernel-install: Introduce KERNEL_INSTALL_MACHINE_ID in /etc/machine-info
447a822f8ee47b63a4cae00423c4d407bfa5e516 kernel-install: Remove "Default" from list of suffixes checked

So I went and looked at all of those. And again...I got it wrong at first! This is I guess a good lesson from this Debugging Adventure: you don't always get the right answer at first, but that's okay. You just have to keep plugging, and always keep open the possibility that you're wrong and you should try something else. I spent time thinking the cause was likely a change in anaconda before focusing on systemd, then focused on the wrong systemd commit first. I got interested in 641e212 first, and had even written out a whole Bugzilla comment blaming it before I realized it wasn't the culprit (fortunately, I didn't post it!) I thought the problem was that the new check for $BOOT_ROOT/$MACHINE_ID would not behave as it should on Fedora and cause the install scripts to do something different from what they should - generating incorrect snippet files, or putting them in the wrong place, or something.

Fortunately, I decided to test this before declaring it was the problem, and found out that it wasn't. I did this using something that turned out to be invaluable in figuring out the real problem.

You may have noticed by this point - harking back to our intro - that this critical kernel-install script, key to making sure your system boots, is...a shell script. That calls other shell scripts. You know what else is a big pile of shell scripts? dracut. You know, that critical component that both builds and controls the initial boot environment. Big pile of shell script. The install script - the dracut command itself - is shell. All the dracut modules - the bits that do most of the work - are shell. There's a bit of C in the source tree (I'm not entirely sure what that bit does), but most of it's shell.

Critical stuff like this being written in shell makes me shiver, because shell is very easy to get wrong, and quite hard to test properly (and in fact neither dracut nor kernel-install has good tests). But one good thing about it is that it's quite easy to debug, thanks to the magic of sh -x. If you run some shell script via sh -x (whether that's really sh, or bash or some other alternative pretending to be sh), it will run as normal but print out most of the logic (variable assignments, tests, and so on) that happen along the way. So on a VM where I'd run a broken install, I could do chroot /mnt/sysimage (to get into the root of the installed system), find the exact kernel-install command that anaconda ran from one of the logs in /var/log/anaconda (I forget which), and re-run it through sh -x. This showed me all the logic going on through the run of kernel-install itself and all the scripts it sources under /usr/lib/kernel/install.d. Using this, I could confirm that the check I suspected had the result I suspected - I could see that it was deciding that layout="other", not layout="bls", here. But I could also figure out a way to override that decision, confirm that it worked, and find that it didn't solve the problem: the config snippets were still wrong, and running grub2-mkconfig didn't fix them. In fact the config snippets got wronger - it turned out that we do want kernel-install to pick 'other' rather than 'bls' here, because Fedora doesn't really implement BLS according to the upstream specs, so if we let kernel-install think we do, the config snippets we get are wrong.
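
Concretely, the debugging loop described above looks roughly like this; this is a sketch, since the exact kernel-install arguments come from the anaconda logs and will differ per system:

chroot /mnt/sysimage
grep -r kernel-install /var/log/anaconda/    # find the exact command anaconda ran
sh -x /usr/bin/kernel-install add <kernel-version> /lib/modules/<kernel-version>/vmlinuz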

So now I'd been wrong twice! But each time, I learned a bit more that eventually helped me be right. After I decided that commit wasn't the cause after all, I finally spotted the problem. I figured this out by continuing with the sh -x debugging, and noticing an inconsistency. By this point I'd thought to find out what bit of grub2-mkconfig should be doing the work of correcting the key bit of configuration here. It's in a Fedora-only downstream patch to one of the scriptlets in /etc/grub.d. It replaces the options= line in any snippet files it finds with what it reckons the kernel arguments "should be". So I got curious about what exactly was going wrong there. I tweaked grub2-mkconfig slightly to run those scriptlets using sh -x by changing these lines in grub2-mkconfig:

echo "### BEGIN $i ###"
"$i"
echo "### END $i ###"

to read:

echo "### BEGIN $i ###"
sh -x "$i"
echo "### END $i ###"

Now I could re-run grub2-mkconfig and look at what was going on behind the scenes of the scriptlet, and I noticed that it wasn't finding any snippet files at all. But why not?

The code that looks for the snippet files reads the file /etc/machine-id as a string, then looks for files in /boot/loader/entries whose names start with that string (and end in .conf). So I went and looked at my sample system and...found that the files in /boot/loader/entries did not start with the string in /etc/machine-id. The files in /boot/loader/entries started with a69bd9379d6445668e7df3ddbda62f86, but the ID in /etc/machine-id was b8d80a4c887c40199c4ea1a8f02aa9b4. This is why everything was broken: because those IDs didn't match, grub2-mkconfig couldn't find the files to correct, so the argument was wrong, so the system didn't boot.
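
A quick way to spot that mismatch on an affected system (my own sketch, not from the post):

cat /etc/machine-id
ls /boot/loader/entries/    # the .conf file names should start with the ID above; on broken installs they don't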

Now I knew what was going wrong and I only had two systemd commits left on the list, it was pretty easy to see the problem. It was in 357376d. That changes how kernel-install names these snippet files when creating them. It names them by finding a machine ID to use as a prefix. Previously, it used whatever string was in /etc/machine-id; if that file didn't exist or was empty, it just used the string "Default". After that commit, it also looks for a value specified in /etc/machine-info. If there's a /etc/machine-id but not /etc/machine-info when you run kernel-install, it uses the value from /etc/machine-id and writes it to /etc/machine-info.

When I checked those files, it turned out that on the live image, the ID in both /etc/machine-id and /etc/machine-info was a69bd9379d6445668e7df3ddbda62f86 - the problematic ID on the installed system. When we generate the live image itself, kernel-install uses the value from /etc/machine-id and writes it to /etc/machine-info, and both files wind up in the live filesystem. But on the installed system, the ID in /etc/machine-info was that same value, but the ID in /etc/machine-id was different (as we saw above).

Remember how I mentioned above that when doing a live install, we essentially dump the live filesystem itself onto the installed system? Well, one of the 'tweaks' we make when doing this is to re-generate /etc/machine-id, because that ID is meant to be unique to each installed system - we don't want every system installed from a Fedora live image to have the same machine ID as the live image itself. However, as this /etc/machine-info file is new, we don't strip it from or re-generate it in the installed system, we just install it. The installed system has a /etc/machine-info with the same ID as the live image's machine ID, but a new, different ID in /etc/machine-id. And this (finally) was the ultimate source of the problem! When we run them on the installed system, the new version of kernel-install writes config snippet files using the ID from /etc/machine-info. But Fedora's patched grub2-mkconfig scriptlet doesn't know about that mechanism at all (since it's brand new), and expects the snippet files to contain the ID from /etc/machine-id.

There are various ways you could potentially solve this, but after consulting with systemd upstream, the one we chose is to have anaconda exclude /etc/machine-info when doing a live install. The changes to systemd here aren't wrong - it does potentially make sense that /etc/machine-id and /etc/machine-info could both exist and specify different IDs in some cases. But for the case of Fedora live installs, it doesn't make sense. The sanest result is for those IDs to match and both be the 'fresh' machine ID that's generated at the end of the install process. By just not including /etc/machine-info on the installed system, we achieve this result, because now when kernel-install runs at the end of the install process, it reads the ID from /etc/machine-id and writes it to /etc/machine-info, and both IDs are the same, grub2-mkconfig finds the snippet files and edits them correctly, the installed system boots, and I can move along to the next debugging odyssey...

Issue with the solution to the 6s problem

Posted by Adam Young on January 11, 2022 06:54 PM

I recently came across some posted solutions to the 6s problem. I'm going to argue that several of these solutions are invalid. Or, more precisely, I am going to argue that they are only considered valid due to a convention in notation.

Part of the problem definition states that you cannot add additional digits to get to the solution, only operators. The operators that are used start with addition, subtraction, multiplication, division, and factorial. To solve some of the more difficult lines of the problem, they introduce the square root operator. This, however, is the degenerate form of a fractional exponent. In other words, you can write either

<figure class="wp-block-image">{\sqrt {2}}</figure>

or

<figure class="wp-block-image">2^{{1/2}}</figure>

Note that in the bottom case, you introduce two new digits: a 1 and a 2.

To be fair, the factorial operator is also shorthand for a fairly long operation. If it were written in product notation, it would be:

<figure class="wp-block-image"></figure>

Which also introduces an additional 1.

This arbitrary distinction occurred to me when I was looking at the solution for the 8s problem. It occurred to me that 2^3 is 8, and so a more elegant solution would be to take the cube root of 8 for each digit and sum them. However, this explicitly violates the rules of the puzzle, as the symbol for the cube root is the same as the square root, but with a superscript 3.
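
Written out, that cube-root idea for the 8s row would be:

\sqrt[3]{8} + \sqrt[3]{8} + \sqrt[3]{8} = 2 + 2 + 2 = 6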

Why do I care? Because there is a pattern with notation that mixes the default case with the more explicit non-default expansions. For example, look at these two network device names:

enp1s0f0np0 and enP4p4s0f0np0.

You have to look close to parse the difference. It is easier to see when they are stacked:

  • enp1s0f0np0
  • enP4p4s0f0np0

The fact that the second one is longer helps your eye see that the third character is a lowercase p in the first and uppercase in the second. Why? That field indicates some aspect of the internal configuration of the machine, something about the bridge to which the device is attached. The first is attached to bridge 0, which is the default, and is thus elided. The second is attached to bridge 4, and is thus explicitly named.

Yeah, it is a pain to differentiate.

So the solution to the problem is based on context-sensitive parsing of the problem's definition, including the fact that the square root is considered a standard symbol without a digit to explicitly state what root is being taken.

Let’s take that option off the table. Are there still solutions to the 6s problem when it is defined more strictly? What is the set of acceptable operators that can be used to solve this puzzle? What is the smallest such set?


My polyamorous relationship with operating systems: FreeBSD, openSUSE, Fedora & Co.

Posted by Peter Czanik on January 11, 2022 09:08 AM

Recently, I have posted blogs and articles about three operating systems (or rather OS families) I use, and now people ask which one is my “true” love. It’s not easy, but I guess, the best way to describe it is that both FreeBSD and openSUSE are true ones, and Fedora & Co. is a workplace affair :-) This is why I’m writing that it is a polyamorous relationship. Let me explain!

My first ever opensource operating system was FreeBSD. I got an account on the faculty server in 1994, a FreeBSD 1.X system. A few months later, I got the task to install Linux and a year later I ended up using S.u.S.E. Linux on the second faculty server. Soon, I was running a couple of Linux and FreeBSD servers at the university and elsewhere as a part-time student job. SuSE Linux also became my desktop operating system. I have always liked state-of-the art hardware, and while I felt FreeBSD to be a lot more mature on the server-side, it did not play well on a desktop. 25+ years later, it is still the case…

SUSE Linux, which later turned into openSUSE is still my desktop OS after 25 years. Of course, just like anybody else, I tried many other distributions. I was flirting with Gentoo Linux (due to its similarity to FreeBSD) and Fedora Linux (did I mention that I love having the latest hardware?), but I’ve always returned to openSUSE within months, as soon as it ran on my new hardware.

FreeBSD became my primary server OS around the year 2000. Web servers, especially those running PHP applications, were common targets for attacks. The FreeBSD jail system, or as Linux users know it: containers, was a perfect solution for this problem, over a decade earlier than Docker and over 1.5 decades earlier than Kubernetes became available. Jails are still my preferred container technology. Unlike the early days, there are now easy-to-use tools to manage them: I use BastilleBSD.


As I mentioned, Fedora & Co. is a workplace affair. I love the Fedora community; I have more friends there than in the openSUSE and FreeBSD communities combined. But the single reason I run Fedora, RHEL, CentOS and all the other RHEL clones is syslog-ng, my current job. The vast majority of syslog-ng users run syslog-ng on RHEL and compatible systems. So, I use these operating systems only for work. Except a couple of times for a few months, when openSUSE does not run on new hardware.

So, which is the true one? There is no definite answer. When it comes to operating systems, I live in a polyamorous relationship. You can read more on the various operating systems I use in my earlier blogs:

Anaconda is getting a new suit

Posted by Fedora Community Blog on January 11, 2022 08:00 AM

It’s been quite some time since we created the current GTK-based UI for Anaconda, the OS installer for Fedora, RHEL, and CentOS. For a long time, we (the Anaconda team) have been looking for ways to modernize and improve the user experience. In this post, we would like to explain what we are working on and, most of all, inform you about what you can expect in the future.

First, we want to point out that we decided to share this information pretty early. We are currently at the stage where we have made the decisions. We already have a ‘working prototype’ of the solution, but don’t expect screenshots and demos yet!

What can you expect?

We will build the new UI as a web browser-based UI using the existing Cockpit technology. We are taking this approach because Cockpit is a mature solution with great support for the backend (Anaconda DBus). The Cockpit team is also providing us with great support, and they have significant knowledge we can draw on. We thank them for helping us a lot with the prototype and for creating a foundation for the future development.

We also chose this approach to be consistent with the rest of the system. More and more projects have support in Cockpit, so this step should make the system more consistent across different applications. A great UX improvement should be easier remote installations compared to the current VNC solution. You can expect a lot of other improvements, but let’s wait and see :).

Furthermore, we are building the new UI on top of the Anaconda modularization effort, which we have been implementing for quite some time now. It’s great to see the fruits of that work, which now help us with the creation of the new UI. It also means that users of Fedora shouldn’t be much impacted by the changes during development of the new UI. A big part of Anaconda now consists of Anaconda modules with DBus APIs, and we are reusing those APIs. We haven’t yet decided on the approach for upstream development; we will tell you more about this in the future.

At the current stage, we cannot yet communicate the expected delivery date of the new UI or the availability of a minimum viable product. However, we will make sure to keep you informed about our progress from time to time, so you know what to expect.

We are thrilled about this change and hopefully you are too! We look forward to giving you something to play with!

The post Anaconda is getting a new suit appeared first on Fedora Community Blog.

Single attribute in-place editing with Rails and Turbo

Posted by Josef Strzibny on January 11, 2022 12:00 AM

Turbo can largely simplify our front-end needs to achieve a single-page application feel. If you have ever wondered how to do a single attribute in-place update with Turbo, this post is for you.

I’ll assume you have Turbo (with the turbo-rails gem) installed, and you already have a classic model CRUD done. If you don’t, just generate a standard scaffold. I’ll use the User model and the name attribute, but it can be anything.

At this point, you might have a controller for the model looking like this:

class UsersController < ApplicationController
  before_action :set_user, only: %i[ show edit update destroy ]

  ...

  # GET /users/1/edit
  def edit
  end

  # PATCH/PUT /users/1 or /users/1.json
  def update
    respond_to do |format|
      if @user.update(user_params)
        format.html { redirect_to user_path(@user), notice: "User was successfully updated." }
        format.json { render :show, status: :ok, location: user_path(@user) }
      else
        format.html { render :edit, status: :unprocessable_entity }
        format.json { render json: @user.errors, status: :unprocessable_entity }
      end
    end
  end

  private
    # Use callbacks to share common setup or constraints between actions.
    def set_user
      @user = User.find(params[:id])
    end

    # Only allow a list of trusted parameters through.
    def user_params
      params.require(:user).permit(:name)
    end
end

You should also have all the standard views that go with it, namely views/users/show.html.erb, that we’ll modify for in-place editing of the user’s name.

We’ll make a specific page to support editing a single attribute (here, the name).

The controller change is easy. We add an edit_name method next to the original edit:

class UsersController < ApplicationController
  before_action :set_user, only: %i[ show edit edit_name update destroy password_reset ]

  # GET /users/1/edit
  def edit
  end

  # GET /users/1/edit_name
  def edit_name
  end

  # PATCH/PUT /users/1 or /users/1.json
  def update
    respond_to do |format|
      if @user.update(user_params)
        format.html { redirect_to user_path(@user), notice: "User was successfully updated." }
        format.json { render :show, status: :ok, location: user_path(@user) }
      else
        format.html { render :edit, status: :unprocessable_entity }
        format.json { render json: @user.errors, status: :unprocessable_entity }
      end
    end
  end

  private
    # Use callbacks to share common setup or constraints between actions.
    def set_user
      @user = User.find(params[:id])
    end

    # Only allow a list of trusted parameters through.
    def user_params
      params.require(:user).permit(:name)
    end
end

Notice that there is no need to change how update works; it can do the job for all the attributes at once.

And let’s not forget to make the new path accessible with a change to the routes.rb file:

Rails.application.routes.draw do
  ...

  resources :users do
    member do
      get 'edit_name'
    end
  end

  # Defines the root path route ("/")
  root "application#index"
end

Now that we have a new route and controller method to render the form for the name change, we implement the views.

We’ll add a standard view for the edit_name action (views/users/edit_name.html.erb):

<%= form_with model: @user, url: user_path(@user) do |form| %>
  <%= form.text_field :name %>
  <%= form.submit "Save" %>
<% end %>

And then wrap it with a turbo_frame_tag call:

<%= turbo_frame_tag :user_name do %>
  <%= form_with model: @user, url: user_path(@user) do |form| %>
    <%= form.text_field :name %>
    <%= form.submit "Save" %>
  <% end %>
<% end %>

Wrapping everything in turbo_frame_tag gives this form a unique identifier and determines the area that gets swapped later.

Notice that we don’t need a specific model ID for turbo_frame_tag (like the examples leveraging dom_id) as we will swap the content on the model’s show page where other user entries don’t exist.
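
For comparison, if you did want per-record frames (say, editing names directly in an index page that lists many users), a dom_id-based frame would look roughly like this; a hypothetical sketch, not part of the setup above:

<%# hypothetical views/users/_user.html.erb partial %>
<%= turbo_frame_tag dom_id(user, :name) do %>
  Name: <%= link_to user.name, edit_name_user_path(user) %>
<% end %>

The edit_name view would then need to wrap its form in a frame with the same dom_id so Turbo can match the two.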

Once prepared, we make another turbo_frame_tag on the show page with the same ID. This tells Turbo that it can swap it with the frame we defined in the previous step:

...
<%= turbo_frame_tag :user_name do %>
  Name: <%= link_to @user.name, edit_name_user_path(@user) %>
<% end %>
...

A link_to pointing to the specific path for editing the name will trigger the action, and Turbo does the rest!

Freezing your Node.js dependencies with yarn.lock and –frozen-lockfile

Posted by Josef Strzibny on January 11, 2022 12:00 AM

When Yarn introduced a lock file (similar to Gemfile.lock), it did it with an unexpected twist. If you need reproducible builds, yarn.lock is not enough.

What is a lock file? Lock files ensure that the defined dependencies from files such as package.json get pinned to specific versions. This later ensures parity on developers’ workstations, CI, and production.

Many people probably depend on Yarn doing the right thing and installing only the pinned versions from yarn.lock on yarn install. But, unfortunately, this is not the case…

The default behavior of yarn install is that the yarn.lock file gets updated if there is any mismatch between package.json and yarn.lock. Weird, right?

(In comparison, other package managers such as RubyGems would only ever look at lock files and install the pinned versions from there.)

Luckily a solution exists. The documentation for the Classic Yarn (1.x) says:

If you need reproducible dependencies, which is usually the case with the continuous integration systems, you should pass --frozen-lockfile flag.

So your yarn install command for CI and production should look like this:

$ yarn install --silent --production=true --frozen-lockfile

There is a long-standing issue for making this a default, but the developers decided to leave it for a new Yarn version which is developed under the name Berry.

Some also say that you don’t need it as you can use pinned versions directly in package.json. This is only true to some extent, though, because you would have to specify all transitive dependencies as well.
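
To illustrate with a made-up package name: even if a direct dependency is pinned to an exact version in package.json, the dependencies it pulls in are still resolved from version ranges at install time unless a lock file pins them too:

{
  "dependencies": {
    "some-package": "1.2.3"
  }
}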

If you still run without the --frozen-lockfile flag, fix it today. It will save you some headaches later.

Also note that the --frozen-lockfile flag is replaced by --immutable in modern versions of Yarn, and it’s the default in CI mode.

Episode 305 – Norton, Ethereum, NFT, and Apes

Posted by Josh Bressers on January 10, 2022 12:01 AM

Josh and Kurt talk about Norton creating an Ethereum mining pool. This is almost certainly a bad idea; we explain why. We then discuss the reality of NFTs and the case of stolen apes. NFTs can be very confusing, and the whole world of cryptocurrency is very confusing for normal people. None of this is new: there have always been con artists, and there always will be.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_305_Norton_Ethereum_NFT_and_Apes.mp3

Show Notes

Google and Facebook fined for cookies practices

Posted by Fabio Alessandro Locati on January 10, 2022 12:00 AM
The CNIL, France's data regulator, fined Meta (Facebook) and Google for violating the GDPR, for a total of 210M€. More specifically: Google LLC (USA) was fined 90M€, Google Ireland Limited was fined 60M€, and Facebook Ireland Limited was fined 60M€. Also, if the companies do not fix the issue within three months, an additional penalty of 100'000€/day will be added. There are two facts that I think are very interesting about these fines: the reason behind the fines, and the issuer of the fines.

Pluton is not (currently) a threat to software freedom

Posted by Matthew Garrett on January 09, 2022 12:59 AM
At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on page 12 and 13 of this slidedeck, suggest that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There's various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)


New Tor relays

Posted by Casper on January 08, 2022 06:00 AM

Today marks exactly one year since 4 new relays went into production. These Tor relays complete a fleet of 6 medium-speed relays.

Regarding listening ports, the new relays run on non-standard ORPort (Onion Routing Port) and DIRPort (Directory Port) ports.

I chose ports in the 26000 range for their ORPort. Ideally you would pick port 443; if it is not available, then try port 9080. If those are not available, try ports 993 and 995. If those are not available either, pick a random port in the 20000-30000 range.

In any case, as an individual, do not try to run an exit node. There are associations for that. "Nos-Oignons" is a French non-profit (association loi 1901) that runs Tor exit nodes.

Nos-Oignons: Association Nos-Oignons

About the fleet

All my relays have IPv6 connectivity and prefer IPv6 for outgoing connections to the other relays. I deployed 2 relays per machine, as 2 separate processes managed by systemd. The load on the machines is stable.

For the first time, the machines are being used at their full capacity. Below is the output of the "uptime" command:

nsa.casperlefantom.net (8 cores, new machine):
23:34:52 up 9 days, 11:29,  1 user,  load average: 1,25, 1,25, 1,26

nsd.casperlefantom.net (dual-core):
23:36:07 up 9 days, 11:17,  1 user,  load average: 1.08, 1.04, 1.04

nse.casperlefantom.net (dual-core):
22:00:51 up 9 days,  9:41,  1 user,  load average: 0.98, 0.87, 0.89

OrNetStats Screenshot

https://nusenu.github.io/OrNetStats/casperlefantom.net.html

Friday’s Fedora Facts: 2022-01

Posted by Fedora Community Blog on January 07, 2022 10:07 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
CentOS Dojo @ FOSDEM | virtual | 3–4 Feb | closes 9 Jan

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Upcoming meetings

Releases

Release | open bugs
F34 | 5481
F35 | 2509
F36 (rawhide) | 6973

Fedora Linux 36

Schedule

  • 2022-01-18 — Deadline for Self-Contained Change proposals
  • 2022-01-19 — Mass rebuild begins
  • 2022-02-08 — F36 branches from Rawhide; Rawhide begins F37 development

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Users are administrators by default in the installer GUI | Self-Contained | FESCo #2708
Enable fs-verity in RPM | System-Wide | FESCo #2711
Switch GnuTLS to allowlisting | System-Wide | Approved
Make Rescue Mode Work With Locked Root | System-Wide | FESCo #2713
Wayland By Default with NVIDIA proprietary Driver | System-Wide | Approved
GHC compiler parallel version installs | Self-Contained | FESCo #2715
Keylime subpackaging and agent alternatives | Self-Contained | FESCo #2716
Golang 1.18 | System-Wide | FESCo #2720
DIGLIM | System-Wide | FESCo #2721
LLVM 14 | System-Wide | FESCo #2726
Ruby 3.1 | System-Wide | FESCo #2727
%set_build_flags for %build and %check | System-Wide | FESCo #2728
Default To Noto Fonts | System-Wide | FESCo #2729
Hunspell Dictionary dir change | System-Wide | FESCo #2730
Relocate RPM database to /usr | System-Wide | FESCo #2731
No ifcfg by default | Self-Contained | Announced
Django 4.0 | Self-Contained | Announced
GNU Toolchain Update | System-Wide | Announced
New requirements for akmods binary kernel modules for Silverblue / Kinoite support | Self-Contained | Announced

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-01 appeared first on Fedora Community Blog.

Reading a log out of a docker file

Posted by Adam Young on January 07, 2022 04:50 PM

I have to pull the log out of a docker process to figure out why it is crashing. The Docker container name is ironic_ipxe.

cat $( docker inspect ironic_ipxe  | jq -r  '.[] | .LogPath' )
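
If you prefer doing this from Python, here is a rough equivalent sketch: it shells out to docker inspect (no jq needed) and tails the JSON log file. It assumes the default json-file logging driver and enough privileges to read the log path; the container name and the 50-line tail are just examples.

#!/usr/bin/python3
# Sketch: locate a container's log file via `docker inspect` and print its tail.
import json
import subprocess

CONTAINER = "ironic_ipxe"  # container name from the command above

raw = subprocess.run(
    ["docker", "inspect", CONTAINER],
    check=True, capture_output=True, text=True,
).stdout
log_path = json.loads(raw)[0]["LogPath"]

# The json-file driver writes one JSON object per log line ("log", "stream", "time").
with open(log_path) as f:
    for line in f.readlines()[-50:]:
        entry = json.loads(line)
        print(entry["time"], entry["log"], end="")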

Trouble with signing and notarization on macOS for Tumpa

Posted by Kushal Das on January 07, 2022 09:10 AM

This week I released the first version of Tumpa on Mac. Though the actual changes required for building the Mac app and dmg file were small, I had to rip apart the few remaining hairs on my head to get it working on any Mac other than the build box. It was the classic case of "works on my laptop".

The issue

Tumpa is a Python application which uses PySide2 and also Johnnycanencrypt which is written in Rust.

I tried both the briefcase tool and manually calling the codesign and create-dmg tools to create tumpa.app and tumpa-0.1.3.dmg.

After creating the dmg file, I had to submit it to Apple for notarisation, as follows:

xcrun /Applications/Xcode.app/Contents/Developer/usr/bin/altool --notarize-app --primary-bundle-id "in.kushaldas.Tumpa" -u "kushaldas@gmail.com" -p "@keychain:MYNOTARIZATION" -f macOS/tumpa-0.1.3.dmg

This worked successfully; after a few minutes I could see that the job had passed. So I could then staple the ticket onto the dmg file.

xcrun stapler staple macOS/tumpa-0.1.3.dmg

I could install from the file and run the application. Sounds great.

But whenever someone else tried to run the application after installing it from the dmg, it showed the following.

mac failure screenshot

Solution

It took me over 4 hours of trying all possible combinations; finally I had to pass --options=runtime,library to the codesign tool, and that did the trick. Not being able to figure out how to get more logs on Mac made my life difficult.

I had to patch briefcase to make sure I can keep using it (also created the upstream issue).

--- .venv/lib/python3.9/site-packages/briefcase/platforms/macOS/__init__.py	2022-01-07 08:48:12.000000000 +0100
+++ /tmp/__init__.py	2022-01-07 08:47:54.000000000 +0100
@@ -117,7 +117,7 @@
                     '--deep', str(path),
                     '--force',
                     '--timestamp',
-                    '--options', 'runtime',
+                    '--options', 'runtime,library',
                 ],
                 check=True,
             )

You can see my build script, which is based on input from Micah.

I want to thank all of my new friends inside SUNET who were excellent helping hands in testing the multiple builds of Tumpa. Later, many folks from IRC also jumped in to help test the tool.

PHP version 8.0.15RC1 and 8.1.2RC1

Posted by Remi Collet on January 07, 2022 07:06 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and also as base packages.

RPM of PHP version 8.1.2RC1 are available as SCL in remi-test repository and as base packages in the remi-php81-test repository for Fedora 33-34 and Enterprise Linux.

RPM of PHP version 8.0.15RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 35 or in the remi-php80-test repository for Fedora 33-34 and Enterprise Linux.

 

PHP version 7.4 is now in security mode only, so no more RCs will be released; this is also the last one for 7.4.

Installation: follow the wizard instructions.

Parallel installation of version 8.1 as Software Collection:

yum --enablerepo=remi-test install php81

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Update of system version 8.1:

yum --enablerepo=remi-php81,remi-php81-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.0:

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Notice: version 8.1.2RC1 is also in Fedora rawhide for QA.

EL-9 packages are built using RHEL-9.0-Beta

EL-8 packages are built using RHEL-8.5

EL-7 packages are built using RHEL-7.9

The RC version is usually the same as the final version (no changes are accepted after RC, except for security fixes).

Version 8.1.0RC3 is also available

Software Collections ( php74, php80)

Base packages (php)

CES 2022: my favorite announcement comes from AMD, and why it's interesting for syslog-ng

Posted by Peter Czanik on January 07, 2022 04:35 AM

For the past few days, the IT news has been abuzz with announcements from CES. As usual, I'm following them on Engadget. I must admit that only a very few announcements really caught my attention. And my favorite announcement is the most boring of them all :-)

  • Foldable tablet by ASUS: I still use my Google Pixel C tablet almost every day. It's almost six years old and waiting for a replacement. The ASUS tablet is larger and has more accurate colors, two features for the photography maniac in me. Being folded gives it a more book-like feeling when using it for reading. It also has an optional keyboard accessory, just like the Pixel C, so it's not just a content consumption device.

  • The color-changing car is a promising concept by BMW. You can express your mood through the color, but it has more practical uses as well: turning it light in bright sunshine and dark in the cold can also help in regulating temperature.

  • The autonomous tractor by John Deere is more about my university research: precision agriculture. I worked on some of the foundations, like soil sampling and correlating the results with aerial photographs. Those and much more are already in practice today. This tractor takes precision farming concepts even further.

To me the best of show is something completely boring: the AMD Ryzen 7 5800X3D. It is a CPU. Why is it interesting? It has 100MB of cache. I do regular peak performance testing of syslog-ng. It seems to me that performance is correlated with both single-core performance and cache size. I did not have a chance to test syslog-ng on the latest EPYC or Power10 CPUs, but the AMD Ryzen 7 5800X desktop CPU I use for photo editing beats any ARM, Intel or Power CPU I have tested previously with syslog-ng. And the 5800X3D has almost 3x as large a cache as my current CPU. I must say that I am amazed by the advancement of semiconductor technology and how it helps to deliver more capabilities with less power.

syslog-ng logo

Why to Start With ESP32

Posted by Zamir SUN on January 06, 2022 09:17 PM

Disclaimer: I'm just a beginner learning embedded development. I write all of my embedded articles from my own perspective as a beginner, so differing opinions from embedded professionals are unavoidable.

Last spring, I wrote about starting to learn embedded development with STM32. However, soon after I wrote that, I realized that the price of the STM32 series had gone up to an incredible level, which made me wonder whether it is still a good choice.

Recently I started to think about the beginner's question again. This time, I have some different thoughts: if the learning material is more related to daily life, the learner will pick it up much more easily. Nowadays IoT is definitely a hot topic, so if the MCU has some sort of IoT capability, it will definitely make the learner happier. An MCU with real wireless communication is therefore a better choice. Currently, there are a bunch of wireless protocols in use. The most widely mentioned include (but are not limited to)

and more.

In my opinion, it would be much easier for beginners if they could just connect the MCU to existing devices, especially mobile phones and laptops. So WiFi and Bluetooth (including BLE) stand out from the others listed.

So this time, the criteria are

  • The chips or boards should be easily available.
  • Tutorials should be easily available.
  • People should be able to develop and debug for the MCU on any major OS (Linux, MacOS, Windows).
  • There is an IDE that is easy to use even for people who do not have embedded experience.
  • The chip should have WiFi or BLE
  • Only 32bits MCUs

There aren't many choices matching these criteria, especially where I live. I already mentioned why I do not like the cc26xx and nRF5x for beginners, so that leaves only the ESP8266 series, the ESP32 series, or the WinnerMicro w600. Of these, the ESP8266 and ESP32 series have the best user communities, and materials are most widely available online. But since the ESP8266 seems to be not recommended for new designs (NRND), I think the ESP32 series stands out.

So let’s talk more about ESP32.

ESP32 is a series of wireless MCUs produced by Espressif. At the time of writing, they are:

  • ESP32, which contains Xtensa 32-bit LX6 microprocessor(s) with 2.4G WiFi and Bluetooth 4.2 BR/EDR and BLE support.
  • ESP32 S2, which contains a Xtensa 32-bit LX7 microprocessor with 2.4G WiFi
  • ESP32 S3, which contains Xtensa 32-bit LX7 microprocessors with 2.4G WiFi and Bluetooth 5(LE)
  • ESP32 C3, which contains a 32-bit RISC-V microprocessor with 2.4G WiFi and Bluetooth 5(LE).

What's more, Espressif provides Arduino support for the ESP32, ESP32 S2, and ESP32 C3. In case you are new to this, Arduino is an open-source hardware and software company whose Arduino IDE is famous for being easy to use. I hear that even artists can use Arduino for embedded-related artwork without much trouble.

Now, people can freely decide to use the ESP32 with Arduino, which is much easier to start with, or to use the ESP-IDF framework provided by Espressif directly. Espressif even wrote good getting-started guides for both Arduino-ESP32 and ESP-IDF.

There are many ESP32 boards available on the internet. The biggest differences between them are mostly in the peripherals, so just purchase whichever you prefer. If you don't know what to start with, a minimal ESP32 board like the Node32S should also work, and it should be pretty cheap (less than USD $3 here).
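
As a side note on the WiFi point: one more option besides Arduino and ESP-IDF (not covered above) is flashing MicroPython onto the board, after which joining a network takes only a few lines. This is only a sketch; the SSID and password are placeholders.

# MicroPython on an ESP32: connect to a WiFi network and print the interface config.
import network
import time

SSID = "my-network"        # placeholder
PASSWORD = "my-password"   # placeholder

wlan = network.WLAN(network.STA_IF)   # station (client) interface
wlan.active(True)
if not wlan.isconnected():
    wlan.connect(SSID, PASSWORD)
    while not wlan.isconnected():     # wait until the access point hands out an address
        time.sleep(0.5)

print("connected:", wlan.ifconfig())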

QCoro 0.5.0 Release Announcement

Posted by Daniel Vrátil on January 06, 2022 07:00 PM

It took a few months, but there's a new release of QCoro with some cool new features. This release contains a breaking change in CMake, which requires QCoro users to adjust their CMakeLists.txt. I sincerely hope this is the last breaking change for a very long time.

Major highlights in this release:

  • Co-installability of Qt5 and Qt6 builds of QCoro
  • Complete re-work of CMake configuration
  • Support for compiling QCoro with Clang against libstdc++

Co-installability of Qt5 and Qt6 builds of QCoro

This change mostly affects packagers of QCoro. It is now possible to install both Qt5 and Qt6 versions of QCoro alongside each other without conflicting files. The shared libraries now contain the Qt version number in their name (e.g. libQCoro6Core.so) and header files are also located in dedicated subdirectories (e.g. /usr/include/qcoro6/{qcoro,QCoro}). Users of QCoro should not need to make any changes to their codebase.

Complete re-work of CMake configuration

This change affects users of QCoro, as they will need to adjust the CMakeLists.txt of their projects. First, depending on whether they want to use the Qt5 or Qt6 version of QCoro, a different package must be used. Additionally, the list of QCoro components to use must be specified:

find_package(QCoro5 REQUIRED COMPONENTS Core Network DBus)

Finally, the target names to use in target_link_libraries have changed as well:

  • QCoro::Core
  • QCoro::Network
  • QCoro::DBus

The version-less QCoro namespace can be used regardless of whether you are using the Qt5 or Qt6 build of QCoro. The QCoro5 and QCoro6 namespaces are available as well, in case users need to combine both Qt5 and Qt6 versions in their codebase.

This change brings QCoro CMake configuration system to the same style and behavior as Qt itself, so it should now be easier to use QCoro, especially when supporting both Qt5 and Qt6.

Support for compiling QCoro with Clang against libstdc++

Until now, when the Clang compiler was detected, QCoro forced the use of LLVM's libc++ standard library. Coroutine support requires tight co-operation between the compiler and the standard library. Because Clang still considers its coroutine support experimental, it expects all coroutine-related types in the standard library to be located in the std::experimental namespace. In GNU's libstdc++, coroutines are fully supported and thus implemented in the std namespace. This requires a little bit of extra glue, which is now in place.

Full changelog

  • QCoro can now be built with Clang against libstdc++ (#38, #22)
  • Qt5 and Qt6 builds of QCoro are now co-installable (#36, #37)
  • Fixed early co_return not resuming the caller (#24, #35)
  • Fixed QProcess example (#34)
  • Test suite has been improved and extended (#29, #31)
  • Task move assignment operator checks for self-assignment (#27)
  • QCoro can now be built as a subdirectory inside another CMake project (#25)
  • Fixed QCoroCore/qcorocore.h header (#23)
  • DBus is disabled by default on Windows, Mac and Android (#21)

Thanks to everyone who contributed to QCoro!


Download

You can download QCoro 0.5.0 here or check the latest sources on QCoro GitHub.

More About QCoro

If you are interested in learning more about QCoro, go read the documentation, look at the first release announcement, which contains a nice explanation and example, or watch the recording of my talk about C++20 coroutines and QCoro from this year's Akademy.

Kiwi TCMS Enterprise 10.5.1

Posted by Kiwi TCMS on January 06, 2022 11:45 AM

We're happy to announce Kiwi TCMS Enterprise version 10.5.1!

IMPORTANT: this is a small release which contains minor improvements and bug-fixes.

You can explore everything at https://public.tenant.kiwitcms.org!

Kiwi TCMS Enterprise v10.5.1-mt

  • Based on Kiwi TCMS v10.5

  • Update django-python3-ldap from 0.13.0 to 0.13.1

  • Update kiwitcms-github-app from 1.3.1 to 1.3.2

    Private images:

    quay.io/kiwitcms/enterprise         10.5.1-mt       c4d745bd914c   806MB
    

IMPORTANT: version tagged and Enterprise container images are available only to subscribers!

How to upgrade

Backup first! Then execute the commands:

cd path/containing/docker-compose/
docker-compose down
docker-compose pull
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Refer to our documentation for more details!

Happy testing!

---

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Community Blog monthly summary: December 2021

Posted by Fedora Community Blog on January 06, 2022 08:00 AM
Community Blog update

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think.

Stats

In December, we published 15 posts. The site had 5,262 visits from 3,445 unique viewers. 181 visits came from search engines, while 66 came from Fedora Magazine and 12 came from Twitter.

The most read post last month was EPEL 9 is now available with 150 views.

Badges

  • Community Messenger I (1 post)
    • carlwgeorge

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly summary: December 2021 appeared first on Fedora Community Blog.

Releasing Tumpa for Mac

Posted by Kushal Das on January 05, 2022 01:50 PM

I am happy to announce the release of Tumpa (The Usability Minded PGP Application) for Mac. This release contains the old UI (and the UI bugs), but creates RSA4096 keys by default. Right now Tumpa will allow the following:

  • Create a new RSA4096 OpenPGP key. Remember to click on the "Authentication" subkey checkbox if you want to use the key for ssh.
  • Export the public key.
  • You can reset the Yubikey from the smartcard menu.
  • Upload the subkeys to a Yubikey (4 or 5).
  • Change the user pin/admin pin of the Yubikey.
  • Change the name and public key URL of the Yubikey.

The keys are stored at ~/.tumpa/ directory, you can back it up in an encrypted USB drive.

You can download the dmg file from my website.

$ wget https://kushaldas.in/tumpa-0.1.3.dmg
$ sha256sum ./tumpa-0.1.3.dmg 
6204cf3253fbe41ada91429684fccc0df87257f85345976d9468c8adf131c591  ./tumpa-0.1.3.dmg

Download & install from the dmg in the standard drag & drop style. If you are using one of the new M1 boxes, remember to click on "Open in Rosetta" for the application.

Tumpa opening on Mac

Click on “Open”.

Here is a GIF recorded on Linux; the functions are the same on Mac.

Tumpa gif

Saptak (my amazing co-maintainer) is working on a new website. He is also leading the development of the future UI, based on usability reports. We already saw a few UI issues on Mac (especially while generating a new key); those will be fixed in a future release.

Feel free to open issues as you find them, and find us in the #tumpa channel on the Libera.chat IRC network.

Cockpit 260

Posted by Cockpit Project on January 05, 2022 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly.

Here are the release notes from Cockpit 260 and cockpit-podman 39:

Certificate login validation: Action required on updates

Earlier Cockpit/sssd versions did not check trust or revocation status of a presented client certificate, and thus certificate/smart card login was secure and supported only when matching the entire binary certificate against the Identity Management’s database. With sssd 2.6.1 and Cockpit 260, the certificate signature and revocation status is now validated against the CA configured in sssd, and any non-trusted certificate is rejected. This includes the case when the local sssd has no configured CA, which may break certificate logins after updating cockpit and sssd.

Thus if you use certificate login, you need to set up the trusted CA in sssd. Please see the certificate authentication documentation for details.

This issue was assigned CVE-2021-3698.

Client: Show previously used hosts

Cockpit Client now shows a list of previously used hosts, so you can log in with one click. It is also possible to remove unwanted hosts from this list.

Screenshot from 2021-12-10 08-09-06

Client: Support port specification

Cockpit Client now supports specifying the port number when connecting, with the usual syntax like user@host.example.net:22222. This is the same form used in Cockpit bastion host configurations.

Podman: Create container in pod

It is now possible to create a container in an existing pod using the Create container in pod button.

screenshot of Create container in pod

Create container dialog now shows in which pod the container is created

screenshot of Create container in pod

Podman: Set restart policy

For system containers it is now possible to set a restart policy for new containers.

screenshot of Podman restart policy

Podman: Allow inserting multiple environment variables

Podman users can now insert multiple environment variables in bulk by copy-pasting a list of environment variables. The variables have to be formatted as FOO=bar, separated by newlines.

bridge: Warning on missing cockpit-system package

Previously, when attempting to connect to a system which had cockpit-bridge installed, but not cockpit-system, you’d get a generic “Not found” error, with no hints about how to fix that.

cockpit-bridge now detects this problem and issues a more helpful message.

screenshot of warning on missing cockpit-system package

Try it out

Cockpit 260 and cockpit-podman 39 are available now:

Week5 Blog Post

Posted by Vanessa Christopher on January 04, 2022 07:00 PM

Hi again!

This should be a very interesting one, let's dive right into it...

What makes Fedora packaging project exciting?

Well... anyone can be a package maintainer (cool, right?!). Yes, you don't need to know how to code! You just need to be willing to learn and be persistent.

This particular project is not just exciting but important as well.

How?

RPM is an open source program for installing, uninstalling and managing software packages in Linux.

So here's how this works

RPM packaging helps you install software using a simple command line... let's use Chrome as an example.

Steps to install Chrome using the Fedora Linux terminal

Install Third Party Repositories

$ sudo dnf install fedora-workstation-repositories

Enable the Google Chrome repo

$ sudo dnf config-manager --set-enabled google-chrome

Finally, install Chrome

$ sudo dnf install google-chrome-stable

How Packages Are Made

The Fedora docs give us detailed explanations, so let's get right into it.

Getting started

https://docs.fedoraproject.org/en-US/package-maintainers/Joining_the_Package_Maintainers/

Setting up the environment / Building the RPM

https://docs.fedoraproject.org/en-US/package-maintainers/Packaging_Tutorial_GNU_Hello/

https://docs.fedoraproject.org/en-US/package-maintainers/New_Package_Process_for_New_Contributors/

Creating a review request

https://docs.fedoraproject.org/en-US/package-maintainers/New_Package_Process_for_New_Contributors/#create_your_review_request

Add package to source code management (SCM)

https://docs.fedoraproject.org/en-US/package-maintainers/New_Package_Process_for_New_Contributors/#add_package_to_source_code_management_scm_system_and_set_owner

And now for another interesting part: after packages are made, the spec files need to be updated from time to time, and here's how packages are updated.

RPM Package Updates

1) Fork the project from upstream

Forking the project simply means creating a copy of the upstream project for yourself. That way you can make modifications without affecting the original project.

2) Create a pull request (PR)

After making modifications to your copy of the project (fork), you may want to add those changes to the main project (upstream). In that case you create a PR and wait for your changes to be merged into the main project (upstream).

3) Build for the Fedora system

  • Clone your project
fedpkg co <package name> 
  • Navigate to the Project directory
cd <package name> 
  • Because the repo is up to date for rawhide after the PR is merged, we just build
fedpkg build 
  • For other branches: f35/f34
fedpkg switch branch <branch name>
git merge rawhide
git push
fedpkg build
  • Push the update using the bodhi web interface because you can choose both the f35 and f34 builds and select the bug ID etc. all at once there.

Before the Outreachy program I didn't know software was even packaged by people :smile: I thought we had bots doing all the work. Fedora is just like any other Linux distro, such as Ubuntu, Manjaro and the like, but Fedora has its own unique commands.

Some packages take longer than others to finish because it is an endless journey of learning. Through this exciting journey I learnt how to set up my Fedora workstation and how to package, and I feel great fulfilment that I am contributing to society and to open source! Thank you, Outreachy.

27 Years with the Perfect OS

Posted by Peter Czanik on January 04, 2022 10:50 AM

If you are a longtime FreeBSD user, you probably know everything I have to say, and, what’s more, you can probably add a few more points. But hopefully, there will be some Linux or even Windows users among readers who might learn something new!

FreeBSD is not just a kernel but a complete operating system. It has everything needed to boot and use the system: networking utilities, text editors, development tools and more. Why is that a big deal? Well, because all these components are developed together, they work perfectly together! And a well-polished system is also easier to document. One of my favorite pieces of documentation is the FreeBSD Handbook, which covers most of the operating system and is (most of the time) up to date.

Of course, not everything can be integrated into the base operating system, and this is where FreeBSD ports and packages can be useful. The ports system provides a clean separation between the base system and third-party software, allowing you to install third-party software on top of a FreeBSD base system. There are tens of thousands of ready-to-use software packages to choose from. For example, all the graphical desktop applications are in ports, just as various web servers or more up-to-date development tools are.

FreeBSD: the power to serve

You can read the rest of my article in the FreeBSD Journal at https://issue.freebsdfoundation.org/publication/?m=33057&i=733207&p=14&ver=html5

Fedora Has Too Many Security Bugs 3

Posted by Robbie Harwood on January 04, 2022 05:00 AM

(Previously: part 2 and part 1.)

Right now, there are 1917 open CVE bugs against Fedora. This is a decrease of 172 from last year - so again, we report good news. Gratitude toward maintainers who have been reducing their backlog.

Year breakdown:

2005: 1
2011: 1
2012: 4
2013: 4
2014: 5
2015: 17
2016: 71
2017: 227
2018: 341
2019: 225
2020: 311
2021: 710

While the bug that was last year's tail (a 2009 bug) has disappeared, the tail is now much longer with the addition of the 2005 bug. The per-year deltas are:

2005: +1
2006: N/A
2007: N/A
2008: N/A
2009: -1
2010: N/A
2011: -1
2012: -5
2013: -1
2014: ±0
2015: ±0
2016: -1
2017: -25
2018: -26
2019: -35
2020: -390
2021: +302

(N/A is reported where neither last year's run nor this year's run had bugs in that year bucket, while ±0 indicates no change in the number year to year. The 2021 change is somewhat expected since there's a lag between CVEs being assigned numbers and being disclosed.)

Unfortunately, the balance has shifted back toward EPEL: EPEL has 1035 of the 1917 total, a change of +77. This has outsized impact because EPEL is much smaller than non-EPEL Fedora.

For ecosystems, the largest ones I see are:

mingw: 99 (-41)
python: 95 (-14)
nodejs: 85 (-14)
rubygem: 20 (-7)
php: 13 (-6)

and it's nice to see a reduction on all of them.

Finally, to close as before, there have been no changes to Fedora policy around security handling, nor is there a functioning Security Team at this time. Obviously no one should be forced into that role, but if anyone wants a pet project: the incentive structure here is still wrong.

For completeness, since bugzilla has changed a bit, here's my script operating on the CSV file:

#!/usr/bin/python3

import csv
import re

from collections import defaultdict

with open("Bug List 2.csv", "r") as f:
    db = list(csv.DictReader(f))

print(f"total bugs: {len(db)}")

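# Bucket the bugs by the CVE year parsed out of each bug summary (CVE-YYYY-NNNN).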
years = defaultdict(int)
r = re.compile(r"CVE-(\d{4})-")
for bug in db:
    match = r.search(bug["Summary  "])
    if match is None:
        continue
    year = match.group(1)
    years[year] += 1

for key in sorted(years.keys()):
    print(f"    {key}: {years[key]}")

epel = [bug for bug in db if bug["Product  "] == "Fedora EPEL"]

print(f"epel #: {len(epel)}")

components = defaultdict(int)
for bug in db:
    components[bug["Component  "]] += 1

# This spews - but uncomment to identify groups visually
# for c in sorted(components.keys()):
#     print(f"{c}: {components[c]}")

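# Sum the counts of every component sharing a name prefix, e.g. all "python-*" packages.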
def ecosystem(e: str) -> int:
    count = 0
    for c in components:
        if c.startswith(f"{e}-"):
            count += components[c]
    return count

print(f"mingw ecosystem: {ecosystem('mingw')}")
print(f"python ecosystem: {ecosystem('python')}")
print(f"nodejs ecosystem: {ecosystem('nodejs')}")
print(f"rubygem ecosystem: {ecosystem('rubygem')}")
print(f"php ecosystem: {ecosystem('php')}")

Using OATH in FreeIPA

Posted by Zamir SUN on January 03, 2022 02:17 PM

OATH has been the two-factor authentication (2FA) method of choice for many companies and websites. And it also exists in FreeIPA.

As a user, it's pretty straightforward to enable an OATH token in FreeIPA on your own, given that your organization has enabled the option for you.

First, log in to FreeIPA with your own account. Click your own user in the 'Active users' table.

Click the 'Action' drop-down button, then you'll see 'Add OTP Token'. Click it.

Then it will ask you to choose whether it's TOTP or HOTP. Choose according to your own preference. You can fill in something in the Description field to make it easier for you to tell your tokens apart. Then click 'Add'.

On the next screen, you'll be offered a QR code.

If you want to use a mobile app, then just scan the QR code with the mobile app you want to use. Done.

However, if you want to store the OATH token in a hardware token, like a CanoKey or Yubikey, you'd better click 'Show configuration uri'. Then configure it using the configuration tool. In my case, I'm using ykman with a CanoKey Pigeon.

ykman -r "Canokeys" oath accounts uri "otpauth://Hotp/username@EXAMPLE.COM:12345678-90ab-cdef-1234-567890abcdef?digits=6&secret=SOMESECRET&period=30&algorithm=SHA1&issuer=username%40EXAMPLE.COM"
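
For the curious, the 6-digit codes behind such an otpauth:// URI are plain RFC 4226 HOTP values computed from the shared secret. Below is a minimal Python sketch of that computation (standard library only); the base32 secret is a placeholder, not a real key. TOTP is the same computation with the counter replaced by the number of 30-second periods since the Unix epoch.

# RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamic truncation, 6 digits.
import base64
import hashlib
import hmac
import struct

def hotp(secret_b32, counter, digits=6):
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    msg = struct.pack(">Q", counter)                  # big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(hotp("JBSWY3DPEHPK3PXP", counter=0))            # placeholder secret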

Now everything has been set. Go on and configure your hardware token as needed, and you are good to go.

If you are using HOTP and your token now has a big counter skew, you can go to the FreeIPA login page, click 'Sync OTP Token', and sync your token there.

Easy, isn’t it?

2021 blog review

Posted by Kushal Das on January 03, 2022 10:31 AM

Last year I wrote only a few blog posts, 19 exactly. That also reduced the views, to around 370k from 700k the year before (iirc).

The post about Getting TLS certificates for Onion services was the most-read post this year, with 9506 views.

A major part of the year went into wondering whether we would survive it; India's medical system broke down completely (the doctors and staff did some amazing work with whatever was available). Everyone I know lost someone to COVID, including in our family. All 3 of us were down with COVID from the end of April, and the recovery was long. For a few days in between I could not remember any names.

After the COVID worries calmed down in my brain (that is, after getting the vaccines), we were next waiting for our move to Sweden.

At the beginning of 2022, things look a bit settled for us. In the last few weeks of 2021, I managed to start writing again, and I am hoping to continue. You can also read the 2018, 2017, and 2016 reviews.

Episode 304 – Will we ever fix all the vulnerabilities?

Posted by Josh Bressers on January 03, 2022 12:01 AM

Josh and Kurt talk about the question: will we ever fix all the vulnerabilities? The question came from Reddit and is very reasonable, but it turns out this is REALLY hard to discuss. The answer is of course "no", but why it is no is very complicated. Far more complicated than either of us thought it would be.

Audio: https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_304_Will_we_ever_fix_all_the_vulnerabilities.mp3

Show Notes

My 2022 Focus

Posted by Tomas Tomecek on January 02, 2022 04:27 PM

It's Sunday evening, January 2nd, and I finally made myself start writing this blog post. It has been on my plate for more than four weeks: I guess it's hard to write when you don't have a frame. I did not really know what I wanted to achieve with this writing: the new year sounded like a great excuse.

This is my first blog post with no technical content. We'll see if it is one of many going forward.

Creating a bootable USB with balenaEtcher

Posted by Fedora fans on January 01, 2022 07:30 AM
balenaEtcher

There are various tools for making a USB flash drive or SD card bootable, so that you can write your desired operating system to it and boot your system from it. Examples such as gnome-multi-writer, UNetbootin and mediawriter have been introduced before. In this post I want to introduce another tool in this area, called balenaEtcher, which can be installed and used on different operating systems.

Features of balenaEtcher:

  • Installable on different operating systems
  • A nice graphical interface
  • Free and open source
  • Flash verification
  • Drive selection

Installing balenaEtcher:

There are several ways to install balenaEtcher; here are some of them.

Method 1:

Go to the official balenaEtcher website and download the version for your operating system:

https://www.balena.io/etcher

For example, the file downloaded for Linux is named balena-etcher-electron-1.7.3-linux-x64.zip, which first needs to be unzipped:

$ unzip balena-etcher-electron-1.7.3-linux-x64.zip

After extraction you will have an AppImage file, which needs to be made executable:

$ chmod +x balenaEtcher-1.7.3-x64.AppImage

Now you can run the program by clicking on it, or by using the following command:

$ ./balenaEtcher-1.7.3-x64.AppImage

Method 2:

You can go to the project's releases page on GitHub and download the version for your operating system:

https://github.com/balena-io/etcher/releases

As an example, for 64-bit Fedora Linux we want to download the rpm package:

$ wget -c https://github.com/balena-io/etcher/releases/download/v1.7.3/balena-etcher-electron-1.7.3.x86_64.rpm

Now the downloaded rpm file can be installed with the following command:

# dnf install balena-etcher-electron-1.7.3.x86_64.rpm

Note that the file you download may have a different name or version, so use the appropriate file name in the commands above.

Method 3:

In this method, you add the balenaEtcher software repository to your system with the following command:

# curl -1sLf 'https://dl.cloudsmith.io/public/balena/etcher/setup.rpm.sh' | bash

Then install it using the DNF package manager:

# dnf install -y balena-etcher-electron

Below is a screenshot of the balenaEtcher interface:

balena-etcher-electron

The post Creating a bootable USB with balenaEtcher first appeared on Fedora Fans.

Happy New Year!

Posted by Jon Chiappetta on January 01, 2022 05:07 AM

Happy New Year! – 2022 – From DynaDock

4 metrics to measure sustainable open source investments.

Posted by Justin W. Flory on December 31, 2021 09:38 PM

The post 4 metrics to measure sustainable open source investments. appeared first on Justin W. Flory's blog.


How do we understand value when we talk about sustainability? What does investing in open source mean? The meaning is different for many people because of an implicit understanding of what open source means.

This post is a reflection on the past year in my work with the UNICEF Venture Fund. We integrated new open source tools to capture metrics and data about open source repositories connected to UNICEF portfolio companies and created a shortlist of key metrics that map to business sustainability metrics. Now, we are better positioned to look back on past, current, and upcoming portfolio companies and mentor support programs.

As we move into 2022, this post covers my current thinking on these points:

  1. Defining investments.
  2. How do these investments impact sustainability?
  3. CHAOSS metrics as an open source tool for an investment lens on sustainability.
  4. What next?

Defining investments.

When we talk about investing in open source, what do we mean? What are the known inputs? What are the expected outputs? "Investments" and "investing" are broad terms. Investments typically mean sizeable financial injections of support and growth, but they can be non-financial as well. Investments can also take the form of both time and energy (i.e. electricity and digital infrastructure).

The UNICEF Venture Fund provides equity-free funding for start-up companies building open source solutions of interest to UNICEF. All the start-up companies are registered companies in UNICEF program countries. As part of the Venture Fund’s location in the Office of Innovation, it is also a vehicle for UNICEF to explore frontier technology areas through the investments. When a start-up company is receiving investment from UNICEF, the company receives both funding and tailored mentorship about business and open technology.

A question I want to answer is: what is the impact of the received funding plus guided mentorship? How does this approach enable the companies to be successful after graduating? What discoveries or knowledge could be shared with others to assist the development of their own open programs?

To summarize, an investment can be financial or non-financial. Financial investments include direct funding, grants, venture capital, fellowships, or any other exchange of capital. Non-financial investments include time spent in coaching sessions, personalized content for companies, and shared digital infrastructure. Neither list is exhaustive.

How do these investments impact sustainability?

Bitergia Cauldron.io logo

Data makes introspection easier. Bitergia’s Cauldron.io was a champion tool for kickstarting an open source metrics strategy for the UNICEF Venture Fund. Its introduction as a tool opened up a wider span of data to look at. There are new opportunities to ask questions and explore growth, scale, and sustainability.

In order to come to a conclusion on sustainability impact, we need streamlined data to test a thesis. The Venture Fund team improved internal processes for how metrics are collected from portfolio companies. The team is unifying behind fewer tools and methods to ensure we see the same data and have the same view of the data points we measure. This also provides a fresh opportunity to review how we measure open source impact across portfolio companies. Many have dashboards on Cauldron.io, but data needs a storyteller for it to make meaning. So, the next step is to ask questions with this new data and frame a thesis to measure and test the sustainability of Venture Fund investments into open source.

Many have traveled before me on the same trail of thought. I started first with the Community Health Analytics Open Source Software (CHAOSS) project and its metrics releases. This served as the initial point of brainstorming to frame questions and different scenarios of risk, evolution, DEI, and value.

CHAOSS metrics as an open source tool for an investment lens on sustainability.

I reviewed the latest release of CHAOSS metrics and narrowed down four metrics I want to measure in the next year. I also shared thoughts on why to collect this data and how to do it. This blog post is no more than me wondering out loud, to help me frame an analytical approach for this metrics strategy.

The four metrics are detailed below:

  1. Contribution Attribution
  2. Contributors
  3. Collaboration Platform Activity
  4. Labor Investment
Take note of your dependencies and contributors. Photo by Glenn Carstens-Peters on Unsplash.

Contribution Attribution

Question: Who has contributed to an open source project and what attribution information about people and organizations is assigned for contributions?

chaoss.community/metric-contribution-attribution/

This metric is insightful because it is targeted deeply into team and project culture. This metric is a good representation of how much the project leans into an open source model of building their project. This work ethos and intention to forge on an open source path is difficult to understand at times. If a team takes care to attribute their software dependencies and other contributors to their code (if any), this is a good sign that the team accepts collaboration as a value and encourages working with others.

I would measure this across two types of contributions: attributions for software dependencies including those with permissive licenses, and for any other direct contributors to the code and how they are recognized for their participation. This could be filtered in a red-yellow-green light approach:

  1. Red: No attributions are made, or all attributions are inadequate.
  2. Yellow: Only one of the two attribution types is made, or one attribution type is inadequately attributed.
  3. Green: All dependencies and used works are correctly attributed.
Spend more time getting to know who participates and why. Photo by Alex Hudson on Unsplash.

Contributors

Question: Who are the contributors to a project?

chaoss.community/metric-contributors/

This metric explores a more human dimension of the people and participants to an open source project. The metric defines contributors and contributions broadly, as “anyone who contributes to the project in any way.” Understanding the people participating in a community, their motivations, goals, and why they choose to be in that community is important to understand sustainability. Otherwise, you may lose out on good opportunities to attract contributions from people who are already engaged, and new engagements may be difficult because of a mismatch of expectations.

This metric is more a means than it is an end; that is, it provides opportunities to ask more questions than provide detailed answers. Nevertheless, it does provide some guidance towards understanding contributors in a project, and it can lead to some concrete actions based on gathered insights. For example, this metric will enable deeper looks in areas of diversity, equity, and inclusion.

Since I work with start-up companies with small, lean development teams, I look to understand the motivations of the developers working on their projects and where those motivations may align with another open source solution. This enables the two communities to leverage their combined brainstorming to meet complementary goals around development and innovation.

To collect this data, I would have the team define what areas of contribution they seek for their open source solutions and then map those desired contributions to a specific project area or to different team members. This enables a form of consistent accountability for checking expectations against reality and understanding team capacity. Each area could be a key-value pair, where the value is the project area, team lead, or delegated team member for the type of contribution solicited.

<figure class="wp-block-image alignwide size-large is-style-default">The dashboard of an older plane is shown, with several different meters, switches, and control knobs. In many ways, the places where we collaborate on our projects can also be as complicated, and we can miss out on some useful features if we are not looking in the right place.<figcaption>There are many ways to collaborate, but the question is, are you counting the right ways?
Photo by Kai Dahms on Unsplash.</figcaption></figure>

Collaboration Platform Activity

Question: What is the count of activities across digital collaboration platforms (e.g., GitHub, GitLab, Slack, email) used by a project?

chaoss.community/metric-collaboration-platform-activity/

Collaboration platform activity is one effective proxy metric for community engagement if measured accurately. The metric does not define collaboration as much as it provides a data structure to measure it. It abstracts collaboration into key data points like timestamp, sender, whether the platform has threaded or non-threaded discussions, data collection date, and platform message identifier. To a degree, collaboration can be abstracted out in this way: a person takes any given action at a given time in a given way, and this action is measured as project-related activity on the collaboration platform.

There are a few possible approaches to collecting this data from UNICEF Venture Fund companies. The approaches do not cancel each other out; they can be combined:

  1. Measure common git activity like commits, issues, and pull/merge requests (a minimal git sketch follows this list). We already measure this data, but use it only in connection with validating Venture Fund workplans for each team with UNICEF portfolio manager(s).
  2. Count communications like comments, reviews, public messages, and other outreach. Communications strategies and tools are typically inferred from common git activity. Measuring for engagement and stratifying those metrics into a smaller group could allow for deeper insights into the evolution of early-stage open source communities.
  3. Make community hubs first-class citizens in the data curation process to infer informal engagement. Both open source projects and UNICEF Venture Fund portfolio companies use a variety of tools to communicate, especially in view of COVID-19 and its seismic impact on how we work. Platforms like Discord, Telegram, Mattermost, Slack, Rocket.chat, Matrix, and others are focal points where projects collaborate, ask questions, and support others. Bringing this data stream into the mix offers deeper insights into how teams engage and build community around their work, and also guidance on when to push for contribution opportunities at the right time.
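
As a small illustration of the first approach above, commit activity per contributor can be pulled straight from a repository clone. Here is a minimal sketch using plain git, with the one-year window as an assumption rather than anything prescribed by the metric:

# Count commits per contributor (name and email) over the last year,
# sorted by commit count; run this inside a clone of the project.
git shortlog -sne --since="1 year ago"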

Satisfying these three options in their totality is still not enough. To have the fullest impact, these metrics must tie into each other and be connected back to a narrative. Why is this data being collected, and what actions are influenced by the knowledge of this data? The data collection enables the evaluation of sustainability and understanding the birth, growth, and evolution of an open source technology product. Influenced actions can include moving more human resources (i.e. contractors or staff) to support a project, adopting a new open source best practice, and/or engaging new customers, talent, or other leads based on participation in the community.

Measuring collaboration platform activity is not black and white. Many new questions would likely come forward as part of measuring this activity. Yet this is the point—it lays the foundation for the next layer to the data collection, analysis, and reporting process around sustainability.

<figure class="wp-block-image alignwide size-large is-style-default">A man is facing forward with his back to the camera. He wears a heavy coat and a construction hard hat. The background is blurred and unclear. In this way, we can think of labor investment from a human-centered approach first.<figcaption>What is the impact of an investment on fair and equitable labor?
Photo by Jon Tyson on Unsplash.</figcaption></figure>

Labor Investment

Question: What was the cost of an organization for its employees to create the counted contributions (e.g., commits, issues, and pull requests)?

chaoss.community/metric-labor-investment/

This metric is perhaps the most ambitious of the group. How do you measure labor investment into an open source project? Or literally, the number of person-hours that go into software design, development, co-creation, and community management? It feels like a gargantuan effort, but there may be better ways to measure this in connection with other data the UNICEF Venture Fund has already collected about the businesses.

Measuring labor investment impacts two narratives: the rate of development on the open source work, and the impact of UNICEF investment into a company backing an open source work.

Firstly, the rate of development on an open source work is easier to infer when you understand who is allocated to a project and how much of their time they dedicate to it. A team of three contributors sparing a few hours a week means something different than a team of five engineers spread across different disciplines working full-time. Mapping the labor investment for open source projects supported by UNICEF would enable better planning by understanding the typical labor investment in open source workplan tasks as piloted by other Venture Fund portfolio companies.

Secondly, this gives us a new way of talking about the impact of UNICEF Venture Fund investments as an investment not only in software products but also in labor. It gives us insight into the investment of labor in software engineering talent among portfolio companies. How does this measurement change over the course of the investment? Do projects receive more or less investment of labor during the 12-month period we work with them? This could also be used as a proxy metric for the impact of our unique mentorship and coaching opportunities.

What next?

Knowing is half the journey, even if the knowledge is not yet firmly rooted. The analysis and introspection are from me as an individual working among the UNICEF Venture Fund and do not represent the views and beliefs of UNICEF or the UN in any capacity. My intent is that sharing this analysis in the open creates a space where conversation can spark where it could not before. It also invites others to share ideas, feedback, and constructive criticism of an emerging metrics strategy for investments made into the open source ecosystem.

Next, more layers can be added and internal and external validation can help to keep this moving forward. An implementation plan would be the next step to follow this post. The implementation plan considers the process of how start-up companies move through the Venture Fund from start to finish. Who interacts with the companies and when? At what point is a company ready to begin building in a new metric or count in their monthly metrics? Do they understand the implications and assessments of these metrics? At what points in the process is data already being collected? Could these new data requests be added to existing requests? And so on.

I hope to formalize some of this new reporting and metrics strategy in upcoming cohorts in 2022, as part of a renewed effort into communicating how our open source investments tie into sustainable impact towards the U.N. Sustainable Development Goals.

This post will serve as a milestone marker on the metrics strategy discussion in the coming one to two months. See you in 2022.


Featured photo by Edward Howell on Unsplash. Modified by Justin W. Flory. CC BY-SA 4.0.

Looking back on 2021 and ahead to an amazing new year

Posted by Fedora Magazine on December 31, 2021 08:00 AM

[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]

We’ve made it to the end of 2021, and I’m filled with so many emotions. On the one hand, I’m extremely proud of the work we have done this year. But on the other hand… when I wrote last year’s love letter, I thought we’d surely be able to celebrate our successes in person this year. Unfortunately, the global situation doesn’t seem to be getting any better. Our usual European winter events — DevConf.CZ and FOSDEM — are both virtual again. While I continue to hold out hope that we’ll be able to share a meal together soon, there are no clear dates in sight.

So, as we close out 2021 and approach the two-year mark of the pandemic, I’d like us all to take a moment to reflect on how we’ve continued to be a thriving community this year. Nest With Fedora brought together over 700 Fedorans—nearly twice the size of Nest 2020. We expanded our annual Fedora Women’s Day to Fedora Week of Diversity, celebrating the rich diversity that makes Fedora a great community. And we upgraded the way we communicate, bringing more conversation to Discussion and adding a new chat server using the open Matrix protocol. And all of that featured our new logo, introduced this spring.

Of course, as much as we love the Friends foundation, this community is about more than just having fun together. We also produce an excellent operating system. Fedora Workstation 34 led the way among major desktop distributions by featuring GNOME 40—a significant improvement to the widely-used desktop environment. We also changed the default audio system to PipeWire. And even though we broke our on-time streak with Fedora Linux 35, that just shows how seriously we take quality — we want to be leading edge, not “bleeding edge”, and we continue to demonstrate that in what we deliver to users.

And, thanks to all of our users for returning the love — for the second year in a row, we’re the Best Linux Distro in Linux Unplugged “Tuxies”, and our friends over at Destination Linux have some really nice end-of-year compliments as well. Phoronix says “Fedora had a stellar 2021 and continued running at the forefront of Linux innovations.” Even OMG Ubuntu! got in on the action, naming us one of the top five distros of the year and noting “In short, if you want to ride the crest of the open source wave near the front, Fedora 35 is the one to choose.”

Beyond the desktop, the Server Working Group, which had been stagnant for a while, sprung back to life this year. The team has been busy updating documentation and helping to keep Fedora Server valuable to sysadmins. Fedora CoreOS added the aarch64 architecture this year. And the Cloud Working Group made big improvements to Fedora Cloud by switching the default filesystem to BTRFS and adding hybrid BIOS & UEFI support.

Because the work we do is so technically sound, and our community embodies the best of open source, Amazon Web Services announced that the next version of AWS Linux will be based on Fedora Linux. This isn’t just validation of all the effort we put into the project, but it presents a great opportunity to grow our community. I’m looking forward to working with AWS engineers in Fedora in the coming year.

As we look forward to 2022, so much is still uncertain. But no matter what, Fedora will continue to be an inclusive, welcoming, and open-minded community that creates an innovative platform for hardware, clouds, and containers that enables software developers and community members to build tailored solutions for their users. (Yes, I just copy/pasted our Vision Statement and Mission Statement.) I really hope to see many of you in Detroit for Flock this summer or at other events around the world. But even if we need to continue with alternate ways to connect, I know we’ll all be here for each other the way we have for the last 18 years. As our big Community Outreach Revamp completes, we’ll be looking for new and returning local Fedora Ambassadors to help.

To everyone who has contributed to Fedora in any way, thank you. Packagers, blog writers, doc writers, testers, designers, artists, developers, spin builders, meeting chairs, sysadmins, Ask Fedora answerers, DEI team, and more—you kicked ass this year and it shows. Stay safe and healthy, and we’ll meet again in person soon!

Update on Linux hibernation support when lockdown is enabled

Posted by Matthew Garrett on December 31, 2021 03:36 AM
Some time back I wrote up a description of my proposed (and implemented) solution for making hibernation work under Linux even within the bounds of the integrity model. It's been a while, so here's an update.

The first is that localities just aren't an option. It turns out that they're optional in the spec, and TPMs are entirely permitted to say they don't support them. The only time they're likely to work is on platforms that support DRTM implementations like TXT. Most consumer hardware doesn't fall into that category, so we don't get to use that solution. Unfortunate, but, well.

The second is that I'd ignored an attack vector. If the kernel is configured to restrict access to PCR 23, then yes, an attacker is never able to modify PCR 23 to be in the same state it would be if hibernation were occurring and the key certification data will fail to validate. Unfortunately, an attacker could simply boot into an older kernel that didn't implement the PCR 23 restriction, and could fake things up there (yes, this is getting a bit convoluted, but the entire point here is to make this impossible rather than just awkward). Once PCR 23 was in the correct state, they would then be able to write out a new swap image, boot into a new kernel that supported the secure hibernation solution, and have that resume successfully in the (incorrect) belief that the image was written out in a secure environment.

This felt like an awkward problem to fix. We need to be able to distinguish between the kernel having modified the PCRs and userland having modified the PCRs, and we need to be able to do this without modifying any kernels that have already been released[1]. The normal approach to determining whether an event occurred in a specific phase of the boot process is to "cap" the PCR - extend it with a known value that indicates a transition between stages of the boot process. Any events that occur before the cap event must have occurred in the previous stage of boot, and since the final PCR value depends on the order of measurements and not just the contents of those measurements, if a PCR is capped before userland runs, userland can't fake the same PCR value afterwards. If Linux capped a PCR before userland started running, we'd be able to place a measurement there before the cap occurred and then prove that that extension occurred before userland had the opportunity to interfere. We could simply place a statement that the kernel supported the PCR 23 restrictions there, and we'd be fine.

Unfortunately Linux doesn't currently do this, and adding support for doing so doesn't fix the problem - if an attacker boots a kernel that doesn't cap a PCR, they can just cap it themselves from userland. So, we're faced with the same problem: booting an older kernel allows the system to be placed in an identical state to the current kernel, and a fake hibernation image can be written out. Solving this required a PCR that was being modified after kernel code was running, but before userland was started, even with existing kernels.

Thankfully, there is one! PCR 5 is defined as containing measurements related to boot management configuration and data. One of the measurements it contains is the result of the UEFI ExitBootServices() call. ExitBootServices() is called at the transition from the UEFI boot environment to the running OS, and the kernel contains code that executes before it. So, if we measure an assertion regarding whether or not we support restricted access to PCR 23 into PCR 5 before we call ExitBootServices(), this will prevent userspace from spoofing us (because userspace will only be able to extend PCR 5 after the firmware extended PCR 5 in response to ExitBootServices() being called). Obviously this depends on the firmware actually performing the PCR 5 extension when ExitBootServices() is called, but if firmware's out of spec then I don't think there's any real expectation of it being secure enough for any of this to buy you anything anyway.
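
As an aside, if you want to inspect these registers on your own machine, the current values of PCR 5 and PCR 23 can be read from userspace. A minimal sketch, assuming a TPM 2.0 device and the tpm2-tools package:

# Read the SHA-256 banks of PCR 5 and PCR 23
tpm2_pcrread sha256:5,23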

My current tree is here, but there's a couple of things I want to do before submitting it, including ensuring that the key material is wiped from RAM after use (otherwise it could potentially be scraped out and used to generate another image afterwards) and, uh, actually making sure this works (I no longer have the machine I was previously using for testing, and switching my other dev machine over to TPM 2 firmware is proving troublesome, so I need to pull another machine out of the stack and reimage it).

[1] The linear nature of time makes feature development much more frustrating


CentOS Linux 8 EOL

Posted by Fabio Alessandro Locati on December 31, 2021 12:00 AM
In December 2020, the CentOS Project announced a series of changes. The three most important are: the creation of CentOS Stream and the consequent renaming of CentOS (the classic Linux distribution the project is known for) to CentOS Linux; moving the End Of Life of CentOS Linux 8 forward to today (31/12/2021); and the fact that CentOS Linux 8 is going to be the last release, and that from now on only CentOS Stream will have new releases. That announcement created a lot of different sentiments in the community and even more among the CentOS Linux users.

Johnnycanencrypt 0.6.0 released

Posted by Kushal Das on December 30, 2021 06:36 AM

A few days ago I released 0.6.0 of Johnnycanencrypt. It is a Python module written in Rust for OpenPGP using the amazing sequoia-pgp library. It allows you to access/use Yubikeys (without gpg-agent) directly from your code.

This release took almost a year. Most of the work was done earlier, but I was not in a state to do a release.

Major changes

  • We can now sign and decrypt using both Curve25519 and RSA keys on the smartcard (we support only Yubikeys)
  • Changing of the password of the secret keys
  • Updating expiry date of the subkeys
  • Adding new user ID(s)
  • Revoking user ID(s)

I also released a new version of Tumpa, which uses this. An updated package for Debian 11 is also available.

Run Distrobox on Fedora Linux

Posted by Fedora Magazine on December 29, 2021 08:00 AM

Distrobox is a tool that allows you to create and manage container-based development environments without root privileges.

Distrobox can use either podman or docker to create containers using the Linux distribution of your choice.

The created container will be tightly integrated with the host, allowing sharing of the HOME directory of the user, external storage, external USB devices, graphical apps (X11/Wayland), and audio.

As a project, it is inspired by Container Toolbx (all the props to them!), but it aims to have broader compatibility with hosts and containers, without requiring a dedicated image for Distrobox.

It is divided into 4 parts:

  • distrobox-create – creates the container
  • distrobox-enter – to enter the container
  • distrobox-init – it’s the entrypoint of the container (not meant to be used manually)
  • distrobox-export – it is meant to be used inside the container, useful to export apps and services from the container to the host

Today we will take a look at how to use it on Fedora (Workstation and Silverblue/Kinoite) to get diverse environments based on multiple Linux distributions, right in your terminal.

Why would you want to use Distrobox

Using containers for development environments in the terminal is already tackled well by the Container Toolbx project, but you may sometimes need a specific Linux distribution, or want to export an application or a service from inside the container back to the host.

Generally speaking, it is a tool that resolves some problems like:

  • Provide a mutable environment on an immutable OS, like Endless OS, Fedora Silverblue, OpenSUSE MicroOS or SteamOS3
  • Provide a locally privileged environment for sudoless setups (eg. company-provided laptops, security reasons, etc…)
  • To mix and match a stable base system (eg. Debian Stable, Ubuntu LTS, Red Hat) with a bleeding-edge environment for development or gaming (eg. Arch or OpenSUSE Tumbleweed or Fedora with latest Mesa)
  • Leverage the abundance of curated distro images for docker/podman to manage multiple environments

How does it differ from Toolbx?

Distrobox aims to maintain a broad compatibility with distributions both on the host side and on the container side by using the official distribution’s OCI images for the containers. It supports all the major distributions and it maintains a curated table of supported and tested container images.

Installation

Installing Distrobox is quite straightforward; you can simply use the following command:

curl https://raw.githubusercontent.com/89luca89/distrobox/main/install | sudo sh

Or if you do not want (or cannot) use sudo, you can install it without root permissions:

curl https://raw.githubusercontent.com/89luca89/distrobox/main/install | sh -s -- -p ~/.local/bin/

It is also available from copr:

sudo dnf copr enable alciregi/distrobox
sudo dnf install distrobox

Distrobox depends on either podman or docker to work. Today we will explore the podman route.
On Silverblue/Kinoite you’re already good to go; on Workstation or a Spin you need to install podman, so run:

sudo dnf install podman

Getting started

To start using Distrobox, you can simply type:

luca-linux@x250:~$ distrobox-create

to create your first container. By default, it uses the fedora-toolbox 35 image.
You can specify a custom name and image by passing flags:

luca-linux@x250:~$ distrobox-create --name ubuntu-20 --image ubuntu:20.04

The above command will create a distrobox based on the plain OCI image of Ubuntu 20.
You can use a diverse ecosystem of distributions from various registries. For example, we may want to use even more bleeding edge software from AUR:

luca-linux@x250:~$ distrobox-create --name arch-distrobox --image archlinux:latest

Or we may want to use an old application that only supports Debian 8:

luca-linux@x250:~$ distrobox-create --name debian8-distrobox --image debian:8

In case the container image is not present on the host, you’ll be prompted to download it during the distrobox creation.
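
If you prefer to pull the image ahead of time with the container manager instead, here is a minimal sketch assuming podman as the backend and the Ubuntu image used above:

podman pull docker.io/library/ubuntu:20.04
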
After the creation is done, you can simply run

luca-linux@x250:~$ distrobox-enter --name arch-distrobox

to enter the container and start playing around.

<figure class="wp-block-image size-full"><figcaption>Arch Linux distrobox</figcaption></figure>

Playing around in the container

Now that we’re inside our distrobox, we can proceed to customize it as much as we want; for example, we can install that nice package that’s only in the AUR:

<figure class="wp-block-image size-full"><figcaption>Installing the atom package inside the Arch Linux distrobox</figcaption></figure>

Now we can simply launch our application and use it as a normal application:

<figure class="wp-block-image size-full"><figcaption>Running Atom from the Arch Linux distrobox</figcaption></figure>

Exporting from the container to the host

In case we installed something that we use a lot from inside the distrobox, we can export it back to the host to use it more easily, without having to launch it every time from the terminal.

We can use distrobox-export to export our app back to the host, for example:

luca-linux@x250:~$ distrobox-enter --name arch-distrobox
luca-linux@arch-distrobox:~$ distrobox-export --app atom

The application now behaves and appears like a normally installed graphical application, with icon, theme, and font integration with the host.

We can also export simple binaries and systemd services.

Say you want to install Syncthing from Ubuntu’s repositories on your Fedora Silverblue system. Simply run:

luca-linux@x250:~$ distrobox-enter --name ubuntu-21
luca-linux@ubuntu-21:~$ sudo apt install syncthing

Now export syncthing’s service from the container back to the host by running:

luca-linux@ubuntu-21:~$ distrobox-export --service syncthing@ --extra-flags
 Service ubuntu-21-syncthing@.service successfully exported.
 OK
 ubuntu-21-syncthing@.service will appear in your services list in a few seconds.
 To check the status, run:
         systemctl --user status ubuntu-21-syncthing@.service
 To start it, run:
         systemctl --user start ubuntu-21-syncthing@.service
 To start it at login, run:
         systemctl --user enable ubuntu-21-syncthing@.service

Now back on the host you can run:

luca-linux@x250:~$ systemctl --user enable --now ubuntu-21-syncthing@$USER

And you’re good to go:

luca-linux@x250:~$ systemctl --user status ubuntu-21-syncthing@luca-linux
 ● ubuntu-21-syncthing@luca-linux.service - Syncthing - Open Source Continuous File Synchronization for luca.di.maio
      Loaded: loaded (/home/luca-linux/.config/systemd/user/ubuntu-21-syncthing@.service; enabled; vendor preset: enabled)
      Active: active (running) since Wed 2021-12-22 18:10:56 CET; 1 day 2h ago
        Docs: man:syncthing(1)
    Main PID: 1210423 (distrobox-enter)
    CGroup: /user.slice/user-1000.slice/user@1000.service/ubuntu\x2d22\x2dsyncthing.slice/ubuntu-21-syncthing@luca-linux.service
              ├─1210423 /bin/sh /home/luca-linux/.local/bin/distrobox-enter -H -n ubuntu-21 --  /usr/bin/syncthing -no-browser -no-restart -logflags=0 -allow-newer-config
              └─1210445 podman --remote exec --user=luca-linux --workdir=/home/luca-linux [...]
 [....]

Installing an old or unavailable application

What if you specifically need an old application on your new system? You really need that good old deb from 2014, and there is no Flatpak available? You can resort to Distrobox:

luca-linux@x250:~$ distrobox-create --name old-ubuntu --image ubuntu:14.04
luca-linux@x250:~$ distrobox-enter --name old-ubuntu
luca-linux@old-ubuntu:~$ sudo dpkg -i ./that-old-program.deb
luca-linux@old-ubuntu:~$ distrobox-export --app that-old-program
luca-linux@old-ubuntu:~$ distrobox-export --bin /usr/bin/that-old-program --export-path ~/.local/bin

Now you have your vintage environment and can install that old deb package you found online, without messing around with alien, old glibc, or littering your main operating system.

This is also handy for apps that are not rpm-packaged and do not offer a Flatpak.

Exiting a distrobox

At any time you can exit the distrobox by simply using exit, or pressing Ctrl+D:

luca-linux@x250:~$ hostname
 x250
luca-linux@x250:~$ distrobox-enter
luca-linux@fedora-toolbox-35:~$ hostname
 fedora-toolbox-35
luca-linux@fedora-toolbox-35:~$ exit
 logout
luca-linux@x250:~$

Executing commands directly into a distrobox

You can specify custom commands to execute in the distrobox instead of the shell.
For example:

luca-linux@x250:~$ distrobox-enter --name fedora-toolbox-35 -- sudo dnf update -y
 Fedora 35 - x86_64                                                                 1.4 MB/s |  79 MB     00:56
 Fedora 35 openh264 (From Cisco) - x86_64                                           2.0 kB/s | 2.5 kB     00:01
 Fedora Modular 35 - x86_64                                                         1.3 MB/s | 3.3 MB     00:02
 Fedora 35 - x86_64 - Updates                                                       2.3 MB/s |  17 MB     00:07
 Fedora Modular 35 - x86_64 - Updates                                               1.2 MB/s | 2.8 MB     00:02
 Dependencies resolved.
[...]

This could be useful in scripts, and it’s used by the distrobox-export utility to integrate the container exports with the host.
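
As a small illustration, here is a minimal sketch of a script that runs the same command non-interactively in several distroboxes; the container names are just the examples used in this article, so adjust them to your own:

#!/bin/sh
# Run a command in each listed distrobox and print which one is being used.
for box in fedora-toolbox-35 arch-distrobox; do
    echo "== ${box} =="
    distrobox-enter --name "${box}" -- uname -a
done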

Tips and Tricks

As you may have noticed while reading this article, distrobox supports many different Linux distributions for its containers.
You can find the complete list on the project’s page: https://github.com/89luca89/distrobox#containers-distros.

It supports all the major distributions, from old to super-new versions, like:

  • Debian – from 8 to current unstable (and all the derivatives)
  • Ubuntu – from 14.04 to 22.04
  • CentOS/Red Hat/Alma Linux/Rocky Linux/Amazon Linux – from 7 to 8, and Stream 8 and 9
  • Fedora (tested 30 to 35)
  • Archlinux
  • Alpine Linux
  • Slackware
  • Void
  • Kali Linux (if you want your pentesting stuff on Silverblue)

This gives you the flexibility to use any type of software inside any distribution of your choice.

Duplicating an existing distrobox

It also comes in handy to be able to duplicate an existing distrobox. This is useful, for example, during distrobox updates, to rename a distrobox, or simply to snapshot it and save the image.

luca-linux@x250:~$ distrobox-create --name cloned-arch --clone arch-distrobox
luca-linux@x250:~$ distrobox-enter --name cloned-arch
luca-linux@cloned-arch:~$ 

Backup and restore a distrobox

To save, export and reuse an already configured container, you can leverage podman save together with podman import to create snapshots of your environment.

To save a container to an image with podman:

podman container commit -p distrobox_name image_name_you_choose
podman save image_name_you_choose:latest | gzip >image_name_you_choose.tar.gz

This will create a tar.gz of the container of your choice at that exact moment.
Now you can back up that archive or transfer it to another host; to restore it, just run

podman import image_name_you_choose.tar.gz

And create a new container based on that image:

distrobox-create --image image_name_you_choose:latest --name distrobox_name
distrobox-enter --name distrobox_name

And you’re good to go, now you can reproduce your personal environment everywhere in simple (and scriptable) steps.

Managing your distroboxes

To manage your running containers, you can simply use your container manager of choice:

luca-linux@x250:~$ podman ps -a
 CONTAINER ID  IMAGE              COMMAND               CREATED      STATUS          PORTS       NAMES
 3bd26417ec22  /ubuntu:21.10      /usr/bin/entrypoi...  2 days ago   Up 2 days ago               ubuntu-21
 36101d9e2d17  archlinux:latest   /usr/bin/entrypoi...  3 hours ago  Up 3 hours ago              arch-distrobox

You can delete an existing distrobox using:

podman stop your_distrobox_name
podman rm your_distrobox_name

You can read more about Podman in this Magazine Article.

Conclusion

In conclusion, distrobox can be a handy tool both on Fedora Workstation and on Silverblue/Kinoite, allowing both backward and forward compatibility with software and freedom to use whatever distribution you’re more comfortable with.

The project is still in active development, so any type of contribution, including reporting bugs, is welcome.

F35 retrospective results

Posted by Fedora Community Blog on December 28, 2021 08:00 AM

After the release of Fedora Linux 35, I conducted a retrospective survey. I wanted to see how contributors felt about the release process and identify possible areas we can improve. There was no particular reason to start this with F35 other than that’s when I got around to doing it. So how did F35 go? Let’s look at the results from the 63 responses.

General observations

Although I intended the survey to be about the process of producing Fedora Linux 35, many of the open-ended answers were about the end result. The good news is that the comments were largely positive. But for the purposes of this post, we’ll focus just on the process-related responses.

Stress

I asked respondents to rate the stress of F35 compared to previous releases. 21 people (33%) said it was the same and 19 (30%) said it was less stressful. Only 9 people (14%) felt F35 was more stressful.

<figure class="wp-block-image size-full"></figure>

I wondered about how the stress varied by role, so I separated it by the role identified in one of the questions. The choices were package maintainer, QA tester, Beta user, and other. Respondents could select multiple roles.

<figure class="wp-block-table">
Package MaintainerQA TesterBeta UserOther
More Stressful5361
Same Stress1210114
Less Stressful73120
</figure>

QA testers were more likely to see F35 as being the same stress as previous releases. Overall, the percentage distribution held pretty close across the different roles.

<figure class="wp-block-image size-full"></figure>

Beta delay impact

The F35 Beta was delayed, which shortened the post-Beta “thaw” to one week. I was curious how that affected contributors. Overall, it mostly didn’t. The majority of respondents had a neutral opinion. The number of people who thought it made their work harder was roughly the same as the number of people who thought it made their work easier.

<figure class="wp-block-image size-full"></figure>

As you might expect, package maintainers were more likely to say it made their work harder. Beta testers favored the delay a bit, and QA testers were evenly split.

<figure class="wp-block-image size-full"></figure>

What could make it easier to contribute?

A few key themes popped up in the answer to this question. The first was having more time in the day or more free time. I’m sorry to say that there’s not much I can do for you there.

The second theme was better onboarding and communication. Some of that will be addressed as a result of the QA team’s retrospective. In particular, we’re developing training and documentation for Bugzilla usage. One of the other comments focused on making sure the release notes properly called out potentially breaking changes (for example, firewalld 1.0).

Improving the ease of contribution is important for attracting and retaining new contributors. If we end up setting a goal of doubling the number of active contributors, we must make improvements here.

What went well? How could we improve?

It’s important for retrospectives to include what went well, not just what went poorly. I thought this comment was particularly apt: “No heroics.” In part due to our willingness to delay the release when blockers were still unfixed, people generally felt less of a need to work long nights. There was more time to get the tests done.

On the other hand, we weren’t entirely without rushes. WirePlumber had several late fixes that nearly caused it to be a last-minute drop from the final release. And some contributors feel that the QA team focuses on the x86_64 architecture too much, to the detriment of ARM testing. And one person commented “more QA testing” would help, which is always true.

One specific suggestion was to “leave old libs in -compat packages until 3rd party repositories such as Rpmfusion have had a chance to rebuild their packages.” This seems like a good policy.

What’s next

11 people offered suggestions for the F36 retrospective survey. Most of them are either comments or out of scope. But I do plan to have some different questions next time, in particular to ask whether events or policy changes that happen between now and April have an impact. With the stress question in particular, I’m less concerned about the specific values and more interested in the trends that develop over time.

If you’d like to poke at the data yourself, the results (with some fields excluded for privacy) are available on my Fedorapeople page.

If you left contact information for follow up, I’ll be starting those conversations this week.


Firewalld Fedora 34 -> 35 Masquerade between Zones not working anymore

Posted by Jens Kuehnel on December 27, 2021 12:33 PM

I updated my firewall from Fedora 34 to 35 and it was not working anymore. A poorly documented change that came with the release of firewalld 1.0 hit me.

The fix is simple once you find it.

firewall-cmd --permanent --new-policy policy_int_to_ext
firewall-cmd --permanent --policy policy_int_to_ext --add-ingress-zone public
firewall-cmd --permanent --policy policy_int_to_ext --add-egress-zone external
firewall-cmd --permanent --policy policy_int_to_ext --set-priority 100
firewall-cmd --permanent --policy policy_int_to_ext --set-target ACCEPT
firewall-cmd --permanent --zone=external --add-masquerade
systemctl restart firewalld
firewall-cmd --info-policy policy_int_to_ext

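To double-check the result, it also helps to confirm which zones your interfaces are assigned to and that the new policy exists. The zone names used above (public and external) come from my setup and may differ on yours:

firewall-cmd --get-active-zones
firewall-cmd --get-policies
firewall-cmd --info-zone=external
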
Source

Using your OpenPGP key on Yubikey for ssh

Posted by Kushal Das on December 27, 2021 07:52 AM

Last week I wrote about how you can generate ssh keys on your Yubikeys and use them. There is another way of keeping your ssh keys secure: using your already existing OpenPGP key (along with an authentication subkey) on a Yubikey for ssh.

In this post I am not going to explain how to move your key to a Yubikey, only the steps required to start using it for ssh access. Feel free to have a look at Tumpa if you want an easy way to upload keys to your card.

Enabling gpg-agent for ssh

First we have to create a gpg-agent.conf file with the correct configuration. Remember to use a different pinentry program if you are on Mac or KDE.

❯ echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf
❯ echo "pinentry-program $(which pinentry-gnome)" >> ~/.gnupg/gpg-agent.conf
❯ echo "export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)" >> ~/.bash_profile
❯ source ~/.bash_profile 
❯ gpg --export-ssh-key <KEYID> > ~/.ssh/id_rsa_yubikey.pub

At this moment your public key (for ssh usage) is at ~/.ssh/id_rsa_yubikey.pub file. You can use it in the ~/.ssh/authorized_keys file on the servers as required.
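
For example, to install it on a server (the user and host below are just placeholders):

❯ ssh-copy-id -i ~/.ssh/id_rsa_yubikey.pub user@server.example.com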

We can then restart the gpg-agent using the following command and then also verify that the card is attached and gpg-agent can find it.

❯ gpgconf --kill gpg-agent
❯ gpg --card-status
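
As an extra check, you can also ask the agent to list the identities it offers to ssh; the authentication key from the card should show up in the output:

❯ ssh-add -L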

Enabling touch policy on the card

We should also enable the touch policy on the card for the authentication operation. This means that every time you try to ssh using the Yubikey, you will have to touch it (the light will keep flashing until you do).

❯ ykman openpgp keys set-touch aut On
Enter Admin PIN: 
Set touch policy of authentication key to on? [y/N]: y

If you still have servers where only the old key is installed, the ssh client will be smart enough to ask you for the passphrase for those keys.

Episode 303 – Log4j Christmas Spectacular!

Posted by Josh Bressers on December 27, 2021 12:01 AM

Josh and Kurt start the show with the reading of a security themed Christmas poem. We then discuss some of the new happenings around Log4j. The basic theme is that even if we were over-investing in Log4j, it probably wouldn’t have caught this. There are still a lot of things to unpack with this event. We are sure we’ll be talking about it well into the future.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2670-3" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_303_Log4j_Christmas_Spectacular.mp3?_=3" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_303_Log4j_Christmas_Spectacular.mp3</audio>

Log before Christmas poem

‘Twas the night before Christmas, when all through the stack
Not a scanner was scanning, not even a rack,

The SBOMs were uploaded to the portal with care,
In hopes that next year would be boring and bare

The interns were nestled all snug at their beds;
While visions of dashboards danced in their heads;

The CISO in their ‘kerchief, and I in my cap,
Had just slept our laptops for a long winter’s nap,

When all of a sudden the pager went ack ack
I sprang to my laptop with worries of attack

Away to the browser I flew like a flash,
Tore open the window and cleared out the cache

The red of the dashboard the glow of the screen
Gave a lustre of disaster my eyes rarely seen

When what to my wondering eyes did we appear,
But a new advisory and eight vulnerabilities to fear,

Like a little old hacker all ready to play,
I knew in a moment it must be Log4j

More rapid than gigabit its coursers they came,
And it whistled, and shouted, and called them by name:

“Now, Log4Shell! now CVE! now ASF and NVD!
On, CISA! on, LunaSec! on, GossiTheDog!

To the top of the HackerNews! to the top of the wall!
Now hack away! hack away! hack away all!”

Like the bits that before the wild CDN fly by
When they meet with a firewall, they mount to the sky;

So up to the cloud like bastards they flew
With tweets full of vulns, and Log4j too—

And then, in a twinkling, I read in the slack
The wailing and screaming of each analyst called back

As I drew in my head, and was turning around,
Down the network Log4j came with a bound.

It was dressed in a hoodie, black and zipped tight,
The clothes were all swag from a conference one night

A bundle of vulns it had checked in its git
And it looked like a pedler just being a twit

The changelog—how it twinkled! its features, how merry!
Its versions were like roses, its logo like a cherry!

Its droll little mouth was drawn up like an at,
And the beard on its chin made it look stupid and fat

The stump of a diff it held tight in its teeth,
And the bits, they encircled the repo like a wreath;

It had a flashy readme an annoying little fad
That shook when it downloaded, like a disk drive gone bad

It was chubby and plump, an annoying old package,
And I laughed when I saw it, in spite of the hackage

A wink of its bits and a twist of its head
Soon gave me to know I had everything to dread

It spoke not a word, but went straight to its work,
And pwnt all the servers; then turned with a jerk,

And laying its patches aside of its nose,
And giving a nod, up the network it rose;

It sprang to its packet, to its team gave them more,
And away they all fled leaving behind a back door

But I heard it exclaim, ere it drove out of sight—
“Merry Christmas you nerds, Log4j won tonight!”

DynaDock ~ macOS App ~ Xcode+Objective-C

Posted by Jon Chiappetta on December 26, 2021 10:41 PM
<figure class="aligncenter size-large"></figure>

I created a macOS app called DynaDock which can dynamically flip through a set of low-information icons while active in your dock. It only has two screens so far (month+day and wind-direction), but more could be added in the future.

Source Code: github.com/stoops/DynaDock