Fedora People

Episode 307 – Got vulnerabilities? Introducing GSD

Posted by Josh Bressers on January 24, 2022 12:01 AM

Josh and Kurt talk about the Global Security Database (GSD) project. This is a Cloud Security Alliance (CSA) effort to build community around vulnerability identifiers.

Audio: https://traffic.libsyn.com/secure/forcedn/opensourcesecuritypodcast/Episode_307_Got_vulnerabilities_Introducing_GSD.mp3

Show Notes

Exploring GIT Commit Hashes & Generating Cryptographic Zeros

Posted by Jon Chiappetta on January 23, 2022 09:03 PM

So I was trying to research what goes into generating a GIT commit hash and I thought I would try to personalize the cryptographic hash output a little bit. My computer isn’t that powerful but it may be possible to generate more zeros!

import time, hashlib, subprocess

# Read the HEAD commit object and replace both date lines (author and
# committer) with printf-style "%d %s" placeholders we can fill in below.
head = subprocess.check_output("git cat-file commit HEAD | sed -e 's/> .*$/> %d %s/'", shell=True).decode("utf-8")
secs = int(time.time())
rnds = (secs - 99999999)
offs = "-0500"
begs = ("0" * 6)  # target prefix: six leading zeros
ghsh = ""

# Brute-force the committer timestamp until the hash starts with the prefix.
# A commit hash is the SHA-1 of "commit <size>\0<commit object body>".
while (not ghsh.startswith(begs)):
    rnds = (rnds + 1)
    comm = (head % (secs, offs, rnds, offs)).encode("utf-8")
    data = (b"commit %d\0%s" % (len(comm), comm))
    ghsh = hashlib.sha1(data).hexdigest()

chk = input("Commit hash [%s]?: " % (ghsh)).strip()
if (chk.lower().startswith("y")):
    comd = ("GIT_COMMITTER_DATE='%d %s' git commit -a --amend --no-edit --date='%d %s'" % (rnds, offs, secs, offs))
    subprocess.call(comd, shell=True)
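
If you accept the amend, you can verify the result with ordinary git plumbing (standard commands, nothing specific to this script):

git rev-parse HEAD           # should now print a hash starting with "000000"
git cat-file commit HEAD     # shows the tweaked author/committer timestamps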

Contribute at the Fedora Linux 36 Test Week for Kernel 5.16

Posted by Fedora Community Blog on January 22, 2022 08:00 AM

The kernel team is working on final integration for kernel 5.16. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Sunday, January 23, 2022 through Sunday, January 29, 2022. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that provides all the steps in written form.

Happy testing, and we hope to see you on test day.

The post Contribute at the Fedora Linux 36 Test Week for Kernel 5.16 appeared first on Fedora Community Blog.

Week7 Blog Post

Posted by Vanessa Christopher on January 22, 2022 04:30 AM

Hello there, and welcome once again to my blog!

This is about a journey of adventure, headaches, victory dances and learning!

What was my original internship project timeline?

Week 1-2: Learn the Fedora package maintainer process, and improve existing packages via pull requests to practice the workflow.

Week 3-5: Go over the NeuroFedora packaging queue and identify a list of tools for packaging.

Week 6-9: Follow the Fedora package maintaining process, and submit packages for review.

Week 10: Follow the review process and make necessary improvements to submitted packages.

Week 12: Submit packages for QA, test them, update documentation.

What have I accomplished in the first half of the internship?

I would say being an open source contributor is my biggest achievement so far, as this is my first open source project, and I get to contribute to an organization as big as Fedora.

In this first half I've been able to write spec files for Python packages, make updates, and even review other packages, learning more about git as I practice. Generally, I think my mentor and I have already gone over the list 😄

What project goals took longer than expected?

The task I found most challenging was learning how to write blog posts for a non-technical audience, but my mentor carefully took me through a session where he explained the basics of writing for a non-technical audience, and I'm sure I'll get better with more practice.

Also, building some packages took longer than expected, but with some help, of course, I had my victory dance.

What would I do differently if I were starting the internship over?

If I were to start my internship over again? Well, I wouldn't change much, because I love my learning curve and speed. Sometimes it's challenging and I spend more time on projects than expected, but my mentor tries to help me out and adjusts my timing to help me understand better.

But... I think I should have started taking notes earlier, as I forget a lot of little details 😄

What is my new plan for the second half of the internship?

I plan on making notes at every step and also timing myself to keep up with my given tasks.

My Outreachy internship has really been awesome. It gave me the opportunity to be part of a bigger community, to learn things I never knew about, to meet awesome people, and to become a better version of myself.

Friday’s Fedora Facts: 2022-03

Posted by Fedora Community Blog on January 21, 2022 09:44 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
Dutch PHP Conference | Amsterdam, NL | 1–2 Jul | closes 30 Jan
WeAreDevelopers World Congress 2022 | Berlin, DE | 14–15 Jun | closes 31 Jan
OpenSource 101 | virtual | 29 Mar | closes 4 Feb
Open Source Summit NA | Austin, TX, US & virtual | 21–24 Jun | closes 14 March

Help wanted

Upcoming test days

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID | Component | Status
1955416 | shim | NEW
2032528 | flatpak | NEW

Meetings & events

Releases

Release | open bugs
F34 | 5418
F35 | 2757
F36 (rawhide) | 7155

Fedora Linux 36

Schedule

  • 2022-02-08 — F36 branches from Rawhide; Rawhide begins F37 development

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Users are administrators by default in the installer GUI | Self-Contained | FESCo #2708
Enable fs-verity in RPM | System-Wide | FESCo #2711
Make Rescue Mode Work With Locked Root | System-Wide | FESCo #2713
GHC compiler parallel version installs | Self-Contained | FESCo #2715
Keylime subpackaging and agent alternatives | Self-Contained | FESCo #2716
Golang 1.18 | System-Wide | FESCo #2720
DIGLIM | System-Wide | FESCo #2721
Default To Noto Fonts | System-Wide | Approved
Hunspell Dictionary dir change | System-Wide | Approved
Relocate RPM database to /usr | System-Wide | Approved
No ifcfg by default | Self-Contained | FESCo #2732
Django 4.0 | Self-Contained | Approved
GNU Toolchain Update | System-Wide | Announced
New requirements for akmods binary kernel modules for Silverblue / Kinoite support | Self-Contained | FESCo #2735
Malayalam Default Fonts Update | Self-Contained | FESCo #2736
Ibus-table cangjie default for zh_HK | Self-Contained | FESCo #2737
MLT-7 | Self-Contained | Announced
Ruby on Rails 7.0 | Self-Contained | Announced
Wayland by Default for SDDM | Self-Contained | Announced
Authselect: Move State Files to /etc | Self-Contained | Announced
Silverblue and Kinoite will have /var on its own Btrfs subvolume | Self-Contained | Announced
Cockpit File Sharing | Self-Contained | Announced

Fedora Linux 37

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Python Dist RPM provides to only provide PEP503-normalized names | Self-Contained | Announced

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-03 appeared first on Fedora Community Blog.

CPE Weekly Update – Week of January 17th – 22nd

Posted by Fedora Community Blog on January 21, 2022 01:00 PM

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on libera.chat (https://libera.chat/).

We (the CPE team) will be joining the Fedora Social Hour on Jan 27th. Looking forward to seeing a lot of you! (https://discussion.fedoraproject.org/t/join-us-for-fedora-social-hour-every-week/18869/46)

Highlights of the week

Infrastructure & Release Engineering

Goal of this initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work. It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.). The ARC (a subset of the team) investigates possible initiatives that CPE might take on.

Update

Fedora Infra

  • All koji builders/hubs upgraded to F35 and ready for mass rebuild ( 🤞 )
  • Additional s390x disk space appeared, so we added 10 more s390x builders.
  • Fixed IPA issue with certs (known upgrade bug)
  • Solved a difficult issue with failing container builds.

CentOS Infra including CentOS CI

  • CentOS Linux 8 EOL plan
  • Hardware issues (storage box, 64 compute nodes for CI infra)
  • Kmods SIG DuD discussion (koji plugin vs external script)
  • CI storage for ocp/openshift migration completed and working faster/better!
  • CentOS CI tenants Survey (for the upcoming DC move)

Release Engineering

  • Mass rebuild starts today
  • Several rawhide issues fixed and composes have been good the last few days.

CentOS Stream

Goal of this initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this
new distribution a reality. The goal of this initiative is to prepare
the ecosystem for the new CentOS Stream.

Updates

  • The NFV repo was added to CentOS Stream 8; work is happening now on the repo files in centos-release
  • Module branching work is ongoing
  • Libffi is causing some interesting breakage in ELN
  • GCC bugs in ELN
  • Koji/brew Inheritance discussions are still happening
  • Testing Content Resolver with production data before deployment

Datanommer/Datagrepper V.2

Goal of this initiative

The datanommer and datagrepper stacks currently rely on fedmsg, which we want to deprecate. These two applications need to be ported off fedmsg to fedora-messaging. As these applications are ‘old-timers’ in the Fedora infrastructure, we would also like to look at optimizing the database or potentially redesigning it to better suit the current infrastructure needs. For a phase two, we would like to focus on a DB overhaul.

Updates

  • It’s done! Data is migrated, the new code is now running in prod.

CentOS Duffy CI

Goal of this initiative

Duffy is a system within CentOS CI Infra which allows tenants to provision and access bare metal resources of multiple architectures for the purposes of CI testing. We need to add the ability to check out VMs in CentOS CI in Duffy. We have an OpenNebula hypervisor available and have started developing playbooks which can be used to create VMs using the OpenNebula API, but due to the current state of how Duffy is deployed, we are blocked on new dev work to add the VM checkout functionality.

Updates

  • Legacy API
  • Node Pools & Ansible Backend

Image builder for Fedora IoT

Goal of this initiative

Integration of Image Builder as a service with Fedora infra to allow Fedora IoT to migrate their pipeline to Fedora infra.

Updates

  • Officially kicked off this week
  • Fact finding at the moment
  • Met with Peter Robinson and team from Fedora IoT
    • Need to figure out a way to run their pipeline
    • At least 1 koji plugin to be written + deployed
  • Meeting with Image Builder team tomorrow
    • They are currently blocked by auth, development underway
    • Need to get an idea of their API and what they expect from us

Bodhi

Goal of this initiative

This initiative is to separate Bodhi into multiple subpackages, fix integration and unit tests in CI, fix dependency management, and automate part of the release process. Read the ARC team findings in detail at: https://fedora-arc.readthedocs.io/en/latest/bodhi/index.html

Updates

  • splitting the codebase into separate python packages
  • migrating from CentOS CI to Zuul

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including the build system, Bugzilla instance, updates manager, mirror manager, and more.

Updates

  • epel9 is up to 1346 source packages (an increase of 188 from last week)
  • Two talks submitted and accepted for the February CentOS Dojo
    • State of EPEL
    • EPEL Packaging Hackfest

Kindest regards,
CPE Team

The post CPE Weekly Update – Week of January 17th – 22nd appeared first on Fedora Community Blog.

PHP version 8.0.15 and 8.1.2

Posted by Remi Collet on January 21, 2022 10:37 AM

RPMs of PHP version 8.1.2 are available in remi-php81 repository for Fedora 33-35 and Enterprise Linux (RHEL, CentOS).

RPMs of PHP version 8.0.15 are available in remi repository for Fedora 35 and remi-php80 repository for Fedora 33-34 and Enterprise Linux (RHEL, CentOS).

No security fix this month, so no update for version 7.4.27.

PHP version 7.3 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository and as a module for Fedora and EL ≥ 8.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.1 installation (simplest):

yum-config-manager --enable remi-php81
yum update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*
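
After switching, a quick sanity check (my addition, not from the original announcement) confirms the active version:

php --version

It should report PHP 8.1.x; dnf module list php will also show which stream is enabled.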

Parallel installation of version 8.1 as Software Collection

yum install php81

Replacement of default PHP by version 8.0 installation (simplest):

yum-config-manager --enable remi-php80
yum update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

yum-config-manager --enable remi-php74
yum update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

Parallel installation of version 7.4 as Software Collection

yum install php74

And soon in the official updates:

To be noted:

  • EL-8 RPMs are built using RHEL-8.5
  • EL-7 RPMs are built using RHEL-7.9
  • EL-7 builds now use libicu69 (version 69.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • the oci8 extension now uses Oracle Client version 21.3
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php74 / php80 / php81)

Contribute at the Fedora Linux 36 Test Week for Kernel 5.16

Posted by Fedora Magazine on January 21, 2022 08:00 AM

The kernel team is working on final integration for kernel 5.16. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Sunday, January 23, 2022 through Sunday, January 29, 2022. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that provides all the steps in written form.

Happy testing, and we hope to see you on test day.

Using Python to access a Solid Pod

Posted by Kushal Das on January 21, 2022 07:42 AM

solid logo

From the project website:

Solid is a specification that lets people store their data securely in decentralized data stores called Pods. Pods are like secure personal web servers for your data.

We can host these Pods on personal servers or at any provider. Everything is tied to the user identity, called a WebID. It is an HTTP URI described in an RDF document.

You can decide who or what can access your data. Applications and humans can use Solid authentication to identify themselves to the Pod server and access the data using open protocols.

How to get a Pod?

The website lists current vendors who provide Pod services. If you want to play around locally, you can run the community server on your local system. Or just create an account at solidcommunity.net, and use that to learn more.
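
As a quick illustration, the community server can be launched with a single command (assuming Node.js and npm are installed; it serves on port 3000 by default):

$ npx @solid/community-server

You can then point clients at http://localhost:3000 instead of a hosted provider.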

Using Python with Solid

We already have a solid-file Python module. You can install it via pip in a virtual environment.

$ python3 -m venv .venv
$ source .venv/bin/activate
$ python3 -m pip install solid-file

For the rest of the example code, I am going to use my Pod at solidcommunity.net; feel free to replace the username/password and URL values in the code as required.

USERNAME = "kushaldas"
PASSWORD = "******************************************"

IDP = 'https://solidcommunity.net'
POD_ENDPOINT = "https://kushaldas.solidcommunity.net/public/"

import io  # needed below for io.BytesIO

from solid.auth import Auth
from solid.solid_api import SolidAPI

auth = Auth()
api = SolidAPI(auth)
auth.login(IDP, USERNAME, PASSWORD)

Here we are importing the module, creating an Auth object, and logging in with the username and password.

Then we check whether a folder exists (it does not yet), and create it if it is missing.

folder_url = f"{POD_ENDPOINT}/languages/"
if not api.item_exists(folder_url):
    print(api.create_folder(folder_url))

The output is <Response [201 Created]>.

Next, we create two text files.

data = io.BytesIO("I ❤️ 🦀".encode("utf-8"))
file_url = f"{folder_url}hello.txt"
print(api.put_file(file_url, data, 'text/plain'))
data = io.BytesIO(b"Already 10 years of SOPA blackout")
msg_url = f"{folder_url}message.txt"
print(api.put_file(msg_url, data, 'text/plain'))

We can then list all of the items under our subfolder.

folder_data = api.read_folder(folder_url)
files = "\n".join(list(map(lambda x: x.name, folder_data.files)))
print(f'Files in the folder: \n{files}')

We can then try to read one of the files we just created.

resp = api.get(file_url)
print(f"**{resp.text}**")

Output:

Files in the folder:
hello.txt
message.txt
**I ❤️ 🦀**

Why am I looking at Solid?

Solid as a specification is evolving along with the community. One use case for any government organization would be letting citizens control access to their own data and authorize who gets to access it. Solid can become the answer to that question. The specification is loose enough to allow building things easily on top of it.

I would love to see Python as a major part of this ecosystem. The solid-file project maintainers are doing a great job. But we need more related projects, including proper examples of various use cases. Maybe a server too.

In action...@redken_bot with examples

Posted by Pablo Iranzo Gómez on January 20, 2022 04:20 PM
See how to use @redken_bot in Telegram or Discord with some examples!

Outreachy Project “Mote” progress update

Posted by Fedora Community Blog on January 20, 2022 08:00 AM

Subhangi Choudhary is working on Mote as an Outreachy intern. This blog post covers her experience and project updates so far.

Experience so far

I had heard about the Outreachy internship from one of my seniors, who was sharing her experience at my college when I was in my second year. I decided to give it a shot with utmost dedication after understanding how Outreachy can be a great learning experience – the kind needed to excel in the IT industry. Outreachy helps people from under-represented groups and is a life-changing experience for a contributor. I feel happy now that I am working with amazing mentors who guide and motivate me at every step. This opportunity wouldn’t be possible without the support of my parents, friends, and mentors.
I have always shared my knowledge and experience with beginners, and this is a chance for me to prove myself capable and then help other people contribute to open source. I am excited for the next 3 months of getting to know community members and helping with the project.

I am currently in week 5 of my Outreachy internship, and I must say it’s going great. Every day is a new learning experience for me, with lots of new implementations, goals, and tasks. This keeps me motivated and fresh on the project.

Project update

Once the community bonding period was over, my mentor and I started working on a detailed timeline and road map.

In the first week, I worked on the landing page of the Mote project and its design and UI. I was trying to implement a calendar from scratch in Vanilla JS. Thankfully my mentor recommended I use an existing customizable calendar plugin called fullcalendar.io. Documentation for this plugin is well maintained and has been extremely helpful as I started implementing the backend. I used the calendar to fetch data of all the meetings that happened in the last month. Once the calendar page loads, a function is called to fetch data and render UI.

The calendar displays the recently fetched meetings as events on their particular dates. We made two views: week and month. In the week view, it displays all the meetings week by week in the current month. We also worked on the today button, which becomes active in the week view once you navigate to the current date. Clicking a meeting redirects to the meeting details page that shows the minutes. I adjusted the aspect ratio to widen the navbar controls to occupy the width taken up by the calendar's container. But as that varied with the screen size, I had to reload the window and set up the width so that the scrollbar gets removed and the calendar width is the same on all screens. I also used Bootstrap theming and Font Awesome to change the colors of the month, week, today, and arrow buttons and of the meeting blocks, setting them to match the Fedora theme. Also, when the calendar was being loaded, the fetched meetings used to display very late, so my mentor suggested first rendering the empty calendar and showing the indeterminate spinner that is rendered by default when the modal loads, to be replaced or removed once the contents have arrived.

After the calendar implementation, I worked on the search bar, where I had to display the list of fetched minutes in a dropdown. It calls a backend function as we type characters and displays results in the dropdown. By default, we see 5 meetings in the dropdown along with a ‘show more’ button, which opens a modal listing the meetings that happened, with details like date, time, and channel name. These are the tasks I have completed so far under the guidance of my mentors.

Thank you for taking the time to read my blog, and have a great day!

The post Outreachy Project “Mote” progress update appeared first on Fedora Community Blog.

Copr - look back at 2021

Posted by Copr on January 20, 2022 12:00 AM

Let me sum up what the Copr team did during 2021.

Mock

We did eight releases of Mock.

We moved Mock’s wiki to GitHub Pages to allow indexing by search engines https://rpm-software-management.github.io/mock/ and created a Fedora-based Jekyll container for local documentation testing (https://github.com/praiskup/jekyll-github-pages-fedora-container).

We initiated the discussion about the default epel-8-* configs.

Copr

We did six releases of Copr and upgraded Copr servers to Fedora 35.

We wrote three “4 cool new projects to try in Copr” articles for Fedora Magazine.

We rebuilt all gems from Rubygems.org for Fedora Rawhide.

We started to use AWS Spot instances for builders.

We started to decommission APIv1 and APIv2.

You now have the option to run fedora-review after each build.

We created a new Ansible module, copr, which is available in the community.general collection.

You can now order your builds using batches, as sketched below.
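
For illustration, copr-cli exposes batches through the --with-build-id and --after-build-id options (the project name and build ID below are placeholders):

copr-cli build myproject second.src.rpm --after-build-id 2906411

This makes the new build start only after the batch containing the referenced build has finished.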

People started using discussions under projects. There are more than one hundred active discussions: https://discussion.fedoraproject.org/c/projects-in-copr/54

We redesigned Copr’s home page.

We worked on clean-up scripts resulting in 5+ TB cleaned from our backends.

We did six releases of resalloc with improvements for better throughput and reactions during peaks. https://github.com/praiskup/resalloc/

Copr’s servers got IPv6.

We did three releases of prunerepo https://pagure.io/prunerepo

We added lots of builders and some architectures, later re-added the ppc64le architecture, and started using AWS Spot instances.

We implemented an Error Budget, and our goals are:

  • 97% of builds of the copr-ping package finish within 6 minutes (this monitors queue length and builder speed)
  • 99.3% uptime of the CDN
  • 99.3% uptime of copr-backend (dnf repositories) (approx. 5h/month)
  • 97.5% uptime of copr-frontend (WebUI) (approx. 18h/month)

There is work in progress on Kerberos authentication in copr-cli.

Statistics:

  • Copr ran 2,900,000 builds.
  • People created 15,731 new projects.

Fedora

We created the Fedora Sponsors site to make it easy to find a sponsor.

We created a video explaining dist-git.

We proposed the Retired Packages change and it got accepted.

We created the license-validate tool.

Others

We did four releases of Tito.

We wrote an article about activating no-cost RHEL.

We wrote three articles about storing GPG keys in DNS and persuaded several distributions to put the records in DNS https://github.com/xsuchy/distribution-gpg-keys/#storing-keys-in-dns

There was a new modulemd-tools release with the “bld2repo” tool.

Outlook for 2022

We are in the middle of talks with IBM, which should result in the availability of native s390x builders in the early months of 2022.

We had an initial meeting about rebase-helper automatically opening PRs in src.fedoraproject.org. There is even some code written (by Michal Konečný), but the code is not integrated yet and there is no user-visible outcome so far. ETA is the first half of 2022.

The Python team deprecated pyp2rpm, and Karolina Surma is writing a new tool from scratch; she will use it to rebuild PyPI in Copr in a similar way to how we did Rubygems. ETA is the early months of 2022.

Unify forge events - when an “interesting” event happens on GitHub/GitLab/*, send a notification to Fedora Messaging in a unified format. Besides Copr, this will be useful for Packit too.

Other goals we have for 2022 - some of them are inherited from the previous year:

Finish rpm-spec-wizard https://github.com/xsuchy/rpm-spec-wizard

Integration with Koschei - automatic rebuild of your package in your project when a dependency changes

Enhance Mock --chain to try to set %bootstrap when the standard loop fails. When the set succeeds, rebuild the bootstrapped package again without the %bootstrap macro.
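
For context, a minimal example of what --chain already does today, rebuilding a set of source RPMs in dependency order and retrying failures as their dependencies become available (the chroot name is just an example):

mock -r fedora-35-x86_64 --chain first.src.rpm second.src.rpm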

Contribute to fedpkg/koji to have machine-readable output.

Building VM images for Cloud (osbuild-composer)

If you have any idea that can ease packaging (especially automation), do not hesitate to share it with us. We may do it!

The Community Packaging Team consists of Pavel Raiskup, Silvie Chlupova, Jakub Kadlcik, and Miroslav Suchý.

Curious what we did in 2020?

Fedora IoT Web Page - Initial Ideas

Posted by Emma Kidney on January 20, 2022 12:00 AM

Just an update on what I've been working on :) Click through to see my process and progress starting to create a web page mock-up for Fedora IoT as part of the Fedora Website Revamp!

As part of the Fedora Website Revamp, I got tasked with creating a mock-up of the Fedora IoT web page. I reference the Fedora IoT logo a lot here. I was unable to locate high quality SVGs, so I just made some quick vectors as placeholders.

Logos

I bounced off of what Mo was creating for the Fedora Workstation page in order to keep things cohesive. I looked at her mood board when brainstorming. I've included a picture below of my initial ideas:

Brainstorm

I wanted to do something with the cubes that are used in the Fedora IoT logo. I compared them to an image on the mood board of an LED keyboard.

Pattern

This was what I came up with! So then I placed it in the mock-up along with a placeholder hero image, fading out the gradient toward the end so it can merge into the rest of the page.

Pattern Mockup1

My second idea was to take the square and make it into a gradient cube and sort of morph it into a suitable background.

Here is the draft I made up following on from that idea:

Mockup

I tried to implement a duplicate of the cube within the pattern, but I don't think it works too well.

Mockup2

Mockup3

Considering that the IoT edition aims at edge devices and the like, I thought about including a Raspberry Pi as the hero image:

RPi idea

I currently don't have a high-quality image of a Raspberry Pi, but I was provided with an SVG created by a previous intern. With that as a placeholder, here are my current drafts for the Fedora IoT web page:

Mockup3

Mockup3

Obviously these are just drafts for the moment. More work needs to go into refining the details etc but I just wanted to give you an idea of what I have so far!

For my next steps, I'm in the process of learning Blender with the goal of possibly creating a 3D rendering of a Raspberry Pi. I am also going to refine the background with the gradient cuboid and plan out the rest of the web page.

If you have any feedback for me on my current drafts please don't hesitate to either message me @ ekidney:matrix.org or email me @ ekidney@redhat.com :) I am also active on the #design:fedoraproject.org channel.

How to try KDE Plasma 5.24 Beta on Fedora Kinoite

Posted by Timothée Ravier on January 19, 2022 07:30 PM

KDE Plasma 5.24 Beta is here! 🎉

On a classic Fedora system or on other distributions, you can try it with the repos listed on the KDE Wiki. Here is how to safely try it on Fedora Kinoite, using the packages for Fedora 35 made by Marc Deop, a member of the Fedora KDE SIG.

The latest version of KDE Plasma is usually available in Fedora Rawhide (unfortunately not right now), but rebasing the entire system to a development version involves a lot of uncertainty. Thus it is much safer to change only the KDE Plasma packages and frameworks while keeping a stable system as a base.

As always, make sure to backup your data before trying out beta software that could result in the loss of your personal cat picture collection.

Setting up the RPM repos

Add the following Fedora COPR repos (frameworks, plasma) on your host and inside a toolbox:

$ cat /etc/yum.repos.d/kde-beta.repo
[copr:copr.fedorainfracloud.org:marcdeop:frameworks]
name=Copr repo for frameworks owned by marcdeop
baseurl=https://download.copr.fedorainfracloud.org/results/marcdeop/frameworks/fedora-$releasever-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/marcdeop/frameworks/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1

[copr:copr.fedorainfracloud.org:marcdeop:plasma]
name=Copr repo for plasma owned by marcdeop
baseurl=https://download.copr.fedorainfracloud.org/results/marcdeop/plasma/fedora-$releasever-$basearch/
type=rpm-md
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://download.copr.fedorainfracloud.org/results/marcdeop/plasma/pubkey.gpg
repo_gpgcheck=0
enabled=1
enabled_metadata=1

Downloading the packages from a toolbox

Download the RPM packages from the repo:

[toolbox]$ echo "bluedevil breeze-cursor-theme breeze-gtk-common breeze-gtk-gtk3 breeze-gtk-gtk4 breeze-icon-theme kactivitymanagerd kde-cli-tools kdecoration kde-gtk-config kdeplasma-addons kdesu kf5-attica kf5-baloo kf5-baloo-file kf5-baloo-libs kf5-bluez-qt kf5-filesystem kf5-frameworkintegration kf5-frameworkintegration-libs kf5-kactivities kf5-kactivities-stats kf5-karchive kf5-kauth kf5-kbookmarks kf5-kcmutils kf5-kcodecs kf5-kcompletion kf5-kconfig-core kf5-kconfig-gui kf5-kconfigwidgets kf5-kcoreaddons kf5-kcrash kf5-kdbusaddons kf5-kdeclarative kf5-kded kf5-kdelibs4support kf5-kdelibs4support-libs kf5-kdesu kf5-kdnssd kf5-kdoctools kf5-kfilemetadata kf5-kglobalaccel kf5-kglobalaccel-libs kf5-kguiaddons kf5-kholidays kf5-khtml kf5-ki18n kf5-kiconthemes kf5-kidletime kf5-kimageformats kf5-kinit kf5-kio-core kf5-kio-core-libs kf5-kio-doc kf5-kio-file-widgets kf5-kio-gui kf5-kio-ntlm kf5-kio-widgets kf5-kio-widgets-libs kf5-kirigami2 kf5-kitemmodels kf5-kitemviews kf5-kjobwidgets kf5-kjs kf5-knewstuff kf5-knotifications kf5-knotifyconfig kf5-kpackage kf5-kparts kf5-kpeople kf5-kpty kf5-kquickcharts kf5-krunner kf5-kservice kf5-ktexteditor kf5-ktextwidgets kf5-kunitconversion kf5-kwallet kf5-kwallet-libs kf5-kwayland kf5-kwidgetsaddons kf5-kwindowsystem kf5-kxmlgui kf5-kxmlrpcclient kf5-modemmanager-qt kf5-networkmanager-qt kf5-plasma kf5-prison kf5-purpose kf5-solid kf5-sonnet-core kf5-sonnet-ui kf5-syntax-highlighting kf5-threadweaver khotkeys kinfocenter kmenuedit kscreen kscreenlocker ksystemstats kwayland-integration kwayland-server kwin kwin-common kwin-libs kwin-wayland kwin-x11 kwrited layer-shell-qt libkscreen-qt5 libksysguard libksysguard-common libkworkspace5 oxygen-sound-theme pam-kwallet plasma-breeze plasma-breeze-common plasma-browser-integration plasma-desktop plasma-desktop-doc plasma-discover plasma-discover-flatpak plasma-discover-libs plasma-discover-notifier plasma-disks plasma-drkonqi plasma-integration plasma-lookandfeel-fedora plasma-milou plasma-nm plasma-nm-openconnect plasma-nm-openvpn plasma-nm-vpnc plasma-pa plasma-systemmonitor plasma-systemsettings plasma-thunderbolt plasma-vault plasma-workspace plasma-workspace-common plasma-workspace-geolocation plasma-workspace-geolocation-libs plasma-workspace-libs plasma-workspace-wayland plasma-workspace-x11 polkit-kde powerdevil qqc2-desktop-style sddm-breeze sddm-kcm xdg-desktop-portal-kde" > packages.list
[toolbox]$ mkdir -p rpm && cd rpm
[toolbox]$ dnf download --arch=x86_64,noarch $(cat ../packages.list)

The list can be generated from the following commands:

[toolbox]$ dnf repository-packages copr:copr.fedorainfracloud.org:marcdeop:frameworks list | grep copr:copr.fedorainfracloud.org:marcdeop:frameworks | grep -vE "(debug|devel|\.src)" | cut -f1 -d\ | sed 's/\.x86_64//' | sed 's/\.noarch//' > frameworks.list
[toolbox]$ dnf repository-packages copr:copr.fedorainfracloud.org:marcdeop:plasma list | grep copr:copr.fedorainfracloud.org:marcdeop:plasma | grep -vE "(debug|devel|\.src)" | cut -f1 -d\ | sed 's/\.x86_64//' | sed 's/\.noarch//' > plasma.list
[toolbox]$ rpm -qa | sed "s/.noarch//" | sed "s/.x86_64//" | sed "s/\.fc35//" | sed "s/\-[^-]*$//" | sed "s/\-[^-]*$//" > installed.list
[toolbox]$ comm -12 <(cat installed.list | sort) <(cat frameworks.list plasma.list | sort) > packages.list

Overriding the packages

Use ostree to pin your current (hopefully working) deployment, and then use rpm-ostree to create a new deployment with (a lot of) package overrides:

[host]$ sudo ostree admin pin 0
[host]$ cd rpm
[host]$ sudo rpm-ostree override replace ./*.rpm
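
Before rebooting, you can check that the new deployment with the overrides sits on top of the pinned one (a standard rpm-ostree command, not part of the original instructions):

[host]$ rpm-ostree status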

And reboot.

Fedora Kinoite 35 with the KDE Plasma 5.24 Beta

Rolling back

You can either simply boot the previous deployment, or roll back to it:

[host]$ sudo rpm-ostree rollback

or reset all your overrides:

[host]$ sudo rpm-ostree override reset --all

and reboot.

Conclusion

This is just the first step to make trying Beta versions of KDE Plasma and Apps easier on Fedora Kinoite. There is a lot of work in progress to make this process much easier in the future.

Keeping POWER relevant in the open source world

Posted by Peter Czanik on January 19, 2022 01:00 PM

I’m not a POWER (or recently: Power) expert, only an enthusiastic user and advocate. Still, in the past couple of weeks a number of people from around the world have asked my opinion on how the POWER architecture could be kept relevant. This blog is really just an opinion, as I do not have the financial means to go ahead myself. It is full of compromises some people are not willing to make. However, I think this is the safest and fastest way forward.

Why? Is there a problem?

Power 10 was just released, and it is used in some of the most powerful servers ever. Power became an officially supported architecture in major Linux distributions. Why do I talk about becoming irrelevant? Is there really a problem?

Well, it all depends on the perspective. IBM treats Power as an enterprise platform, just like mainframes. And as long as those machines run AIX and IBM i with a couple of proprietary commercial applications, they are right. However, as far as I know, a good part of Power boxes run Linux. And Linux is a volume play. The more users and developers work on a platform, the better chance it has for survival. This is how 32-bit Power support was dropped from most distributions many years ago, even though some people still have Apple Macs and Genesi Pegasos boxes running. And this is how 64-bit big-endian support was removed from mainstream distributions as well.

Power 9 had a huge momentum; in most areas, software support for Power 9 is now on par with x86 and ARM. Unfortunately it is not enough to reach momentum, it needs to be maintained as well. Raptor Computing did a fantastic job making Power more affordable. Those machines reached key developers in major projects. However, their prices are going up due to supply chain issues, and they do not plan on Power 10 any time soon (as it would require using some closed-source software components in the firmware).

The OpenPower Foundation is planning to solve the volume play with its Power Pi project, but it is still years away. Currently there is no CPU that could be used on the planned $250 board, and normally it takes 1.5 years or more to go from planning to a mass-produced CPU.

You might say that there are free resources available for open source developers. There is GitHub CI support for Power, and various universities provide interested open source developers with remote access to Power servers. However, most developers consider having a system locally on their desk the best way to develop software. ARM and even RISC-V have a huge advantage with the average developer now.

The Power architecture is handled as a first-class citizen in most major Linux distributions and even in FreeBSD, but by the time we have affordable Power hardware to grow the number of Power users and developers, many of them might have already dropped this level of support for Power.

Affordable hardware quickly

The previous section of my blog can be easily summarized in a single sentence, TL;DR style: we need affordable Power hardware quickly to keep and expand the momentum. Obviously this requires compromises as well.

Keep the dream alive

Old Macs are big-endian, just like network processors from NXP. Some Power developers still want big-endian systems to keep the dream alive. But support for big-endian systems is mostly gone from Linux distributions, and when it comes to developing common utilities or even programming languages, most developers are no longer even aware that a world exists outside of little-endian. As much as I love the PowerPC laptop project, I see it now as a dead end: producing hardware for an ever-shrinking software ecosystem.

Power 9

As much as I’d love to see a Power 10 desktop, I do not expect it to be affordable any time soon. Right now only 15- and 30-core variants are available, for high-end servers. Even if Power 9 is not as power-efficient and can be outperformed in some cases by some of the latest x86 CPUs, it is already available, and at a relatively good price.

Not fully open source desktop board

Raptor Computing did a fantastic job at creating fully owner-controlled boards where even the smallest bit of software controlling the board is open source. However, even their smaller board is a full server board. Removing server components, like remote management capabilities, could bring costs down, as could using components with closed firmware. My experience with firmware is that open source does not necessarily mean better, rather the opposite (yes, I am aware that this statement contradicts my title: open source evangelist).

I would not want to compete with IBM or Raptor Computing on server boards. Both have done their optimizations, in enterprise manageability or in having a fully open source stack down to the lowest level. On the other hand, while using a server board in a desktop technically works, simplifying it down to the desktop, both on the hardware and the software side, can help make it more affordable and thus reach more users. Hopefully a lot more users.

Roadmap

Obviously, creating a more affordable Power 9 board quickly is just a first step. It helps to reach more users and developers than the current IBM and Raptor Computing offerings. It also helps to make sure that the efforts of the OpenPower Foundation are not wasted and that Power support stays a first-class citizen in major Linux distributions.

Power 10

I do not know the Power 10 CPU roadmap or whether there will be any smaller versions of Power 10, but I really hope so. Those could be used in desktop systems once available.

Power Pi

Of course the ultimate target is a board that anyone can afford without thinking twice, just like a Raspberry Pi. The Power Pi project is planning to fulfill this idea. It might arrive sooner or later than lower-end Power 10 systems.

Libre-soc

The Libre-soc project is also building a Power CPU with many groundbreaking ideas. Unfortunately, a generally available version is expected to arrive even later than a Power 10-based desktop or the Power Pi.

TL;DR;

Power itself is probably not in direct danger, but Linux and open source are definitely becoming an endangered species on Power due to the lack of a large active community. This situation could be improved with more affordable Power hardware. In the short term a Power 9 board could be used for this purpose; in the longer term there are many open possibilities ranging from SoCs to Power 10 (or later).

Obviously, my post only covers one aspect of the problem: keeping the open source community around POWER healthy. I have no idea about the engineering or financial side. I wonder about your opinion, and whether anyone will step up and implement something along these lines.

a PowerPC CPU on Mars :-)

Running Penpot locally, Docker-free, with Podman!

Posted by Máirín Duffy on January 19, 2022 02:59 AM

Penpot is a new free & open source design tool I have been using a lot lately. It is a tool the Fedora Design Team has picked up (we have a team area on the public https://penpot.app server where we collaborate and share files) and that we have been using for the Fedora website redesign work.

Penpot Logo (A pot of pencils next to the words "penpot" in a clean sans-serif font)

As I’ve used it over a longer period of time, I’ve noticed some performance issues (particularly around zooming and object selection/movement). Now, there are a number of factors on my side that might be causing them. For example, I have ongoing network issues (we spent part of Christmas break rewiring our house and wireless AP setup, which helped a bit, but now it seems my wireless card can’t switch APs if the laptop is moved between floors, lol). In any case, I knew that Penpot can be run locally using containers, and I wanted to try that to see if it helped with the performance issues I was seeing.

To get started, I hopped over to Penpot’s main GitHub repo and found the link for the Penpot Technical Guide. This is the exact document you need to get started running Penpot yourself.  The “Getting Started” chapter was all I needed.

As I skimmed through the instructions my heart sank just a little bit when I saw mention of docker-compose. Now, I am no super über container tech whiz by any stretch: I’m a UX designer. I understand the basics of the technology at an abstract level and even a little bit on the technical level but I am not a person who is all containers, all kubernetes, all-the-time. I do know enough to know that, at least historically, applications that require docker-compose to run are a Big Fat Headache if you prefer using Podman.

 

Podman logo: three selkie seals with purple eyes above the text "podman"

Since I first got my new laptop 1-2 months ago, I have been avoiding installing Docker on it. I really believe in the Podman project and its approach to containers. Being as stubborn as I am, I decided to maintain my Docker-free status and just go ahead and try to get Penpot running anyway, since I had heard about podman-compose and that there have been many improvements in compatibility for docker-compose-based applications since I last did any kind of deep dive on it (probably 2 years ago)….

…. and it worked!

Like, “Just Worked” worked. No debugging, no hacking, no sweat. So here you go:

 

Running Penpot using Podman on Fedora 35, Step-by-Step

1. Install Podman

Install podman, along with podman-compose, podman-docker (aliases docker commands for you), and cockpit to manage it because it’s awesome.

sudo dnf install podman cockpit cockpit-podman podman-compose podman-docker podman-plugins

2. Clone the Penpot code

Grab the code. Git clone the penpot repo locally. Let’s say to ~/penpot.

git clone https://github.com/penpot/penpot.git

3. Run podman-compose

Run podman-compose on the Penpot docker file. Go into the ~/penpot/docker/images directory, and run podman-compose.

cd penpot/docker/images
podman-compose -p penpot -f docker-compose.yaml up -d

Any time podman prompts you about which registry you should use (it asked me 5 times), choose the docker.io registries. I tried using quay.io and the Fedora registries, but they are missing some components and the setup seems to fail as a result.

The selection prompt looks something like this:

? Please select an image:
registry.fedoraproject.org/penpotapp/exporter:latest
registry.access.redhat.com/penpotapp/exporter:latest
▸ docker.io/penpotapp/exporter:latest
quay.io/penpotapp/exporter:latest

4. Create your Penpot user

Create your Penpot user. (Penpot’s container doesn’t have working SMTP to do this through the front-end.)

docker exec -ti penpot_penpot-backend_1 ./manage.sh create-profile

5. Use Penpot!

All that’s left to do is to visit your local penpot in your browser. The URL should be http://localhost:9001 – if you get a weird SSL error, it’s because you used https. I am assuming since you’re connecting to your own machine that it’s ok to forego SSL!
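
Later, when you want to stop your local Penpot, the same compose file works in reverse (this simply mirrors the up command from step 3):

podman-compose -p penpot -f docker-compose.yaml down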

Bonus: Create a desktop icon for your local Penpot

Wouldn’t it be nice if you could have a desktop icon to launch your local containerized Penpot? Yes, it would 🙂 So here are some (admittedly GNOME-centric, sorry!) steps on how to do that. (If this works for you on other desktops or if you have hints for other desktops, let us know in the comments!)

To do this, you’ll need to install a menu editor tool. I usually use a tool called alacarte, but while it’s available in Fedora’s DNF repos, it’s not in GNOME software. For your benefit I tested out one that is – it is called AppEditor.

Go ahead and install AppEditor from GNOME Software (you’ll need Flathub enabled) and open it up.

Screenshot of AppEditor showing all the fields listed in the table that follows below

You can use whichever browser you prefer, but I use Firefox so these instructions are for Firefox. 🙂 If you know how to do this for other browsers (I think Epiphany has a feature built-in to do this, but I decided not to do it because it doesn’t have access to my password manager) please drop a comment.

In AppEditor, click the “New Application” icon in the upper left, it looks like this: "New Entry" icon from AppEditor - a tiny document with a + symbol.

You’ll then get a form to fill out with the details of your new application launcher. Here’s how I filled mine out:

Form field Entry
Display name Penpot
Comment UI design and prototyping tool
Show in Launcher [On]
Command Line firefox %u --new-window http://localhost:9001
Working Directory [Leave blank]
Launch in Terminal [Off]
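
If you prefer to skip the GUI, the equivalent launcher is a small .desktop file; here is a sketch mirroring the table above (the path and filename are my suggestion, not a requirement):

[Desktop Entry]
Type=Application
Name=Penpot
Comment=UI design and prototyping tool
Exec=firefox %u --new-window http://localhost:9001
Terminal=false

Save it as ~/.local/share/applications/penpot.desktop and it will show up in the launcher.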

By default, your new application launcher will have a generic blue icon that looks like this:
Diamond-shaped icon in 3 shades of blue with two interlocking gears

You can use the Penpot favicon located at https://penpot.app/images/favicon.png – but it is small and lacking alpha transparency. I have scaled that logo slightly up (I know, it’s not smooth, sorry!) and added alpha to it so it will look nicer for you, download it here:

Penpot logo - a square penpot with three pencils in it, black and white lineart style

Here’s how it looks in action:

Screenshot of the GNOME application launcher with a Penpot app icon visible

Container Troubleshooting

If you run into any issues with your local Penpot, the cockpit & cockpit-podman packages you installed will be of great use.

Cockpit is a web-based open source OS management console. It has a plugin for managing Podman containers, and it is really, really nice.

Here’s how to use it – you just run this command as root to enable the cockpit service:

sudo systemctl enable --now cockpit.socket

Then visit http://localhost:9090, and log in using the same username and password you use to log into your Fedora desktop.

(If you have any issues, see the firewall suggestion on the Cockpit upstream get started instructions.)

Click on the “Podman containers” tab on the right. If you click on one of the running Penpot containers, you can get a console open into the container.

Screenshot of the Cockpit Podman web UI. It has a hidden list of container images at the top, and a list of 5 containers (penpot-backend, penpot-frontend, etc.) underneath. All containers are listed as running.

Screenshot of Cockpit Podman web UI. A running penpot container listing is expanded, and details, logs, and console tabs are visible.

That’s it!

I hope that this helps somebody out there! If you have more tips / tricks / ideas to share please share in the comments 🙂

Another use for the syslog-ng elasticsearch-http destination: Zinc

Posted by Peter Czanik on January 18, 2022 02:51 PM

There is a new drop-in replacement for Elasticsearch, at least if you don’t mind the limitations and the alpha status. However, it definitely lives up to the promise that it provides an Elasticsearch-compatible API for data ingestion. I tested it with the elasticsearch-http() destination of syslog-ng, and it worked perfectly after I modified the URL in the configuration example I found.

So, what is Zinc? It is a search engine written in Go that provides an Elasticsearch-compatible API for data ingestion. You cannot use Kibana with it, only its own web interface. If you are not into graphs and dashboards, and want to search text messages, then it is perfect. The application itself is a single binary and it does not have any external dependencies. It is lightweight and easy to configure, as practically there are no configuration options at all.

Note: Zinc is still in alpha state. There are no guarantees that later versions will be compatible at any level. Error messages can sometimes be cryptic and you might run into unexpected behavior.
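
To give an idea of the shape of such a configuration, here is a minimal sketch (the port, index name, and credentials are illustrative placeholders rather than values from the post; 4080 is Zinc's default port, and the endpoint assumes Zinc's Elasticsearch-compatible bulk API):

destination d_zinc {
  elasticsearch-http(
    url("http://localhost:4080/es/_bulk")  # assumed Zinc bulk endpoint
    index("syslog-ng")
    type("")
    user("admin")          # placeholder credentials
    password("password")
  );
};
log {
  source(s_sys);           # assumes an existing source called s_sys
  destination(d_zinc);
};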

You can read the rest of my blog at https://www.syslog-ng.com/community/b/blog/posts/another-use-for-the-syslog-ng-elasticsearch-http-destination-zinc

syslog-ng logo

Mindshare Committee Quarterly Report – Q4 2021

Posted by Fedora Community Blog on January 18, 2022 08:00 AM

The Mindshare Committee is establishing a Quarterly Report, with this post being our first edition. It covers activities from the Mindshare Committee and related teams for the months of October, November, and December of 2021. As we kick off these reports, we welcome feedback on how we can improve in the related Mindshare ticket.

Help Wanted in Q1 2022

Take a look at the links below and see how you can get involved. For tickets and Discussion threads, make sure to comment your interest in getting involved.

Events

Team Updates

CommOps Team

  • Working with the Community Outreach Revamp team to develop an updated docs page for CommOps
  • Community Blog Activity
    • 18 posts in October
    • 32 posts in November
    • 13 posts in December

Docs · Forum · Chat · Repo

Community Outreach Revamp (FCOR)

  • Working on updated documentation
  • Presented at the Fedora Linux 35 Release Party
  • Started Marketing Plan & brought in a new contributor to support that effort
  • Wrote Informal Polls deliverable from Objective
  • Revisiting the Ambassador Group cleanup
  • Planning Ambassador Monthly Call kick-off!

Docs · Chat

Council Representative

Docs · Forum · Chat · Repo

Design Team

Docs · Mailing List · Chat · Repo

Magazine Team

Fedora Magazine · Docs · Forum · Chat · Repo

Mentored Projects

  • Outreachy participation with 3 interns this round!
  • Fedora participation in Google Summer of Code being reconsidered – we have a discussion thread for the community to chime in
  • Researching more mentored project programs for Fedora to participate in – hoping to have something in place by 2023
  • Ashlyn Knox is working to complete her degree through a practicum with Fedora working on start.fedoraproject.org and UI/UX for the Flock to Fedora/Nest website
  • Emma Kidney, an intern from the CPE, started ramping up on the Fedora Website Revamp project as a UX designer
  • Kicking off development of a new unconference: Fedora Mentor Summit

Docs · Mailing List · Chat · Repo

Mindshare Committee

Docs · Mailing List · Chat · Repo

Websites & Apps Team & Revamp

  • Ongoing planning work on the Websites & Apps Team Revamp Objective
  • Two members of the Program Management team have joined the efforts
  • Outreachy Interns
    • One has joined to work on a revamped Mote
    • Another has joined to help us maintain Fedora Websites
  • Active discussions took place regarding the future of Easyfix
  • Fedora Websites codebase has been migrated from Python 2 to Python 3
  • Working on CI implementation with live builds on PRs for websites
  • Project management repositories have been set up on GitLab
  • Migration of repositories from Pagure/GitHub to GitLab is underway

Docs · Mailing List · Chat · Forum · Repo

Mindshare Updates from the Fedora Community Action and Impact Coordinator (FCAIC)

Thanks, we’ll see you online!

Thank you for reading the first edition of the Mindshare Committee Quarterly Report. We will publish our next report sometime in the beginning of April. Feel free to join us in the #fedora-mindshare chat on IRC/Element and drop by the Mindshare weekly meetings.

The post Mindshare Committee Quarterly Report – Q4 2021 appeared first on Fedora Community Blog.

Datanommer Migration

Posted by Fedora Infrastructure Status on January 17, 2022 11:00 AM

We are making some improvements to the performance of the Datanommer database, including adding the TimescaleDB plugin. Because this involved some breaking changes, a migration to a new database was required. The migration has already taken place, but the dependent apps will now need to point to the new …

Next Open NeuroFedora meeting: 17 January 1300 UTC

Posted by The NeuroFedora Blog on January 17, 2022 09:08 AM
Photo by William White on Unsplash

Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 17 January at 1300UTC in #fedora-neuro on IRC (Libera.chat). The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2022-01-17'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

Restarting and Offline Updates

Posted by Fedora Magazine on January 17, 2022 08:00 AM

A recurring question that goes around the internet is why Fedora Linux has to restart for updates. The truth is, Linux technically doesn’t need to restart for updates. But there is more than meets the eye. In this short guide we’ll look into why Fedora Linux asks you to restart for offline updates.

Offline Updates

The process of restarting, applying updates, and then restarting again is called Offline Updates. Your computer boots into a special safe mode, where all other services are disabled and network access is unavailable. It then applies the updates and restarts.

Why Offline Updates exist

Offline Updates is there to protect you. Computers have become way more complex in the past twenty years. Back in the day, it was possible to apply updates without too much worry since the system itself was smaller and less interconnected. Multitasking was also in its infancy, so users were not actually using the computer and updating it at the same time.

The Linux kernel can change files without restarting, but the services or applications using those files don't have the same luxury. If a file being used by an application changes while the application is running, then the application won't know about the change. This can cause the application to no longer work the same way. As such, the adage that "Linux doesn't need to restart to update" is a discredited meme. All Linux distributions should restart.

How Offline Updates work

For Offline Updates to work, there are a few components collaborating under the hood. First, there is the package manager that downloads updates and then stores them. It won't actually apply the updates directly, but it will flag for the next boot that there are updates to be applied.

The second part is done by systemd. When systemd starts, it will see if the package manager has prepared any updates. If that's the case, then systemd won't go into a full system start-up, but will instead start the package manager and apply the updates. Once the updates are completed, systemd will then restart the machine a final time.
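This handshake is specified by systemd (see the systemd.offline-updates(7) man page). Here is a minimal sketch of the flow; the staging path is illustrative, not the exact location any particular package manager uses:

# 1. The package manager stages the downloaded updates, then flags them:
ln -s /var/lib/my-updates /system-update        # staging path is hypothetical
# 2. At the next boot, systemd-system-update-generator sees /system-update and
#    diverts boot into system-update.target instead of the default target.
# 3. A service running under system-update.target applies the staged updates,
#    removes the flag so the following boot is normal, and reboots:
rm /system-update
systemctl reboot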

[Image: Software Center showing two pending updates for Firefox.]

Software update pending for Firefox. See how the Flatpak version of Firefox does not need to restart since Flatpaks are designed with reliability in mind.

Where Offline Updates comes from

This problem was first realized in 2009 and the early whiteboard discussions are still visible. Once a possible solution was designed, it was put into development.

Still, it required multiple components to work together. Changes had to be made to systemd to support this special start-up flow and package managers had to understand the process as well. After that, it was important for users to have a supporting UI, which was included with GNOME Software Center in 2012 and with KDE Discover in 2021.


Fedora 18 official artwork. Very wild, but also very reliable.

Finally, the feature was officially deployed in Fedora 18, making Fedora Linux the first distribution that does everything it can to ensure that your system is reliable and stable. It was a long road, but this functionality has now been with us for almost 10 years.

Doing live updates

Now that you’ve been told about Offline Updates and their importance, you’ll of course never do them again… but what if you do? Fedora Linux will not stop you and since we’ve all used DNF at some point, it might be good to talk about live updates as well.

Nothing bad happens

First, there is a good chance that nothing bad happens. Perhaps it's just a minor update, or the application that it affects is not running at the moment. There will be little issue updating SDL, for example, when you're not running a game.

Do keep in mind that running programs may still carry the vulnerabilities that their previous version contained. If you update an application without restarting it, then you're still running the old version with its vulnerabilities.

Many expert Linux users, like those who professionally maintain servers, will often instinctively know which applications can be updated without any risk. For this specific purpose, you can also install only security updates, which is discussed in another article. For larger updates, even professionals are encouraged to use dnf offline-upgrade through the terminal, as shown below.
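For example, assuming the system-upgrade plugin from dnf-plugins-core is installed, such a terminal session could look roughly like this:

$ sudo dnf upgrade --security         # live update, security fixes only
$ sudo dnf offline-upgrade download   # stage all pending updates
$ sudo dnf offline-upgrade reboot     # reboot and apply them offline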

Firefox restart required

The most common sign of instability is Firefox warning you. When Firefox detects that its files have been updated, it will force you to restart the browser, since it can't run reliably without completely restarting.

This also highlights a happy recovery: A complex and security-critical application like Firefox will help you, shielding you from potential crashes or vulnerabilities. While many might consider this a big nuisance, it could be far worse.


Demonstrated with Ubuntu 20.04.3 in GNOME Boxes. I tried to trigger this error using Fedora Linux 34 & 35, but in both cases it completely crashed Firefox. Just to drive the point home: this recovery scenario is a fluke.

Crashes

Not every application can recover so gracefully, though, since most will just crash. Firefox might also still crash. While many of you will be familiar with Firefox gracefully terminating, this is still an exception to the rule.

If the component in question is the X window server, or GNOME Shell, then your screen might turn completely black. In many cases, you'll still be able to complete the updates, but there is no way to know that for sure. At that point, the best course of action is to switch to a terminal view.

You can use Ctrl+Alt+F3 to enter a text-only instance of Fedora Linux. Once you log in there, you can use a terminal application like top to see if the update has completed. You can then shut down all processes and restart the computer.


In top you can filter for certain processes by pressing ‘O’ and then typing a filter like ‘COMMAND=dnf’
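If you'd rather not drive top interactively, a couple of plain commands give the same picture:

$ pgrep -a dnf     # list any running dnf processes with their command lines
$ journalctl -f    # follow the system journal to watch the update's progress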

Blackouts

At this point, there is still hope. Start your computer and see if the system comes back to life. If the system boots into a graphical environment, everything will likely be fine. If not, you'll have to enter the text-only interface (TTY), where you should look at the history of DNF to see what happened under the hood.

$ dnf history list
$ dnf history info {LAST ITEM}
$ dnf history redo {LAST ITEM}

Additional information can be found in the DNF documentation. There is no guarantee that repeating the last update-action won't cause the same problems, but it's the best you can try at this point. Removing third-party drivers (like those from Nvidia) might also help, as these are known to cause update-related issues.

System bricking

Finally, you can hard-brick your system. If the component that crashes happens to be DNF or systemd, then it might not be possible for the system to continue its update process. When this happens, even restarting the machine will not be enough to restore it.

There is no one answer about what to do now. First, you should get a USB stick with Fedora Linux. You could then try to recover the system using systemd-nspawn, but that is highly technical; a rough sketch follows below. Regular users might just as well reinstall Fedora Linux and start from scratch.
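For the adventurous, the rough shape of such a rescue session from the live environment looks like this; the device paths are examples and will differ on your machine:

$ sudo mount /dev/mapper/fedora-root /mnt/sysimage   # your root volume
$ sudo mount /dev/vda1 /mnt/sysimage/boot            # hypothetical boot partition
$ sudo systemd-nspawn -D /mnt/sysimage               # shell inside the broken system
# dnf history list    <- run inside the container to inspect the failed transaction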

Keep in mind that all your files are still safe. Booting from a USB stick will not damage them, and if you make sure that you don't overwrite your existing /home partition, then all your personal data will still be there afterwards.

Closing words

Direct updates are a roll of the dice, and while you might get lucky a lot of the time, we tend to overestimate ourselves and the chances we have. Many of you have never experienced a system bricking, but on the whole those stories are very common on social media. As such, it's important to spread the word. Encourage others to restart their computer to apply offline updates, and be careful yourself when you apply updates directly.

In the future, problems like these might go away entirely. Systems like Flatpak and Fedora Silverblue have technologies that make these kinds of crashes nigh impossible, and the Linux desktop is slowly moving in that direction. The future is bright, but for the time being we should make do with a progress bar, just like some other operating systems.


Got any personal update-related horror stories? Feel free to share them in the comments. I would also like to point out that I like memes just as much as the next guy… but they should be jokes, not technical advice.

Boot Guard and PSB have user-hostile defaults

Posted by Matthew Garrett on January 17, 2022 04:37 AM
Compromising an OS without it being detectable is hard. Modern operating systems support the imposition of a security policy or the launch of some sort of monitoring agent sufficiently early in boot that even if you compromise the OS, you're probably going to have left some sort of detectable trace[1]. You can avoid this by attacking the lower layers - if you compromise the bootloader then it can just hotpatch a backdoor into the kernel before executing it, for instance.

This is avoided via one of two mechanisms. Measured boot (such as TPM-based Trusted Boot) makes a tamper-proof cryptographic record of what the system booted, with each component in turn creating a measurement of the next component in the boot chain. If a component is tampered with, its measurement will be different. This can be used to either prevent the release of a cryptographic secret if the boot chain is modified (for instance, using the TPM to encrypt the disk encryption key), or can be used to attest the boot state to another device which can tell you whether you're safe or not. The other approach is verified boot (such as UEFI Secure Boot), where each component in the boot chain verifies the next component before executing it. If the verification fails, execution halts.

In both cases, each component in the boot chain measures and/or verifies the next. But something needs to be the first link in this chain, and traditionally this was the system firmware. Which means you could tamper with the system firmware and subvert the entire process - either have the firmware patch the bootloader in RAM after measuring or verifying it, or just load a modified bootloader and lie about the measurements or ignore the verification. Attackers had already been targeting the firmware (Hacking Team had something along these lines, although this was pre-secure boot so just dropped a rootkit into the OS), and given a well-implemented measured and verified boot chain, the firmware becomes an even more attractive target.

Intel's Boot Guard and AMD's Platform Secure Boot attempt to solve this problem by moving the validation of the core system firmware to an (approximately) immutable environment. Intel's solution involves the Management Engine, a separate x86 core integrated into the motherboard chipset. The ME's boot ROM verifies a signature on its firmware before executing it, and once the ME is up it verifies that the system firmware's bootblock is signed using a public key that corresponds to a hash blown into one-time programmable fuses in the chipset. What happens next depends on policy - it can either prevent the system from booting, allow the system to boot to recover the firmware but automatically shut it down after a while, or flag the failure but allow the system to boot anyway. Most policies will also involve a measurement of the bootblock being pushed into the TPM.

AMD's Platform Secure Boot is slightly different. Rather than the root of trust living in the motherboard chipset, it's in AMD's Platform Security Processor which is incorporated directly onto the CPU die. Similar to Boot Guard, the PSP has ROM that verifies the PSP's own firmware, and then that firmware verifies the system firmware signature against a set of blown fuses in the CPU. If that fails, system boot is halted. I'm having trouble finding decent technical documentation about PSB, and what I have found doesn't mention measuring anything into the TPM - if this is the case, PSB only implements verified boot, not measured boot.

What's the practical upshot of this? The first is that you can't replace the system firmware with anything that doesn't have a valid signature, which effectively means you're locked into firmware the vendor chooses to sign. This prevents replacing the system firmware with either a replacement implementation (such as Coreboot) or a modified version of the original implementation (such as firmware that disables locking of CPU functionality or removes hardware allowlists). In this respect, enforcing system firmware verification works against the user rather than benefiting them.
Of course, it also prevents an attacker from doing the same thing, but while this is a real threat to some users, I think it's hard to say that it's a realistic threat for most users.

The problem is that vendors are shipping with Boot Guard and (increasingly) PSB enabled by default. In the AMD case this causes another problem - because the fuses are in the CPU itself, a CPU that's had PSB enabled is no longer compatible with any motherboards running firmware that wasn't signed with the same key. If a user wants to upgrade their system's CPU, they're effectively unable to sell the old one. But in both scenarios, the user's ability to control what their system is running is reduced.

As I said, the threat that these technologies seek to protect against is real. If you're a large company that handles a lot of sensitive data, you should probably worry about it. If you're a journalist or an activist dealing with governments that have a track record of targeting people like you, it should probably be part of your threat model. But otherwise, the probability of you being hit by a purely userland attack is so ludicrously high compared to you being targeted this way that it's just not a big deal.

I think there's a more reasonable tradeoff than where we've ended up. Tying things like disk encryption secrets to TPM state means that if the system firmware is measured into the TPM prior to being executed, we can at least detect that the firmware has been tampered with. In this case nothing prevents the firmware being modified, there's just a record in your TPM that it's no longer the same as it was when you encrypted the secret. So, here's what I'd suggest:

1) The default behaviour of technologies like Boot Guard or PSB should be to measure the firmware signing key and whether the firmware has a valid signature into PCR 7 (the TPM register that is also used to record which UEFI Secure Boot signing key is used to verify the bootloader).
2) If the PCR 7 value changes, the disk encryption key release will be blocked, and the user will be redirected to a key recovery process. This should include remote attestation, allowing the user to be informed that their firmware signing situation has changed.
3) Tooling should be provided to switch the policy from merely measuring to verifying, and users at meaningful risk of firmware-based attacks should be encouraged to make use of this tooling.

This would allow users to replace their system firmware at will, at the cost of having to re-seal their disk encryption keys against the new TPM measurements. It would provide enough information that, in the (unlikely for most users) scenario that their firmware has actually been modified without their knowledge, they can identify that. And it would allow users who are at high risk to switch to a higher security state, and for hardware that is explicitly intended to be resilient against attacks to have different defaults.
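For a sense of what the re-sealing step looks like in practice today, systemd can already enroll a LUKS key against PCR 7 from userspace. A rough sketch (the partition path is an example, and you'd need systemd-cryptenroll and tpm2-tools installed):

$ sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3
$ tpm2_pcrread sha256:0,7    # inspect the current PCR 0 and PCR 7 values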

This is frustratingly close to possible with Boot Guard, but I don't think it's quite there. Before you've blown the Boot Guard fuses, the Boot Guard policy can be read out of flash. This means that you can drop a Boot Guard configuration into flash telling the ME to measure the firmware but not prevent it from running. But there are two problems remaining:

1) The measurement is made into PCR 0, and PCR 0 changes every time your firmware is updated. That makes it a bad default for sealing encryption keys.
2) It doesn't look like the policy is measured before being enforced. This means that an attacker can simply reflash modified firmware with a policy that disables measurement and then make a fake measurement that makes it look like the firmware is ok.

Fixing this seems simple enough - the Boot Guard policy should always be measured, and measurements of the policy and the signing key should be made into a PCR other than PCR 0. If an attacker modified the policy, the PCR value would change. If an attacker modified the firmware without modifying the policy, the PCR value would also change. People who are at high risk would run an app that would blow the Boot Guard policy into fuses rather than just relying on the copy in flash, and enable verification as well as measurement. Now if an attacker tampers with the firmware, the system simply refuses to boot and the attacker doesn't get anything.

Things are harder on the AMD side. I can't find any indication that PSB supports measuring the firmware at all, which obviously makes this approach impossible. I'm somewhat surprised by that, and so wouldn't be surprised if it does do a measurement somewhere. If it doesn't, there's a rather more significant problem - if a system has a socketed CPU, and someone has sufficient physical access to replace the firmware, they can just swap out the CPU as well with one that doesn't have PSB enabled. Under normal circumstances the system firmware can detect this and prompt the user, but given that the attacker has just replaced the firmware we can assume that they'd do so with firmware that doesn't decide to tell the user what just happened. In the absence of better documentation, it's extremely hard to say that PSB actually provides meaningful security benefits.

So, overall: I think Boot Guard protects against a real-world attack that matters to a small but important set of targets. I think most of its benefits could be provided in a way that still gave users control over their system firmware, while also permitting high-risk targets to opt-in to stronger guarantees. Based on what's publicly documented about PSB, it's hard to say that it provides real-world security benefits for anyone at present. In both cases, what's actually shipping reduces the control people have over their systems, and should be considered user-hostile.

[1] Assuming that someone's both turning this on and actually looking at the data produced


Episode 306 – Open source isn’t broken, it’s an experience

Posted by Josh Bressers on January 17, 2022 12:01 AM

Josh and Kurt talk about the faker and colors NPM events. There is a lot of discussion around open source being broken or somehow failing because of these events. The real answer is open source is an experience. How we interact with our dependencies determines what the experience looks like.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2685-2" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_306_Open_source_isnt_broken_its_an_experience.mp3?_=2" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_306_Open_source_isnt_broken_its_an_experience.mp3</audio>

Show Notes

How to verify the server certificate in Poezio

Posted by Casper on January 16, 2022 04:00 PM

Let's look at how Poezio behaves during an SSL/TLS connection to the server, with its default configuration.

If no certificate fingerprint is stored in the config file, Poezio will ask the user to validate the fingerprint of the certificate received at connection time. Poezio then stores the fingerprint in its config file (logically enough). On the next launch, it compares the fingerprint of the certificate received during the SSL/TLS connection with the one it stored, to see whether they are identical.

The user cannot easily, if at all, verify the fingerprint of the certificate they are asked to validate. That is awkward for the first launch. And for subsequent launches, the only relevant information Poezio displays is that the certificate has changed (if it has actually changed).

By default, the certificate is not checked against the system CA store.

This way of doing things may seem surprising, yet it is known as Trust On First Use. It is also put into practice by the Gemini protocol, a derivative of HTTP.

Advantages:

  • If the XMPP server uses the same private key for 10 years
  • If the server uses a self-signed certificate
  • If you run your own XMPP server, you can compare the hash manually

Drawbacks:

  • You have to ask the sysadmin for the hash if your account is hosted
  • If the private key changes every 6 months, the hash changes every 6 months
  • No verification against the system CA store
  • If you connect through Tor, this method is unusable
  • Likewise if you connect through any untrusted network: mobile line, wifi hotspot, cheap VPN

How to retrieve the fingerprint on the server side

As part of my audit, I had to find a way to check whether the hash displayed by Poezio matched the private key on my server. Since I have access to the server, I was able to work out exactly which hash was being displayed.

The following command works on the private key, or more precisely on the "public" part of the key. It is converted to the DER format, which is a binary format, then piped into the openssl-dgst tool, which computes the SHA256 digest. The result would be identical with a pipeline into the "sha256sum" command:

$ openssl pkey -in file.key -pubout -outform DER | openssl dgst -sha256 -hex -c | awk '{ print toupper ($2) }'

This command has many drawbacks: you need root access to the server, or you have to email the admins (and send them the command). Moreover, if the private key (the .key file) changes once, the command has to be run again. And if the private key happens to change regularly (every 6 months, for example), the command has to be rerun every time. Once again, this solution is hard to use in the long term.
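One possible client-side workaround, sketched here under the assumption that the network path between you and the server is trustworthy (precisely the caveat TOFU has): fetch the certificate over the wire and hash its public key the same way. example.net stands in for your XMPP server:

$ openssl s_client -connect example.net:5222 -starttls xmpp </dev/null 2>/dev/null \
    | openssl x509 -pubkey -noout \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 -hex -c \
    | awk '{ print toupper ($2) }'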

Fingerprint at casperlefantom.net

Not really being in agreement with the TOFU/TUFU method, I thought I could provide a way for users to perform the verification manually. I had some fun putting a static file online that contains all the information.

My intuition is that no other administrator will ever do the same. The demand is extremely niche; see for yourself:

https://dl.casperlefantom.net/pub/ssl/fingerprint-for-poezio-client.txt

How to disable certificate fingerprint verification

In Poezio version 0.13.1, it is possible to disable the comparison method (TOFU/TUFU) and return to a more classic system. Just edit the config file:

ignore_certificate = True

Or run this command in Poezio:

/set ignore_certificate True

Next, you should enable verification against the system CA store. The path below is valid for Fedora Linux systems. You can point to your own personal CA instead, if you have one:

ca_cert_path = /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt

Command in Poezio:

/set ca_cert_path /etc/pki/ca-trust/extracted/openssl/ca-bundle.trust.crt

Finally, you can clean up the config file by removing the line that holds the certificate fingerprint stored by Poezio:

certificate = 78:2F:71:43:1F:9B...

Reference: TLS in poezio

Configuring Rails system tests for headless and headfull browsers

Posted by Josef Strzibny on January 16, 2022 12:00 AM

Want to run your system tests headless? And why not both ways? Here’s how to extend Rails tasks to run your system tests with the driver of your choice.

Rails 6 came with system tests baked-in, and so if you generate a new Rails app today, you end up with the following setup code:

# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]
end

You’ll need some dependencies for this to work. If you don’t have them, add the following to your Gemfile:

# Gemfile
...
group :test do
  # Use system testing [https://guides.rubyonrails.org/testing.html#system-testing]
  gem "capybara", ">= 3.26"
  gem "selenium-webdriver", ">= 4.0.0"
  gem "webdrivers"
end

A lot of people want to switch the default driver to something else, especially to the headless Chrome for faster tests.

It’s surprisingly easy. You only need to replace the driver’s name in using parameter:

# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :headless_chrome, screen_size: [1400, 1400]
end

But by making this change, you lose the ability to watch your tests visually. So why not have both?

Let’s set the driver based on DRIVER environment variable:

# test/application_system_test_case.rb
require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  DRIVER = if ENV["DRIVER"]
    ENV["DRIVER"].to_sym
  else
    :headless_chrome
  end

  driven_by :selenium, using: DRIVER, screen_size: [1400, 1400]
end

I kept headless Chrome as the default, since that's what you want to run in CI.

To run system tests with a different driver, we just set that variable on the command line:

$ DRIVER=chrome rails test:system

Pretty nice, yet we can do more. We can have a fancy new Rake task to do this job for us:

# lib/tasks/test.rake
namespace :test do
  namespace :system do
    task :with, [:driver] => :environment do |task, args|
      ENV["DRIVER"] = args[:driver]
      Rake::Task["test:system"].invoke
    end
  end
end

This task sets the ENV variable for us and then invokes the regular Rails test:system task. Nothing less, nothing more.

By defining the driver argument, we can now choose the driver nicely on the command line:

$ rails test:system:with[chrome]
$ rails test:system:with[firefox]
$ rails test:system:with[headless_chrome]

If, on the other hand, we want to define exact tasks for particular drivers, we can do this too:

# lib/tasks/test.rake
namespace :test do
  namespace :system do
    task :chrome => :environment do |task, args|
      ENV["DRIVER"] = "chrome"
      Rake::Task["test:system"].invoke
    end
  end
end

Then we can run the test:system:chrome task for the headfull Chrome:

$ rails test:system:chrome

And that’s it! Develop with headless browsers and admire your work once in a while with a full experience!

Maintainable Rails system tests with page objects

Posted by Josef Strzibny on January 15, 2022 12:00 AM

Rails system tests often depend on input labels and CSS selectors. To make our tests more maintainable, we can isolate layout changes within page objects.

This post is about an idea I had a long time ago and came back to recently. It's from a similar category as my idea for Rails contexts, so it might not be 100% foolproof, and I am looking for feedback.

So what is it about? What’s a page object?

A regular system test might look like this:

require "application_system_test_case"

class RegisterUserTest < ApplicationSystemTestCase
  setup do
    @user = users(:unregistered)
  end

  test "registers an account" do
    visit new_user_registration_path

    fill_in "Email", with: @user.email
    fill_in "Password", with: @user.password
    fill_in "Password confirmation", with: @user.password

    click_on "Sign up"

    assert_selector "h1", text: "Dashboard"
  end
end

It’s nice and tidy for something small. But if we start reusing specific flows and selectors, we would have to update many places whenever we change a particular screen.

This might not be a big deal since we can extract private methods and helpers. But it got me thinking.

What if we isolate actions and assertions of a particular screen in a page object?

An example of the registration and dashboard pages could look like this:

# test/pages/test_page.rb
class TestPage
  include Rails.application.routes.url_helpers

  attr_accessor :test, :page

  def initialize(system_test, page)
    @test = system_test
    @page = page
  end

  def visit
    @test.visit page_path
  end

  def page_path
    raise "Override this method with the page path"
  end
end

# test/pages/registration_page.rb
class RegistrationPage < TestPage
  def register(user)
    @test.fill_in "Email", with: user.email
    @test.fill_in "Password", with: "12345671"
    @test.fill_in "Password confirmation", with: "12345671"

    @test.click_on "Sign up"
  end

  def page_path
    new_user_registration_path
  end
end

# test/pages/dashboard_page.rb
class DashboardPage < TestPage
  def assert_logged_in
    @test.assert_selector "h1", text: "Dashboard"
  end

  def page_path
    dashboard_path
  end
end

The basic idea is that a page under test defines its actions (fill_in_user_email, register) and assertions (assert_logged_in). Whenever the fields change or we have to use a different selector, we have one and only one place to update. Any test that uses such a page wouldn’t have to be changed at all.

When we initialize a new page we have to pass the test and page contexts (here system_test and page) to use the testing API from within these page objects.

Since I want to group these pages, I also have to add the test/pages path to the testing configuration for Zeitwerk to pick up:

# config/environments/test.rb
require "active_support/core_ext/integer/time"

Rails.application.configure do
  ...

  config.autoload_paths << "#{Rails.root}/test/pages"
end

This allows us to write the registration test as:

require "application_system_test_case"

class RegisterUserTest < ApplicationSystemTestCase
  setup do
    @user = users(:unregistered)
  end

  test "registers an account" do
    registration = RegistrationPage.new(self, page)
    registration.register(@user)

    dashboard = DashboardPage.new(self, page)
    dashboard.assert_logged_in
  end
end

I find grouping into pages rather than private methods cleaner, and it makes the tests themselves much shorter.

Let’s say that I am now adding internalization to pages. Instead of going through all my system tests, I only have to open and edit the relevant pages:


# test/pages/registration_page.rb
class RegistrationPage < TestPage
  def register(user)
    @page.fill_in I18n.t("attributes.user.email"), with: user.email
    @page.fill_in I18n.t("attributes.user.password"), with: user.password
    @page.fill_in I18n.t("attributes.user.password_confirmation"), with: user.password

    @page.click_on I18n.t("buttons.register")
  end

  def page_path
    new_user_registration_path
  end
end

The test itself stayed the same.

However, I also felt there is still some disconnect between going from page to page. So another idea is to introduce a TestFlow object that would keep the whole flow together:

class TestFlow
  attr_accessor :test, :system_page, :page, :history

  def initialize(system_test, system_page, start_page_class)
    @test = system_test
    # keep a handle on the Capybara page so later page objects can be built
    @system_page = system_page
    @page = start_page_class.new(system_test, system_page)
    @page.visit
    @history = [@page]
  end

  def visit(page_class)
    @page = page_class.new(@test, @system_page)
    @page.visit
    @history << @page
  end

  def transition(page_class)
    @page = page_class.new(@test, @system_page)
    assert_transition
    @history << @page
  end

  def assert_transition
    @test.assert_equal @test.current_path, @page.page_path
  end
end

The idea is that we start with one page in the beginning and then change pages with a transition call to ensure we indeed arrived on the page we originally wanted. The @history then remembers the flow and lets us build other features like going back.

To use it, I’ll make a small helper method in application_system_test_case.rb:

require "test_helper"

class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :chrome, screen_size: [1400, 1400]

  def start_flow(start_page)
    TestFlow.new(self, page, start_page)
  end
end

And then use it by starting flow in setup and calling transition in between the screens:

require "application_system_test_case"

class RegisterUserTest < ApplicationSystemTestCase
  setup do
    @user = users(:unregistered)
    @flow = start_flow(RegistrationPage)
  end

  test "registers an account" do
    @flow.page.register(@user)
    @flow.transition(DashboardPage)
    @flow.page.assert_logged_in
  end
end

That’s it. There are no new frameworks or anything like that, just a different take on organizing system tests. Let me know what you think – especially if you think it’s a terrible idea.

Friday’s Fedora Facts: 2022-02

Posted by Fedora Community Blog on January 14, 2022 08:54 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference            | Location                 | Date      | CfP
Dutch PHP Conference  | Amsterdam, NL            | 1–2 Jul   | closes 30 Jan
Open Source Summit NA | Austin, TX, US & virtual | 21–24 Jun | closes 14 March

Help wanted

Upcoming test days

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID  | Component | Status
1955416 | shim      | NEW
2032528 | flatpak   | NEW

Upcoming meetings

Releases

Release       | open bugs
F34           | 5435
F35           | 2629
F36 (rawhide) | 6983

Fedora Linux 36

Schedule

  • 2022-01-18 — Deadline for Self-Contained Change proposals
  • 2022-01-19 — Mass rebuild begins
  • 2022-02-08 — F36 branches from Rawhide; Rawhide begins F37 development

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Users are administrators by default in the installer GUI. | Self-Contained | FESCo #2708
Enable fs-verity in RPM | System-Wide | FESCo #2711
Make Rescue Mode Work With Locked Root | System-Wide | FESCo #2713
GHC compiler parallel version installs | Self-Contained | FESCo #2715
Keylime subpackaging and agent alternatives | Self-Contained | FESCo #2716
Golang 1.18 | System-Wide | FESCo #2720
DIGLIM | System-Wide | FESCo #2721
LLVM 14 | System-Wide | Approved
Ruby 3.1 | System-Wide | Approved
%set_build_flags for %build and %check | System-Wide | Approved
Default To Noto Fonts | System-Wide | FESCo #2729
Hunspell Dictionary dir change | System-Wide | FESCo #2730
Relocate RPM database to /usr | System-Wide | FESCo #2731
No ifcfg by default | Self-Contained | FESCo #2732
Django 4.0 | Self-Contained | FESCo #2733
GNU Toolchain Update | System-Wide | Announced
New requirements for akmods binary kernel modules for Silverblue / Kinoite support | Self-Contained | Announced
Malayalam Default Fonts Update | Self-Contained | Announced
Ibus-table cangjie default for zh_HK | Self-Contained | Announced

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-02 appeared first on Fedora Community Blog.

CPE Weekly Update – Week of January 10th – 14th

Posted by Fedora Community Blog on January 14, 2022 10:00 AM

This is a weekly report from the CPE (Community Platform Engineering)
Team. If you have any questions or feedback, please respond to this
report or contact us on the #redhat-cpe channel on libera.chat
(https://libera.chat/).

We (CPE team) will be joining Fedora Social Hour on Jan 27th.
Looking forward to seeing a lot of you!
(https://discussion.fedoraproject.org/t/join-us-for-fedora-social-hour-every-week/18869/46)

Highlights of the week

Infrastructure & Release Engineering

Goal of this initiative

The purpose of this team is to take care of day-to-day business regarding
CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS
infrastructure and for preparing things for the new Fedora release
(mirrors, mass branching, new namespaces, etc.).
The ARC (a subset of the team) investigates possible initiatives that CPE might
take on.

Update

Fedora Infra

  • Mostly quiet holidays with only minor reboots, etc
  • More koji upgrades: all aarch64/armv7 done, s390x kvm and hubs left to do
  • Container builds broken, needs more eyes
  • Centos cert fetching broken, needs more eyes

CentOS Infra including CentOS CI

  • CentOS Linux 8 EOL plan
  • Hardware issues (storage box, 64 compute nodes for CI infra)
  • Kmods SIG DuD discussion (koji plugin vs external script)
  • CI storage for ocp/openshift migration planning

Release Engineering

  • Rawhide compose issues, but we got a good compose yesterday after a bunch of work
  • Mass rebuild of F36 next week

CentOS Stream

Goal of this initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this
new distribution a reality. The goal of this initiative is to prepare
the ecosystem for the new CentOS Stream.

Updates

  • Finished our January planning, working on:
    • Preparing new version of Content Resolver for production, finishing up stuff around the buildroot integration
    • Exploring things around increasing compose quality
    • Business as usual

Datanommer/Datagrepper V.2

Goal of this initiative

The datanommer and datagrepper stacks are currently relying on fedmsg which
we want to deprecate.
These two applications need to be ported off fedmsg to fedora-messaging.
As these applications are ‘old-timers’ in the fedora infrastructure, we would
also like to look at optimizing the database or potentially redesigning it to
better suit the current infrastructure needs.
For a phase two, we would like to focus on a DB overhaul.

Updates

  • The data is migrated; we now need to deploy the new code to production.

CentOS Duffy CI

Goal of this initiative

Duffy is a system within CentOS CI infra which allows tenants to provision and
access bare-metal resources of multiple architectures for the purposes of
CI testing.
We need to add the ability to check out VMs in CentOS CI in Duffy. We have
an OpenNebula hypervisor available, and have started developing playbooks which
can be used to create VMs using the OpenNebula API, but due to the current state
of how Duffy is deployed, we are blocked on the new dev work to add the
VM checkout functionality.

Updates

  • Work on backend -> modules to provision vms
  • Legacy API integration

Image builder for Fedora IoT

Goal of this initiative

Integration of Image Builder as a service with Fedora infra to allow Fedora IoT to migrate their pipeline to Fedora infra.

Updates

  • Team forming this week. Currently waiting on work from the Image Builder team to wrap up, which will unblock us from moving forward

Bodhi

Goal of this initiative

This initiative is to separate Bodhi into multiple sub packages,
fix integration and unit tests in CI, fix dependency management,
and automate part of the release process.
Read ARC team findings in detail.

Updates

  • Team is forming this week and will officially be launching work next Monday

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest
Group that creates, maintains, and manages a high quality set of additional
packages for Enterprise Linux, including, but not limited to,
Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never
conflict with or replace packages in the base Enterprise Linux distributions.
EPEL uses much of the same infrastructure as Fedora, including buildsystem,
bugzilla instance, updates manager, mirror manager and more.

Updates

  • epel9 is growing rapidly:
    • 2589 packages available (355 more in testing)
    • 1158 source rpms (225 more in testing)
  • Positive community response
  • Ongoing documentation improvements

Kindest regards,
CPE Team

The post CPE Weekly Update – Week of January 10th – 14th appeared first on Fedora Community Blog.

Human Interface Guidelines, libadwaita 1.0 edition

Posted by Allan Day on January 13, 2022 03:06 PM

After a lot of hard work, libadwaita 1.0 was released on the last day of 2021. If you haven’t already, check out Alexander’s announcement, which covers a lot of what’s in the new release.

When we rewrote the HIG back in May 2021, the new version expected and recommended libadwaita. However, libadwaita evolved between then and 1.0, so changes were needed to bring the HIG up to date.

Therefore, over the last two or three weeks, I've been working on updating the HIG to cover libadwaita 1.0. Hopefully this will mean that developers who are porting to GTK 4 and libadwaita have everything that they need in terms of design documentation, but if anything isn't clear, do reach out using the usual GNOME design channels.

In the rest of this post, I’ll review what’s changed in the HIG, compared with the previous version.

What’s changed

There’s a bunch of new content in the latest HIG version, which reflects additional capabilities that are present in libadwaita 1.0. This includes material on:

There have also been updates to existing content: all screenshots have been updated to use the latest UI style from libadwaita, and the guidelines on UI styling have been updated, to reflect the flexibility that comes with libadwaita’s new stylesheet.

As you might expect, there have been some general improvements to the HIG, which are unrelated to libadwaita. The page on navigation has been improved, to make it more accessible. A page on selection mode has also been added (we used to have this documented, then dropped the documentation while the pattern was updated). There have also been a large number of small style and structure changes, which should make the HIG an easier read.

If you spot any issues, the HIG issue tracker is open, and you can send merge requests too!

New badge: DevConf.cz 2022 Attendee !

Posted by Fedora Badges on January 12, 2022 04:17 PM
DevConf.cz 2022 Attendee: You attended the 2022 iteration of DevConf.cz, a yearly open source conference in Czechia!

Installing the latest syslog-ng on Ubuntu and other DEB distributions

Posted by Peter Czanik on January 12, 2022 12:16 PM

The syslog-ng application is part of all major Linux distributions, and you can usually install syslog-ng from the official repositories. If you use just the core functionality of syslog-ng, use the package in your distribution repository (apt-get install syslog-ng), and you can stop reading here. However, if you want to use the features of newer syslog-ng versions (for example, sending log messages to MQTT or Apache Kafka), you have to either compile syslog-ng from source or install it from unofficial repositories. This post explains how to do that.

Read the rest of my blog at https://www.syslog-ng.com/community/b/blog/posts/installing-the-latest-syslog-ng-on-ubuntu-and-other-deb-distributions

[Image: syslog-ng logo]

AdamW's Debugging Adventures: Bootloaders and machine IDs

Posted by Adam Williamson on January 11, 2022 10:08 PM

Hi folks! Well, it looks like I forgot to blog for...checks watch....checks calendar...a year. Wow. Whoops. Sorry about that. I'm still here, though! We released, uh, lots of Fedoras since the last time I wrote about that. Fedora 35 is the current one. It's, uh, mostly great! Go get a copy, why don't you?

And while that's downloading, you can get comfy and listen to another of Crazy Uncle Adam's Debugging Adventures. In this episode, we'll be uncomfortably reminded just how much of the code that causes your system to actually boot at all consists of fragile shell script with no tests, so this'll be fun!

Last month, booting a system installed from Rawhide live images stopped working properly. You could boot the live image fine, run the installation fine, but on rebooting, the system would fail to boot with an error: dracut: FATAL: Don't know how to handle 'root=live:CDLABEL=Fedora-WS-Live-rawh-20211229-n-1'. openQA caught this, and so did one of our QA community members - Ahed Almeleh - who filed a bug. After the end-of-year holidays, I got to figuring out what was going wrong.

As usual, I got a bit of a head start from pre-existing knowledge. I happen to know that error message is referring to kernel arguments that are set in the bootloader configuration of the live image itself. dracut is the tool that handles an early phase of boot where we boot into a temporary environment that's loaded entirely into system memory, set up the real system environment, and boot that. This early environment is contained in the initrd files you can find alongside the kernel on most Linux distributions; that's what they're for. Part of dracut's job is to be run when a kernel is installed to produce this environment, and then other parts of dracut are included in the environment itself to handle initializing things, finding the real system root, preparing it, and then switching to it. The initrd environments on Fedora live images are built to contain a dracut 'module' (called 90dmsquash-live) that knows to interpret root=live:CDLABEL=Fedora-WS-Live-rawh-20211229-n-1 as meaning 'go look for a live system root on the filesystem with that label and boot that'. Installed systems don't contain that module, because, well, they don't need to know how to do that, and you wouldn't really ever want an installed system to try and do that.

So the short version here is: the installed system has the wrong kernel argument for telling dracut where to find the system root. It should look something like root=/dev/mapper/fedora-root (where we're pointing to a system root on an LVM volume that dracut will set up and then switch to). So the obvious next question is: why? Why is our installed system getting this wrong argument? It seemed likely that it 'leaked' from the live system to the installed system somehow, but I needed to figure out how.

From here, I had kinda two possible ways to investigate. The easiest and fastest would probably be if I happened to know exactly how we deal with setting up bootloader configuration when running a live install. Then I'd likely have been able to start poking the most obvious places right away and figure out the problem. But, as it happens, I didn't at the time remember exactly how that works. I just remembered that I wind up having to figure it out every few years, and it's complicated and scary, so I tend to forget again right afterwards. I kinda knew where to start looking, but didn't really want to have to work it all out again from scratch if I could avoid it.

So I went with the other possibility, which is always: figure out when it broke, and figure out what changed between the last time it worked and the first time it broke. This usually makes life much easier because now you know one of the things on that list is the problem. The shorter and simpler the list, the easier life gets.

I looked at the openQA result history and found that the bug was introduced somewhere between 20211215.n.0 and 20211229.n.1 (unfortunately kind of a wide range). The good news is that only a few packages could plausibly be involved in this bug; the most likely are dracut itself, grub2 (the bootloader), grubby (a Red Hat / Fedora-specific grub configuration...thing), anaconda (the Fedora installer, which obviously does some bootloader configuration stuff), the kernel itself, and systemd (which is of course involved in the boot process itself, but also - perhaps less obviously - is where kernel-install lives, a script used on Fedora and many other distros to 'install' kernels; this was another handy thing I happened to know already, but really, it's always a safe bet to include systemd on the list of potential suspects for anything boot-related).

Looking at what changed between 2021-12-15 and 2021-12-29, we could rule out grub2 and grubby, as they didn't change. There were some kernel builds, but nothing in the scriptlets changed in any way that could be related. dracut got a build with one change, but again it seemed clearly unrelated. So I was down to anaconda and systemd as suspects. On an initial quick check during the vacation, I thought anaconda had not changed, and took a brief look at systemd, but didn't see anything immediately obvious.

When I came back to look at it more thoroughly, I realized anaconda did get a new version (36.12) on 2021-12-15, so that initially interested me quite a lot. I spent some time going through the changes in that version, and there were some that really could have been related - it changed how running things during install inside the installed system worked (which is definitely how we do some bootloader setup stuff during install), and it had interesting commit messages like "Remove the dracut_args attribute" and "Remove upd-kernel". So I spent an afternoon fairly sure it'd turn out to be one of those, reviewed all those changes, mocked up locally how they worked, examined the logs of the actual image composes, and...concluded that none of those seemed to be the problem at all. The installer seemed to still be doing things the same as it always had. There weren't any tell-tale missing or failing bootloader config steps. However, this time wasn't entirely wasted: I was reminded of exactly what anaconda does to configure the bootloader when installing from a live image.

When we install from a live image, we don't do what the 'traditional' installer does and install a bunch of RPM packages using dnf. The live image does not contain any RPM packages. The live image itself was built by installing a bunch of RPM packages, but it is the result of that process. Instead, we essentially set up the filesystems on the drive(s) we're installing to and then just dump the contents of the live image filesystem itself onto them. Then we run a few tweaks to adjust anything that needs adjusting for this now being an installed system, not a live one. One of the things we do is re-generate the initrd file for the installed system, and then re-generate the bootloader configuration. This involves running kernel-install (which places the kernel and initrd files onto the boot partition, and writes some bootloader configuration 'snippet' files), and then running grub2-mkconfig. The main thing grub2-mkconfig does is produce the main bootloader configuration file, but that's not really why we run it at this point. There's a very interesting comment explaining why in the anaconda source:

# Update the bootloader configuration to make sure that the BLS
# entries will have the correct kernel cmdline and not the value
# taken from /proc/cmdline, that is used to boot the live image.

Which is exactly what we were dealing with here. The "BLS entries" we're talking about here are the things I called 'snippet' files above, they live in /boot/loader/entries on Fedora systems. These are where the kernel arguments used at boot are specified, and indeed, that's where the problematic root=live:... arguments were specified in broken installs - in the "BLS entries" in /boot/loader/entries. So it seemed like, somehow, this mechanism just wasn't working right any more - we were expecting this run of grub2-mkconfig in the installed system root after live installation to correct those snippets, but it wasn't. However, as I said, I couldn't establish that any change to anaconda was causing this.
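As an aside, for readers who haven't poked at BLS entries before: each snippet is a small key-value file. A typical one looks roughly like this (the file name, version strings, and options are illustrative, not taken from the affected images):

$ cat /boot/loader/entries/<machine-id>-5.16.0.conf
title Fedora Linux (5.16.0) 36
version 5.16.0
linux /vmlinuz-5.16.0
initrd /initramfs-5.16.0.img
options root=/dev/mapper/fedora-root ro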

So I eventually shelved anaconda at least temporarily and looked at systemd. And it turned out that systemd had changed too. During the time period in question, we'd gone from systemd 250~rc1 to 250~rc3. (If you check the build history of systemd the dates don't seem to match up - by 2021-12-29 the 250-2 build had happened already, but in fact the 250-1 and 250-2 builds were untagged for causing a different problem, so the 2021-12-29 compose had 250~rc3). By now I was obviously pretty focused on kernel-install as the most likely related part of systemd, so I went to my systemd git checkout and ran:

git log v250-rc1..v250-rc3 src/kernel-install/

which shows all the commits under src/kernel-install between 250-rc1 and 250-rc3. And that gave me another juicy-looking, yet thankfully short, set of commits:

641e2124de6047e6010cd2925ea22fba29b25309 kernel-install: replace 00-entry-directory with K_I_LAYOUT in k-i
357376d0bb525b064f468e0e2af8193b4b90d257 kernel-install: Introduce KERNEL_INSTALL_MACHINE_ID in /etc/machine-info
447a822f8ee47b63a4cae00423c4d407bfa5e516 kernel-install: Remove "Default" from list of suffixes checked

So I went and looked at all of those. And again...I got it wrong at first! This is I guess a good lesson from this Debugging Adventure: you don't always get the right answer at first, but that's okay. You just have to keep plugging, and always keep open the possibility that you're wrong and you should try something else. I spent time thinking the cause was likely a change in anaconda before focusing on systemd, then focused on the wrong systemd commit first. I got interested in 641e212 first, and had even written out a whole Bugzilla comment blaming it before I realized it wasn't the culprit (fortunately, I didn't post it!) I thought the problem was that the new check for $BOOT_ROOT/$MACHINE_ID would not behave as it should on Fedora and cause the install scripts to do something different from what they should - generating incorrect snippet files, or putting them in the wrong place, or something.

Fortunately, I decided to test this before declaring it was the problem, and found out that it wasn't. I did this using something that turned out to be invaluable in figuring out the real problem.

You may have noticed by this point - harking back to our intro - that this critical kernel-install script, key to making sure your system boots, is...a shell script. That calls other shell scripts. You know what else is a big pile of shell scripts? dracut. You know, that critical component that both builds and controls the initial boot environment. Big pile of shell script. The install script - the dracut command itself - is shell. All the dracut modules - the bits that do most of the work - are shell. There's a bit of C in the source tree (I'm not entirely sure what that bit does), but most of it's shell.

Critical stuff like this being written in shell makes me shiver, because shell is very easy to get wrong, and quite hard to test properly (and in fact neither dracut nor kernel-install has good tests). But one good thing about it is that it's quite easy to debug, thanks to the magic of sh -x. If you run some shell script via sh -x (whether that's really sh, or bash or some other alternative pretending to be sh), it will run as normal but print out most of the logic (variable assignments, tests, and so on) that happen along the way. So on a VM where I'd run a broken install, I could do chroot /mnt/sysimage (to get into the root of the installed system), find the exact kernel-install command that anaconda ran from one of the logs in /var/log/anaconda (I forget which), and re-run it through sh -x. This showed me all the logic going on through the run of kernel-install itself and all the scripts it sources under /usr/lib/kernel/install.d. Using this, I could confirm that the check I suspected had the result I suspected - I could see that it was deciding that layout="other", not layout="bls", here. But I could also figure out a way to override that decision, confirm that it worked, and find that it didn't solve the problem: the config snippets were still wrong, and running grub2-mkconfig didn't fix them. In fact the config snippets got wronger - it turned out that we do want kernel-install to pick 'other' rather than 'bls' here, because Fedora doesn't really implement BLS according to the upstream specs, so if we let kernel-install think we do, the config snippets we get are wrong.
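If you've never used it, here's a trivial, self-contained demonstration of what sh -x prints (the exact trace format varies slightly between shells):

$ printf 'foo=1\nif [ "$foo" = 1 ]; then echo yes; fi\n' > /tmp/demo.sh
$ sh -x /tmp/demo.sh
+ foo=1
+ [ 1 = 1 ]
+ echo yes
yes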

So now I'd been wrong twice! But each time, I learned a bit more that eventually helped me be right. After I decided that commit wasn't the cause after all, I finally spotted the problem. I figured this out by continuing with the sh -x debugging, and noticing an inconsistency. By this point I'd thought to find out what bit of grub2-mkconfig should be doing the work of correcting the key bit of configuration here. It's in a Fedora-only downstream patch to one of the scriptlets in /etc/grub.d. It replaces the options= line in any snippet files it finds with what it reckons the kernel arguments "should be". So I got curious about what exactly was going wrong there. I tweaked grub2-mkconfig slightly to run those scriptlets using sh -x by changing these lines in grub2-mkconfig:

echo "### BEGIN $i ###"
"$i"
echo "### END $i ###"

to read:

echo "### BEGIN $i ###"
sh -x "$i"
echo "### END $i ###"

Now I could re-run grub2-mkconfig and look at what was going on behind the scenes of the scriptlet, and I noticed that it wasn't finding any snippet files at all. But why not?

The code that looks for the snippet files reads the file /etc/machine-id as a string, then looks for files in /boot/loader/entries whose names start with that string (and end in .conf). So I went and looked at my sample system and...found that the files in /boot/loader/entries did not start with the string in /etc/machine-id. The files in /boot/loader/entries started with a69bd9379d6445668e7df3ddbda62f86, but the ID in /etc/machine-id was b8d80a4c887c40199c4ea1a8f02aa9b4. This is why everything was broken: because those IDs didn't match, grub2-mkconfig couldn't find the files to correct, so the argument was wrong, so the system didn't boot.
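In shell terms, the search amounts to something like this (a simplified sketch, not the actual Fedora scriptlet):

machine_id=$(cat /etc/machine-id)
for snippet in /boot/loader/entries/"${machine_id}"*.conf; do
    [ -e "$snippet" ] || continue    # no matching files: nothing gets corrected
    # ...rewrite the options= line in "$snippet"...
done

With mismatched IDs the glob matches nothing, so the loop body never runs and the stale options= lines are left untouched.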

Now that I knew what was going wrong and only had two systemd commits left on the list, it was pretty easy to spot the problem. It was in 357376d. That commit changes how kernel-install names these snippet files when creating them. It names them by finding a machine ID to use as a prefix. Previously, it used whatever string was in /etc/machine-id; if that file didn't exist or was empty, it just used the string "Default". After that commit, it also looks for a value specified in /etc/machine-info. If there's a /etc/machine-id but not /etc/machine-info when you run kernel-install, it uses the value from /etc/machine-id and writes it to /etc/machine-info.
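Reduced to a sketch, the new ID selection behaves like this (my paraphrase of the logic, not the actual systemd code; the variable name in /etc/machine-info comes from the commit subject):

if grep -q '^KERNEL_INSTALL_MACHINE_ID=' /etc/machine-info 2>/dev/null; then
    # an ID is recorded in /etc/machine-info: use it
    MACHINE_ID=$(sed -n 's/^KERNEL_INSTALL_MACHINE_ID=//p' /etc/machine-info)
elif [ -s /etc/machine-id ]; then
    # fall back to /etc/machine-id, and record it for next time
    MACHINE_ID=$(cat /etc/machine-id)
    echo "KERNEL_INSTALL_MACHINE_ID=$MACHINE_ID" >> /etc/machine-info
fi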

When I checked those files, it turned out that on the live image, the ID in both /etc/machine-id and /etc/machine-info was a69bd9379d6445668e7df3ddbda62f86 - the problematic ID on the installed system. When we generate the live image itself, kernel-install uses the value from /etc/machine-id and writes it to /etc/machine-info, and both files wind up in the live filesystem. But on the installed system, the ID in /etc/machine-info was that same value, but the ID in /etc/machine-id was different (as we saw above).

Remember how I mentioned above that when doing a live install, we essentially dump the live filesystem itself onto the installed system? Well, one of the 'tweaks' we make when doing this is to re-generate /etc/machine-id, because that ID is meant to be unique to each installed system - we don't want every system installed from a Fedora live image to have the same machine ID as the live image itself. However, as this /etc/machine-info file is new, we don't strip it from or re-generate it in the installed system, we just install it. The installed system has a /etc/machine-info with the same ID as the live image's machine ID, but a new, different ID in /etc/machine-id. And this (finally) was the ultimate source of the problem! When we run them on the installed system, the new version of kernel-install writes config snippet files using the ID from /etc/machine-info. But Fedora's patched grub2-mkconfig scriptlet doesn't know about that mechanism at all (since it's brand new), and expects the snippet files to contain the ID from /etc/machine-id.

There are various ways you could potentially solve this, but after consulting with systemd upstream, the one we chose is to have anaconda exclude /etc/machine-info when doing a live install. The changes to systemd here aren't wrong - it does potentially make sense that /etc/machine-id and /etc/machine-info could both exist and specify different IDs in some cases. But for the case of Fedora live installs, it doesn't make sense. The sanest result is for those IDs to match and both be the 'fresh' machine ID that's generated at the end of the install process. By just not including /etc/machine-info on the installed system, we achieve this result, because now when kernel-install runs at the end of the install process, it reads the ID from /etc/machine-id and writes it to /etc/machine-info, and both IDs are the same, grub2-mkconfig finds the snippet files and edits them correctly, the installed system boots, and I can move along to the next debugging odyssey...

Issue with the solution to the 6s problem

Posted by Adam Young on January 11, 2022 06:54 PM

I recently came across posted solutions to the 6s problem. I'm going to argue that several of these solutions are invalid. Or, more precisely, I am going to argue that they are only considered valid due to a convention in notation.
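For context: the puzzle, as usually stated, presents lines of the form N N N = 6 for each digit N, and asks you to make every line true by inserting operators only. The easy lines need nothing exotic:

2 + 2 + 2 = 6 \qquad 3 \times 3 - 3 = 6 \qquad \sqrt{4} + \sqrt{4} + \sqrt{4} = 6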

Part of the problem definition states that you cannot add additional digits to get to the solution, only operators. The operators that are used start with addition, subtraction, multiplication, division, and factorial. To solve some of the more difficult lines of the problem, they introduce the square root operator. This, however, is the degenerate form of a fractional exponent. In other words, you can write either

<figure class="wp-block-image">{\sqrt {2}}</figure>

or

<figure class="wp-block-image">2^{{1/2}}</figure>

Note that in the bottom case, you introduce two new digits: a 1 and a 2.

To be fair, the factorial operator is also shorthand for a fairly long operation. If it were written in product notation, it would be:

<figure class="wp-block-image"></figure>

Which also introduces an additional 1.

This arbitrary distinction occurred to me when I was looking at the solution for the 8s problem. It occurred to me that 2^3 is 8, and so a more elegant solution would be to take the cube root of 8 for each digit and sum them. However, this explicitly violates the rules of the puzzle, as the symbol for the cube root is the same as for the square root, but with a superscript 3.
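Written out with the implicit index made explicit, that candidate solution would be:

\sqrt[3]{8} + \sqrt[3]{8} + \sqrt[3]{8} = 2 + 2 + 2 = 6

and it is exactly that superscript 3 that counts as an extra digit under the rules.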

Why do I care? Because there is a pattern in notation that mixes the default case with more explicit non-default expansions. For example, look at these two network device names:

enp1s0f0np0 and enP4p4s0f0np0.

You have to look close to parse the difference. It is easier to see when they are stacked:

  • enp1s0f0np0
  • enP4p4s0f0np0

The fact that the second one is longer helps your eye see that the third character is a lowercase p in the first and uppercase in the second. Why? That field indicates some aspect of the internal configuration of the machine, something about the bridge to which the device is attached. The first is attached to bridge 0, which is the default, and is thus elided. The second is attached to bridge 4, and is thus explicitly named.

Yeah, it is a pain to differentiate.

So the solution to the problem is based on context-sensitive parsing of the problem definition, including the fact that the square root is considered a standard symbol without a digit to explicitly state which root is being taken.

Let’s take that option off the table. Are there still solutions to the 6s problem when it is defined more strictly? What is the set of acceptable operators that can be used to solve this puzzle? What is the smallest such set?


My polyamorous relationship with operating systems: FreeBSD, openSUSE, Fedora & Co.

Posted by Peter Czanik on January 11, 2022 09:08 AM

Recently, I have posted blogs and articles about three operating systems (or rather OS families) I use, and now people ask which one is my “true” love. It’s not easy, but I guess, the best way to describe it is that both FreeBSD and openSUSE are true ones, and Fedora & Co. is a workplace affair :-) This is why I’m writing that it is a polyamorous relationship. Let me explain!

My first ever open-source operating system was FreeBSD. I got an account on the faculty server in 1994, a FreeBSD 1.X system. A few months later, I got the task to install Linux, and a year later I ended up running S.u.S.E. Linux on the second faculty server. Soon, I was running a couple of Linux and FreeBSD servers at the university and elsewhere as a part-time student job. SuSE Linux also became my desktop operating system. I have always liked state-of-the-art hardware, and while I felt FreeBSD was a lot more mature on the server side, it did not play well on a desktop. 25+ years later, that is still the case…

SUSE Linux, which later turned into openSUSE, is still my desktop OS after 25 years. Of course, just like anybody else, I tried many other distributions. I flirted with Gentoo Linux (due to its similarity to FreeBSD) and Fedora Linux (did I mention that I love having the latest hardware?), but I have always returned to openSUSE within months, as soon as it ran on my new hardware.

FreeBSD became my primary server OS around the year 2000. Web servers, especially those running PHP applications, were common targets for attacks. The FreeBSD jail system, or as Linux users know it: containers, was a perfect solution for this problem, over a decade before Docker and more than 15 years before Kubernetes became available. Jails are still my preferred container technology. Unlike in the early days, there are now easy-to-use tools to manage them: I use BastilleBSD.


As I mentioned, Fedora & Co. is a workplace affair. I love the Fedora community; I have more friends there than in the openSUSE and FreeBSD communities combined. But the single reason I run Fedora, RHEL, CentOS and all the other RHEL clones is syslog-ng, my current job. The vast majority of syslog-ng users run syslog-ng on RHEL and compatible systems. So, I use these operating systems only for work, except for a couple of times, for a few months each, when openSUSE did not run on my new hardware.

So, which is the true one? There is no definite answer. When it comes to operating systems, I live in a polyamorous relationship. You can read more on the various operating systems I use in my earlier blogs:

Anaconda is getting a new suit

Posted by Fedora Community Blog on January 11, 2022 08:00 AM

It has been quite some time since we created the current GTK-based UI for Anaconda: the OS installer for Fedora, RHEL, and CentOS. For a long time we (the Anaconda team) have been looking for ways to modernize and improve the user experience. In this post, we would like to explain what we are working on and, most of all, let you know what you can expect in the future.

First, we want to be clear that we decided to share this information pretty early. We are at the stage where the decisions have been made and a 'working prototype' of the solution is already available, but don't expect screenshots and demos yet!

What can you expect?

We will rewrite the UI as a web browser-based UI using existing Cockpit technology. We are taking this approach because Cockpit is a mature solution with great support for the backend (Anaconda DBus). The Cockpit team is also providing us with great support, and they have significant knowledge we can draw on. We thank them for helping us a lot with the prototype and for creating a foundation for future development.

We also chose this approach to be consistent with the rest of the system. More and more projects have Cockpit support, so this step should make the system feel more consistent across different applications. The biggest UX improvement should be easier remote installations compared to the current VNC solution. You can expect a lot of other improvements too, but let's wait and see :).

Furthermore, we are building the new UI on top of the Anaconda modularization effort, which we have been implementing for quite some time now. It's great to see the fruits of that work helping us with the creation of the new UI. It also means that Fedora users shouldn't be much impacted by the changes during development of the new UI. A big part of Anaconda now consists of Anaconda modules with DBus APIs, and we are reusing those APIs. We haven't yet decided on the approach for upstream development; we will tell you more about this in the future.

At this point, we cannot yet give an expected date for the new UI or for availability of a minimum viable product. However, we will keep you informed about our progress from time to time, so you know what to expect.

We are thrilled about this change and hopefully you are too! We look forward to giving you something to play with!

The post Anaconda is getting a new suit appeared first on Fedora Community Blog.

Single attribute in-place editing with Rails and Turbo

Posted by Josef Strzibny on January 11, 2022 12:00 AM

Turbo can largely simplify our front-end needs to achieve a single-page application feel. If you have ever wondered how to do a single attribute in-place update with Turbo, this post is for you.

I’ll assume you have Turbo (with turbo-rails gem) installed, and you already have a classic model CRUD done. If you don’t, just generate a standard scaffold. I’ll use the User model and the name attribute, but it can be anything.

At this point, you might have a controller for the model looking like this:

class UsersController < ApplicationController
  before_action :set_user, only: %i[ show edit update destroy ]

  ...

  # GET /users/1/edit
  def edit
  end

  # PATCH/PUT /users/1 or /users/1.json
  def update
    respond_to do |format|
      if @user.update(user_params)
        format.html { redirect_to user_path(@user), notice: "User was successfully updated." }
        format.json { render :show, status: :ok, location: user_path(@user) }
      else
        format.html { render :edit, status: :unprocessable_entity }
        format.json { render json: @user.errors, status: :unprocessable_entity }
      end
    end
  end

  private
    # Use callbacks to share common setup or constraints between actions.
    def set_user
      @user = User.find(params[:id])
    end

    # Only allow a list of trusted parameters through.
    def user_params
      params.require(:user).permit(:name)
    end
end

You should also have all the standard views that go with it, namely views/users/show.html.erb, that we’ll modify for in-place editing of the user’s name.

We'll make a dedicated page to support editing a single attribute (here, the name).

The controller change is easy. We add an edit_name method next to the original edit:

class UsersController < ApplicationController
  before_action :set_user, only: %i[ show edit edit_name update destroy password_reset ]

  # GET /users/1/edit
  def edit
  end

  # GET /users/1/edit_name
  def edit_name
  end

  # PATCH/PUT /users/1 or /users/1.json
  def update
    respond_to do |format|
      if @user.update(user_params)
        format.html { redirect_to user_path(@user), notice: "User was successfully updated." }
        format.json { render :show, status: :ok, location: user_path(@user) }
      else
        format.html { render :edit, status: :unprocessable_entity }
        format.json { render json: @user.errors, status: :unprocessable_entity }
      end
    end
  end

  private
    # Use callbacks to share common setup or constraints between actions.
    def set_user
      @user = User.find(params[:id])
    end

    # Only allow a list of trusted parameters through.
    def user_params
      params.require(:user).permit(:name)
    end
end

Notice that there is no need to change how update works; it can do the job for all the attributes at once.

And let’s not forget to make the new path accessible with a change to routes.rb file:

Rails.application.routes.draw do
  ...

  resources :users do
    member do
      get 'edit_name'
    end
  end

  # Defines the root path route ("/")
  root "application#index"
end

Now that we have a new route and controller method to render the form for the name change, we implement the views.

We’ll add a standard view for the edit_name action (views/users/edit_name.html.erb):

<%= form_with model: @user, url: user_path(@user) do |form| %>
  <%= form.text_field :name %>
  <%= form.submit "Save" %>
<% end %>

And then wrap it with turbo_frame_tag call:

<%= turbo_frame_tag :user_name do %>
  <%= form_with model: @user, url: user_path(@user) do |form| %>
    <%= form.text_field :name %>
    <%= form.submit "Save" %>
  <% end %>
<% end %>

Wrapping everything in turbo_frame_tag gives this form a unique identifier and determines the area that gets swapped later.
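(Under the hood, turbo_frame_tag :user_name renders a <turbo-frame id="user_name"> element; Turbo matches the frame in the current page with the frame of the same id in the response and swaps its contents.)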

Notice that we don’t need a specific model ID for turbo_frame_tag (like the examples leveraging dom_id) as we will swap the content on the model’s show page where other user entries don’t exist.

Once prepared, we make another turbo_frame_tag on the show page with the same ID. This tells Turbo that it can swap it with the frame we defined in the previous step:

...
<%= turbo_frame_tag :user_name do %>
  Name: <%= link_to @user.name, edit_name_user_path(@user) %>
<% end %>
...

A link_to pointing to the specific path for editing the name will trigger the action, and Turbo does the rest!

Freezing your Node.js dependencies with yarn.lock and –frozen-lockfile

Posted by Josef Strzibny on January 11, 2022 12:00 AM

When Yarn introduced a lock file (similar to Gemfile.lock), it did it with an unexpected twist. If you need reproducible builds, yarn.lock is not enough.

What is a lock file? Lock files ensure that the defined dependencies from files such as package.json get pinned to specific versions. This later ensures parity on developers’ workstations, CI, and production.

Many people probably depend on Yarn doing the right thing and installing only the pinned versions from yarn.lock on yarn install. But, unfortunately, this is not the case…

The default behavior of yarn install is that the yarn.lock file gets updated if there is any mismatch between package.json and yarn.lock. Weird, right?

(In comparison, other package managers such as RubyGems would only ever look at lock files and install the pinned versions from there.)

Luckily a solution exists. The documentation for the Classic Yarn (1.x) says:

If you need reproducible dependencies, which is usually the case with the continuous integration systems, you should pass --frozen-lockfile flag.

So your yarn install command for CI and production should look like this:

$ yarn install --silent --production=true --frozen-lockfile

There is a long-standing issue for making this a default, but the developers decided to leave it for a new Yarn version which is developed under the name Berry.

Some also say that you don't need it, as you can use pinned versions directly in package.json. This is only true to some extent, though, because you would have to specify all transitive dependencies as well.
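For example, even if package.json pins "express": "4.17.2" exactly, express's own dependencies are still declared as version ranges, and only the lock file pins those.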

If you still run without the --frozen-lockfile flag, fix it today. It will save you some headaches later.

Also note that the --frozen-lockfile flag was renamed to --immutable in modern versions of Yarn, and it's the default in CI mode.

Episode 305 – Norton, Ethereum, NFT, and Apes

Posted by Josh Bressers on January 10, 2022 12:01 AM

Josh and Kurt talk about Norton creating an Ethereum mining pool. This is almost certainly a bad idea; we explain why. We then discuss the reality of NFTs and the case of stolen apes. NFTs can be very confusing. The whole world of cryptocurrency is very confusing for normal people. None of this is new: there have always been con artists, and there always will be.

<audio class="wp-audio-shortcode" controls="controls" id="audio-2681-3" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_305_Norton_Ethereum_NFT_and_Apes.mp3?_=3" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_305_Norton_Ethereum_NFT_and_Apes.mp3</audio>

Show Notes

Google and Facebook fined for cookies practices

Posted by Fabio Alessandro Locati on January 10, 2022 12:00 AM
The CNIL, France’s data regulator, fined Meta (Facebook) and Google a total of 210M€ for violating the GDPR. More specifically:

  • Google LLC (USA) was fined 90M€
  • Google Ireland Limited was fined 60M€
  • Facebook Ireland Limited was fined 60M€

Also, if the companies do not fix the issue within three months, an additional penalty of 100'000€/day will be added. There are two facts that I think are very interesting about these fines: the reason behind the fines, and the issuer of the fines.

Pluton is not (currently) a threat to software freedom

Posted by Matthew Garrett on January 09, 2022 12:59 AM
At CES this week, Lenovo announced that their new Z-series laptops would ship with AMD processors that incorporate Microsoft's Pluton security chip. There's a fair degree of cynicism around whether Microsoft have the interests of the industry as a whole at heart or not, so unsurprisingly people have voiced concerns about Pluton allowing for platform lock-in and future devices no longer booting non-Windows operating systems. Based on what we currently know, I think those concerns are understandable but misplaced.

But first it's helpful to know what Pluton actually is, and that's hard because Microsoft haven't actually provided much in the way of technical detail. The best I've found is a discussion of Pluton in the context of Azure Sphere, Microsoft's IoT security platform. This, in association with the block diagrams on page 12 and 13 of this slidedeck, suggest that Pluton is a general purpose security processor in a similar vein to Google's Titan chip. It has a relatively low powered CPU core, an RNG, and various hardware cryptography engines - there's nothing terribly surprising here, and it's pretty much the same set of components that you'd find in a standard Trusted Platform Module of the sort shipped in pretty much every modern x86 PC. But unlike Titan, Pluton seems to have been designed with the explicit goal of being incorporated into other chips, rather than being a standalone component. In the Azure Sphere case, we see it directly incorporated into a Mediatek chip. In the Xbox Series devices, it's incorporated into the SoC. And now, we're seeing it arrive on general purpose AMD CPUs.

Microsoft's announcement says that Pluton can be shipped in three configurations: as the Trusted Platform Module; as a security processor used for non-TPM scenarios like platform resiliency; or OEMs can choose to ship with Pluton turned off. What we're likely to see to begin with is the former - Pluton will run firmware that exposes a Trusted Computing Group compatible TPM interface. This is almost identical to the status quo. Microsoft have required that all Windows certified hardware ship with a TPM for years now, but for cost reasons this is often not in the form of a separate hardware component. Instead, both Intel and AMD provide support for running the TPM stack on a component separate from the main execution cores on the system - for Intel, this TPM code runs on the Management Engine integrated into the chipset, and for AMD on the Platform Security Processor that's integrated into the CPU package itself.

So in this respect, Pluton changes very little; the only difference is that the TPM code is running on hardware dedicated to that purpose, rather than alongside other code. Importantly, in this mode Pluton will not do anything unless the system firmware or OS ask it to. Pluton cannot independently block the execution of any other code - it knows nothing about the code the CPU is executing unless explicitly told about it. What the OS can certainly do is ask Pluton to verify a signature before executing code, but the OS could also just verify that signature itself. Windows can already be configured to reject software that doesn't have a valid signature. If Microsoft wanted to enforce that they could just change the default today, there's no need to wait until everyone has hardware with Pluton built-in.

The two things that seem to cause people concerns are remote attestation and the fact that Microsoft will be able to ship firmware updates to Pluton via Windows Update. I've written about remote attestation before, so won't go into too many details here, but the short summary is that it's a mechanism that allows your system to prove to a remote site that it booted a specific set of code. What's important to note here is that the TPM (Pluton, in the scenario we're talking about) can't do this on its own - remote attestation can only be triggered with the aid of the operating system. Microsoft's Device Health Attestation is an example of remote attestation in action, and the technology definitely allows remote sites to refuse to grant you access unless you booted a specific set of software. But there are two important things to note here: first, remote attestation cannot prevent you from booting whatever software you want, and second, as evidenced by Microsoft already having a remote attestation product, you don't need Pluton to do this! Remote attestation has been possible since TPMs started shipping over two decades ago.

The other concern is Microsoft having control over the firmware updates. The context here is that TPMs are not magically free of bugs, and sometimes these can have security consequences. One example is Infineon TPMs producing weak RSA keys, a vulnerability that could be rectified by a firmware update to the TPM. Unfortunately these updates had to be issued by the device manufacturer rather than Infineon being able to do so directly. This meant users had to wait for their vendor to get around to shipping an update, something that might not happen at all if the machine was sufficiently old. From a security perspective, being able to ship firmware updates for the TPM without them having to go through the device manufacturer is a huge win.

Microsoft's obviously in a position to ship a firmware update that modifies the TPM's behaviour - there would be no technical barrier to them shipping code that resulted in the TPM just handing out your disk encryption secret on demand. But Microsoft already control the operating system, so they already have your disk encryption secret. There's no need for them to backdoor the TPM to give them something that the TPM's happy to give them anyway. If you don't trust Microsoft then you probably shouldn't be running Windows, and if you're not running Windows Microsoft can't update the firmware on your TPM.

So, as of now, Pluton running firmware that makes it look like a TPM just isn't a terribly interesting change to where we are already. It can't block you running software (either apps or operating systems). It doesn't enable any new privacy concerns. There's no mechanism for Microsoft to forcibly push updates to it if you're not running Windows.

Could this change in future? Potentially. Microsoft mention another use-case for Pluton "as a security processor used for non-TPM scenarios like platform resiliency", but don't go into any more detail. At this point, we don't know the full set of capabilities that Pluton has. Can it DMA? Could it play a role in firmware authentication? There are scenarios where, in theory, a component such as Pluton could be used in ways that would make it more difficult to run arbitrary code. It would be reassuring to hear more about what the non-TPM scenarios are expected to look like and what capabilities Pluton actually has.

But let's not lose sight of something more fundamental here. If Microsoft wanted to block free operating systems from new hardware, they could simply mandate that vendors remove the ability to disable secure boot or modify the key databases. If Microsoft wanted to prevent users from being able to run arbitrary applications, they could just ship an update to Windows that enforced signing requirements. If they want to be hostile to free software, they don't need Pluton to do it.

(Edit: it's been pointed out that I kind of gloss over the fact that remote attestation is a potential threat to free software, as it theoretically allows sites to block access based on which OS you're running. There's various reasons I don't think this is realistic - one is that there's just way too much variability in measurements for it to be practical to write a policy that's strict enough to offer useful guarantees without also blocking a number of legitimate users, and the other is that you can just pass the request through to a machine that is running the appropriate software and have it attest for you. The fact that nobody has actually bothered to use remote attestation for this purpose even though most consumer systems already ship with TPMs suggests that people generally agree with me on that)


New Tor relays

Posted by Casper on January 08, 2022 06:00 AM

Today marks exactly one year since 4 new relays went into production. These Tor relays complete a fleet of 6 medium-speed relays.

As for listening ports, the new relays run on non-standard ORPort (Onion Routing Port) and DIRPort (Directory Port) ports.

I chose ports from the 26000 range for their ORPort. Ideally you pick port 443; if it is not available, try port 9080. If those are not available, try ports 993 and 995. If those are not available either, pick a random port in the 20000-30000 range.
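For reference, the relevant torrc lines for such a relay look roughly like this (the nickname and port numbers are placeholders):

Nickname ExampleRelay
ORPort 26001
DIRPort 26002
ExitRelay 0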

In any case, as an individual, do not try to run an exit node. There are associations for that. "Nos-Oignons" is a French non-profit (association loi 1901) that runs Tor exit nodes.

Nos-Oignons: Association Nos-Oignons

About the fleet

All my relays have IPv6 connectivity, and they prefer IPv6 for outgoing connections to other relays. I deployed 2 relays per machine, as 2 separate processes managed by systemd. The load on the machines is stable.

For the first time, the machines are being used at their full capacity. Below is the output of the "uptime" command:

nsa.casperlefantom.net (8 cores, new machine):
23:34:52 up 9 days, 11:29,  1 user,  load average: 1,25, 1,25, 1,26

nsd.casperlefantom.net (dual-core):
23:36:07 up 9 days, 11:17,  1 user,  load average: 1.08, 1.04, 1.04

nse.casperlefantom.net (dual-core):
22:00:51 up 9 days,  9:41,  1 user,  load average: 0.98, 0.87, 0.89

OrNetStats Screenshot

https://nusenu.github.io/OrNetStats/casperlefantom.net.html

Friday’s Fedora Facts: 2022-01

Posted by Fedora Community Blog on January 07, 2022 10:07 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

<figure class="wp-block-table">
ConferenceLocationDateCfP
CentOS Dojo @ FOSDEMvirtual3–4 Febcloses 9 Jan
</figure>

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Upcoming meetings

Releases

<figure class="wp-block-table">
Releaseopen bugs
F345481
F352509
F36 (rawhide)6973
</figure>

Fedora Linux 36

Schedule

  • 2022-01-18 — Deadline for Self-Contained Change proposals
  • 2022-01-19 — Mass rebuild begins
  • 2022-02-08 — F36 branches from Rawhide; Rawhide begins F37 development

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

<figure class="wp-block-table">
ProposalTypeStatus
Users are administrators by default in the installer GUI.Self-ContainedFESCo #2708
Enable fs-verity in RPMSystem-WideFESCo #2711
Switch GnuTLS to allowlistingSystem-WideApproved
Make Rescue Mode Work With Locked RootSystem-WideFESCo #2713
Wayland By Default with NVIDIA proprietary DriverSystem-WideApproved
GHC compiler parallel version installsSelf-ContainedFESCo #2715
Keylime subpackaging and agent alternativesSelf-ContainedFESCo #2716
Golang 1.18System-WideFESCo #2720
DIGLIMSystem-WideFESCp #2721
LLVM 14System-WideFESCo #2726
Ruby 3.1System-WideFESCp #2727
%set_build_flags for %build and %checkSystem-WideFESCo #2728
Default To Noto FontsSystem-WideFESCo #2729
Hunspell Dictionary dir changeSystem-WideFESCo #2730
Relocate RPM database to /usrSystem-WideFESCo #2731
No ifcfg by defaultSelf-ContainedAnnounced
Django 4.0Self-ContainedAnnounced
GNU Toolchain UpdateSystem-WideAnnounced
New requirements for akmods binary kernel modules for Silverblue / Kinoite supportSelf-ContainedAnnounced
</figure>

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-01 appeared first on Fedora Community Blog.

Reading a log out of a docker file

Posted by Adam Young on January 07, 2022 04:50 PM

I have to pull the log out of a docker process to figure out why it is crashing. The Docker container name is ironic_ipxe.

cat $( docker inspect ironic_ipxe  | jq -r  '.[] | .LogPath' )
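(docker inspect prints a JSON array, so jq's '.[] | .LogPath' extracts the path of the JSON log file the Docker daemon keeps for the container, and cat dumps its contents, crash messages included.)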

Trouble with signing and notarization on macOS for Tumpa

Posted by Kushal Das on January 07, 2022 09:10 AM

This week I released the first version of Tumpa for Mac. Though the actual changes required for building the Mac app and dmg file were small, I had to tear out those few remaining hairs on my head to get it working on any Mac other than the build machine. It was a classic case of Works on my laptop.

The issue

Tumpa is a Python application which uses PySide2 and also Johnnycanencrypt which is written in Rust.

I tried both the briefcase tool and manually calling the codesign and create-dmg tools to create the tumpa.app and the tumpa-0.1.3.dmg.

After creating the dmg file, I had to submit it to Apple for notarisation, as follows:

xcrun /Applications/Xcode.app/Contents/Developer/usr/bin/altool --notarize-app --primary-bundle-id "in.kushaldas.Tumpa" -u "kushaldas@gmail.com" -p "@keychain:MYNOTARIZATION" -f macOS/tumpa-0.1.3.dmg

This worked: after a few minutes I could see that the job had passed, so I could then staple the ticket to the dmg file.

xcrun stapler staple macOS/tumpa-0.1.3.dmg

I could install from the file and run the application. Sounds great.

But whenever someone else tried to run the application after installing it from the dmg, it showed the following.

mac failure screenshot

Solution

It took me over 4 hours of trying all possible combinations before I finally passed --options=runtime,library to the codesign tool, and that did the trick. Not being able to figure out how to get more logs on Mac made my life difficult.
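For reference, outside of briefcase the equivalent direct invocation looks roughly like this (the signing identity and app path are placeholders):

codesign --sign "Developer ID Application: Example (TEAMID1234)" \
    --deep --force --timestamp \
    --options runtime,library \
    macOS/Tumpa.app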

I had to patch briefcase to make sure I can keep using it (also created the upstream issue).

--- .venv/lib/python3.9/site-packages/briefcase/platforms/macOS/__init__.py	2022-01-07 08:48:12.000000000 +0100
+++ /tmp/__init__.py	2022-01-07 08:47:54.000000000 +0100
@@ -117,7 +117,7 @@
                     '--deep', str(path),
                     '--force',
                     '--timestamp',
-                    '--options', 'runtime',
+                    '--options', 'runtime,library',
                 ],
                 check=True,
             )

You can see my build script, which is based on input from Micah.

I want to thank all of my new friends inside of SUNET who were excellent helping hands to test the multiple builds of Tumpa. Later many folks from IRC also jumped in to help to test the tool.

PHP version 8.0.15RC1 and 8.1.2RC1

Posted by Remi Collet on January 07, 2022 07:06 AM

Release Candidate versions are available in testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests, and also as base packages.

RPM of PHP version 8.1.2RC1 are available as SCL in remi-test repository and as base packages in the remi-php81-test repository for Fedora 33-34 and Enterprise Linux.

RPM of PHP version 8.0.15RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 35 or in the remi-php80-test repository for Fedora 33-34 and Enterprise Linux.


PHP version 7.4 is now in security-only mode, so no more RCs will be released; this is also the last one for 7.4.

Installation: follow the wizard instructions.

Parallel installation of version 8.1 as Software Collection:

yum --enablerepo=remi-test install php81

Parallel installation of version 8.0 as Software Collection:

yum --enablerepo=remi-test install php80

Update of system version 8.1:

yum --enablerepo=remi-php81,remi-php81-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.1
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.0:

yum --enablerepo=remi-php80,remi-php80-test update php\*

or, the modular way (Fedora and EL ≥ 8):

dnf module reset php
dnf module enable php:remi-8.0
dnf --enablerepo=remi-modular-test update php\*

Notice: version 8.1.2RC1 is also in Fedora rawhide for QA.

EL-9 packages are built using RHEL-9.0-Beta

EL-8 packages are built using RHEL-8.5

EL-7 packages are built using RHEL-7.9

The RC version is usually the same as the final version (no changes are accepted after RC, except for security fixes).

Version 8.1.0RC3 is also available

Software Collections (php74, php80)

Base packages (php)

CES 2022: my favorite announcement comes from AMD, and why it's interesting for syslog-ng

Posted by Peter Czanik on January 07, 2022 04:35 AM

For the past few days, the IT news has been abuzz with announcements from CES. As usual, I'm following them on Engadget. I must admit that only a few announcements really caught my attention, and my favorite announcement is the most boring of them all :-)

  • Foldable tablet by ASUS: I still use my Google Pixel C tablet almost every day. It's almost six years old and awaiting a replacement. The ASUS tablet is larger and has more accurate colors, two features for the photography maniac in me. When folded, it has a more book-like feeling when used for reading. It also has an optional keyboard accessory, just like the Pixel C, so it's not just a content-consumption device.

  • The color-changing car is a promising concept by BMW. You can express your mood through the color, but it has more practical uses as well: turning the car light in bright sunshine and dark in the cold can also help regulate its temperature.

  • The autonomous tractor by John Deere relates to my university research: precision agriculture. I worked on some of the foundations, like soil sampling and correlating the results with aerial photographs. Those and much more are already in practice today. This tractor takes precision farming concepts even further.

To me the best of show is something completely boring: the AMD Ryzen 7 5800X3D. It is a CPU. Why is it interesting? It has 100MB of cache. I do regular peak performance testing of syslog-ng, and it seems to me that performance correlates with both single-core performance and cache size. I have not had a chance to test syslog-ng on the latest EPYC or Power10 CPUs, but the AMD Ryzen 7 5800X desktop CPU I use for photo editing beats any ARM, Intel or Power CPU I have tested previously with syslog-ng. And the 5800X3D has almost 3x as large a cache as my current CPU. I must say that I am amazed at the advancement of semiconductor technology and how it helps deliver more capabilities with less power.


Why to Start With ESP32

Posted by Zamir SUN on January 06, 2022 09:17 PM

Disclaimer: I'm just a beginner at embedded development. I write all of my embedded articles from my own point of view as a beginner, so differing opinions from embedded professionals are unavoidable.

Last spring, I wrote about starting to learn embedded development with the STM32. However, soon after I wrote that, I realized that the price of the STM32 series had gone up to an incredible level, which made me wonder whether it was still a good choice.

Recently I started thinking about the beginner's question again. This time, I have some different thoughts: if the learning material is more related to daily life, the learner will pick it up much more easily. Nowadays IoT is definitely a hot topic, so if the MCU has some sort of IoT capability, it will definitely make the learner happier. That makes an MCU with a real wireless communication function a better choice. Currently, there are a bunch of wireless protocols in real life. The most widely mentioned include (but are not limited to)

and more.

In my opinion, it would be much easier for beginners if they can just connect the MCU to other existing devices, especially a mobile phone or laptop. So WiFi and Bluetooth (including BLE) stand out from all the others listed.

So this time, the criteria are

  • The chips or boards should be easily available.
  • Tutorials should be easily available.
  • People should be able to develop and debug for the MCU on any major OS (Linux, MacOS, Windows).
  • There is an IDE that is easy to use even for people who do not have embedded experience.
  • The chip should have WiFi or BLE
  • Only 32-bit MCUs

There aren't many choices matching these criteria, especially where I live. I have already mentioned why I do not like the cc26xx and nRF5x for beginners, which leaves only the ESP8266 series, the ESP32 series, or the WinnerMicro w600. Of all these, the ESP8266 and ESP32 series have the best user communities, and materials for them are the most widely available online. But since the ESP8266 seems to be not recommended for new designs (NRND), I think the ESP32 series stands out.

So let’s talk more about ESP32.

ESP32 is a series of wireless MCUs produced by Espressif. At the time of writing, they are

  • ESP32, which contains Xtensa 32-bit LX6 microprocessor(s) with 2.4G WiFi and Bluetooth 4.2 BR/EDR and BLE support.
  • ESP32 S2, which contains a Xtensa 32-bit LX7 microprocessor with 2.4G WiFi
  • ESP32 S3, which contains Xtensa 32-bit LX7 microprocessors with 2.4G WiFi and Bluetooth 5(LE)
  • ESP32 C3, which contains a 32-bit RISC-V microprocessor with 2.4G WiFi and Bluetooth 5(LE).

What's more, Espressif provides Arduino support for the ESP32, ESP32 S2, and ESP32 C3. In case you are a newbie reader: Arduino is an open-source hardware and software company, and its Arduino IDE is famous for being easy to use. I hear that even artists can use Arduino for embedded-related art projects without much trouble.

Now, people can freely decide whether to use the ESP32 with Arduino, which is much easier to start with, or to use the ESP-IDF framework provided by Espressif directly. Espressif has even written good getting-started guides for both Arduino-ESP32 and ESP-IDF.

There are many ESP32 boards available on the internet. The biggest differences between them are mostly the peripherals, so just purchase whichever you prefer. If you don't know what to start with, buying a minimal ESP32 board like the Node32S should also work, as it is pretty cheap (less than USD $3 here).

QCoro 0.5.0 Release Announcement

Posted by Daniel Vrátil on January 06, 2022 07:00 PM

It took a few months, but there's a new release of QCoro with some cool new features. This release contains a breaking change in CMake, which requires QCoro users to adjust their CMakeLists.txt. I sincerely hope this is the last breaking change for a very long time.

Major highlights in this release:

  • Co-installability of Qt5 and Qt6 builds of QCoro
  • Complete re-work of CMake configuration
  • Support for compiling QCoro with Clang against libstdc++

Co-installability of Qt5 and Qt6 builds of QCoro

This change mostly affects packagers of QCoro. It is now possible to install both Qt5 and Qt6 versions of QCoro alongside each other without conflicting files. The shared libraries now contain the Qt version number in their name (e.g. libQCoro6Core.so) and header files are also located in dedicated subdirectories (e.g. /usr/include/qcoro6/{qcoro,QCoro}). User of QCoro should not need to do any changes to their codebase.

Complete re-work of CMake configuration

This change affects users of QCoro, as they will need to adjust the CMakeLists.txt of their projects. First, depending on whether they want to use the Qt5 or Qt6 version of QCoro, a different package must be used. Additionally, the list of QCoro components to use must be specified:

find_package(QCoro5 REQUIRED COMPONENTS Core Network DBus)

Finally, the target names to use in target_link_libraries have changed as well:

  • QCoro::Core
  • QCoro::Network
  • QCoro::DBus
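Putting it together, a hypothetical myapp target using the Qt5 build would be configured like this:

find_package(QCoro5 REQUIRED COMPONENTS Core Network)
target_link_libraries(myapp PRIVATE QCoro::Core QCoro::Network)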

The version-less QCoro namespace can be used regardless of whether using Qt5 or Qt6 build of QCoro. QCoro5 and QCoro6 namespaces are available as well, in case users need to combine both Qt5 and Qt6 versions in their codebase.

This change brings QCoro CMake configuration system to the same style and behavior as Qt itself, so it should now be easier to use QCoro, especially when supporting both Qt5 and Qt6.

Support for compiling QCoro with Clang against libstdc++

Until now, when the Clang compiler was detected, QCoro forced usage of LLVM's libc++ standard library. Coroutine support requires tight cooperation between the compiler and the standard library. Because Clang still considers its coroutine support experimental, it expects all coroutine-related types in the standard library to be located in the std::experimental namespace. In GNU's libstdc++, coroutines are fully supported and thus implemented in the std namespace. This requires a little bit of extra glue, which is now in place.

Full changelog

  • QCoro can now be built with Clang against libstdc++ (#38, #22)
  • Qt5 and Qt6 builds of QCoro are now co-installable (#36, #37)
  • Fixed early co_return not resuming the caller (#24, #35)
  • Fixed QProcess example (#34)
  • Test suite has been improved and extended (#29, #31)
  • Task move assignment operator checks for self-assignment (#27)
  • QCoro can now be built as a subdirectory inside another CMake project (#25)
  • Fixed QCoroCore/qcorocore.h header (#23)
  • DBus is disabled by default on Windows, Mac and Android (#21)

Thanks to everyone who contributed to QCoro!


Download

You can download QCoro 0.5.0 here or check the latest sources on the QCoro GitHub.

More About QCoro

If you are interested in learning more about QCoro, go read the documentation, look at the first release announcement, which contains a nice explanation and example, or watch the recording of my talk about C++20 coroutines and QCoro from this year's Akademy.

Kiwi TCMS Enterprise 10.5.1

Posted by Kiwi TCMS on January 06, 2022 11:45 AM

We're happy to announce Kiwi TCMS Enterprise version 10.5.1!

IMPORTANT: this is a small release which contains minor improvements and bug-fixes.

You can explore everything at https://public.tenant.kiwitcms.org!

Kiwi TCMS Enterprise v10.5.1-mt

  • Based on Kiwi TCMS v10.5

  • Update django-python3-ldap from 0.13.0 to 0.13.1

  • Update kiwitcms-github-app from 1.3.1 to 1.3.2

    Private images:

    quay.io/kiwitcms/enterprise         10.5.1-mt       c4d745bd914c   806MB
    

IMPORTANT: version tagged and Enterprise container images are available only to subscribers!

How to upgrade

Backup first! Then execute the commands:

cd path/containing/docker-compose/
docker-compose down
docker-compose pull
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Refer to our documentation for more details!

Happy testing!

---

If you like what we're doing and how Kiwi TCMS supports various communities please help us!

Community Blog monthly summary: December 2021

Posted by Fedora Community Blog on January 06, 2022 08:00 AM
Community Blog update

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think.

Stats

In December, we published 15 posts. The site had 5,262 visits from 3,445 unique viewers. 181 visits came from search engines, while 66 came from Fedora Magazine and 12 came from Twitter.

The most read post last month was EPEL 9 is now available with 150 views.

Badges

  • Community Messenger I (1 post)
    • carlwgeorge

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly summary: December 2021 appeared first on Fedora Community Blog.