Fedora People

CPE Weekly Update – Week 32 2022

Posted by Fedora Community Blog on August 12, 2022 10:00 AM
featured image with CPE team's name

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat (https://libera.chat/).

Week: 8th – 12th August 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board
Link to docs

Update

Fedora Infra

  • Some great Nest talks and discussions, check them out on replay!
  • Debugging instability in 32bit arm builders again. ;(
  • Rebalanced the s390x builders.
  • Business as usual

CentOS Infra including CentOS CI

Release Engineering

  • Mass branching yesterday (f37 split off rawhide, which is now f38)

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Git sources moved from git.centos.org to GitLab for c8s modules.

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • EPEL9 is up to 7233 (+106) packages from 3207 (+27) source packages
  • “State of EPEL” presentation at Nest conference, recording will be posted to YouTube at a later date
  • EPEL Survey has launched and is available through the end of August
  • epel-release has been improved with a Recommends on dnf-command(config-manager) to ensure the crb enabling script works out of the box (see the sketch after this list)
  • KDE Plasma updated from 5.23 to 5.24 (LTS release) in epel8-next and epel9-next
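As a hedged illustration of that epel-release improvement (assuming an EL9 machine; the crb helper ships with epel-release and drives dnf config-manager under the hood, which is why the Recommends matters):

$ sudo dnf install epel-release
$ sudo crb enable   # enables the CodeReady Builder repo many EPEL packages build against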

FMN replacement

Goal of this initiative

FMN (Fedora-Messaging-Notification) is a web application allowing users to create filters on messages sent to (currently) fedmsg and forward these as notifications via email or IRC.
The goal of the initiative is mainly to add fedora-messaging schemas, create a new UI for a better user experience, and create a new service to triage incoming messages to reduce the current message delivery lag. The community will benefit from speedier notifications based on their own preferences (IRC, Matrix, email), from unifying the Fedora Project onto one messaging service, and from human-readable results in Datagrepper.
Also, CPE tech debt will be significantly reduced by dropping the maintenance of fedmsg altogether.

Updates

  • Unit tests/coverage tests on frontend (Vue.js)
  • Auth/OIDC work on both frontend and backend
  • Initial backend connection via SQLAlchemy/fastAPI
  • Basic functionality of connecting to FASJSON
  • CI improvements and fixes

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 32 2022 appeared first on Fedora Community Blog.

Common GLib Programming Errors, Part Two: Weak Pointers

Posted by Michael Catanzaro on August 11, 2022 09:40 PM

This post is a sequel to Common GLib Programming Errors, where I covered four common errors: failure to disconnect a signal handler, misuse of a GSource handle ID, failure to cancel an asynchronous function, and misuse of main contexts in library or threaded code. Although there are many ways to mess up when writing programs that use GLib, I believe the first post covered the most likely and most pernicious… except I missed weak pointers. Sébastien pointed out that these should be covered too, so here we are.

Mistake #5: Failure to Disconnect Weak Pointer

In object-oriented languages, weak pointers are a safety improvement. The idea is to hold a non-owning pointer to an object that gets automatically set to NULL when that object is destroyed to prevent use-after-free vulnerabilities. However, this only works well because object-oriented languages have destructors. Without destructors, we have to deregister the weak pointer manually, and failure to do so is a disaster that will result in memory corruption that’s extremely difficult to track down. For example:

static void
a_start_watching_b (A *self,
                    B *b)
{
  // Keep a weak reference to b. When b is destroyed,
  // self->b will automatically be set to NULL.
  self->b = b;
  g_object_add_weak_pointer (G_OBJECT (b), (gpointer *) &self->b);
}

static void
a_do_something_with_b (A *self)
{
  if (self->b) {
    // Do something safely here, knowing that b
    // is assuredly still alive. This avoids a
    // use-after-free vulnerability if b is destroyed,
    // i.e. self->b cannot be dangling.
  }
}

Let’s say that the B in this example outlives the A, but the A failed to call g_object_remove_weak_pointer(). Then when the B is destroyed later, the memory that used to be occupied by self->b will get clobbered with NULL. Hopefully that will result in an immediate crash. If not, good luck trying to debug what’s going wrong when some innocent variable elsewhere in your program gets randomly clobbered. This often results in a frustrating wild goose chase when trying to track down what is going wrong (example).

The solution is to always disconnect your weak pointer. In most cases, your dispose function is the best place to do this:

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;
  g_clear_weak_pointer (&a->b);
  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Note that g_clear_weak_pointer() is equivalent to:

if (a->b) {
  g_object_remove_weak_pointer (G_OBJECT (a->b), (gpointer *) &a->b);
  a->b = NULL;
}

but you probably guessed that, because it follows the same pattern as the other clear functions that we’ve used so far.
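One gotcha the a_start_watching_b() example leaves open: if it can be called more than once, the earlier weak pointer registration must be cleared before &self->b is reused. A minimal sketch of a reentrant version (my illustration, not from the post):

static void
a_start_watching_b (A *self,
                    B *b)
{
  /* Drop any previous registration so the old B no longer
   * writes to &self->b when it is destroyed. */
  g_clear_weak_pointer (&self->b);

  self->b = b;
  g_object_add_weak_pointer (G_OBJECT (b), (gpointer *) &self->b);
}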

Friday’s Fedora Facts: 2022-32

Posted by Fedora Community Blog on August 11, 2022 08:46 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
Kieler Open Source und Linux Tage | Kiel, DE and virtual | 16–17 Sep | closes 14 Aug
Hardwear.io Netherlands | The Hague, NL | 24–28 Oct | closes 15 Aug
SeaGL | virtual | 4–5 Nov | closes 19 Aug
PyGeekle | virtual | 6–7 Sep | closes 24 Aug
EuroRust | Berlin, DE and virtual | 13–14 Oct | closes 28 Aug
Denver Dev Day | Denver, CO, US | 20–21 Oct | closes 2 Sep
Vue.js Live | London, UK and virtual | 28, 31 Oct | closes 5 Sep
EmacsConf | virtual | 3–4 Dec | closes 18 Sep
JavaLand | Brühl, DE | 21–23 Mar | closes 26 Sep
Python Web Conf | virtual | 14–17 Mar | closes 1 Oct

Help wanted

Upcoming test days

Meetings & events

Releases

Release | open bugs
F35 | 4112
F36 | 3573
F37 (pre-release) | 1476
Rawhide | 6283

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID | Component | Status
2079833 | cmake | NEW

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-08-23 — Beta freeze begins, Change complete (100% complete) deadline
  • 2022-09-13 — Current beta target date (early target date)

Changes

Status | Count
ASSIGNED | 13
MODIFIED | 17
ON_QA | 22
CLOSED | 1

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Emacs 28 | Self-Contained | FESCo #2845

Blockers

Bug ID | Component | Bug Status | Blocker Status
2117706 | kde-settings | NEW | Accepted (Beta)
2070823 | anaconda | ON_QA | Proposed (Beta)
2101229 | anaconda | NEW | Proposed (Beta)
1907030 | dnf | NEW | Proposed (Beta)
2107858 | dracut | POST | Proposed (Beta)
2109145 | polkit | ASSIGNED | Proposed (Beta)
2110801 | sddm | NEW | Proposed (Beta)

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-32 appeared first on Fedora Community Blog.

Next Open NeuroFedora meeting: 15 August 1300 UTC

Posted by The NeuroFedora Blog on August 11, 2022 08:44 PM
Photo by William White on Unsplash

Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 15 August at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2022-08-15'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

Type support: getting started with syslog-ng 4.0

Posted by Peter Czanik on August 11, 2022 10:05 AM

Version 4.0 of syslog-ng is right around the corner. It hasn’t yet been released; however, you can already try some of its features. The largest and most interesting change is type support. Right now, name-value pairs within syslog-ng are represented as text, even if the PatternDB or JSON parsers could see the actual type of the incoming data. This does not change, but starting with 4.0, syslog-ng will keep the type information and use it correctly on the destination side. This makes your life easier, for example when you store numbers in Elasticsearch or in other type-aware storage.

From this blog, you can learn how type support makes your life easier, and it helps you give it a test drive on your own hosts: https://www.syslog-ng.com/community/b/blog/posts/type-support-getting-started-with-syslog-ng-4-0
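For a concrete flavor of what changes, here is a minimal, hedged configuration sketch (source names, port, and URL are my own illustration, not from the post): with 4.0's type support, a numeric field extracted by json-parser() should stay numeric all the way into a type-aware destination such as Elasticsearch.

# syslog-ng 4.0 sketch: JSON in over TCP, typed fields out to Elasticsearch
source s_net { network(port(5514), transport("tcp")); };
parser p_json { json-parser(prefix(".json.")); };
destination d_es {
    elasticsearch-http(
        url("http://localhost:9200/_bulk")
        index("test-syslog-ng")
        type("")
    );
};
log { source(s_net); parser(p_json); destination(d_es); };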

syslog-ng logo

The new XWAYLAND extension is available

Posted by Peter Hutterer on August 11, 2022 06:50 AM

As of xorgproto 2022.2, we have a new X11 protocol extension. First, you may rightly say "whaaaat? why add new extensions to the X protocol?" in a rather unnecessarily accusing way, followed up by "that's like adding lipstick to a dodo!". And that's not completely wrong, but nevertheless, we have a new protocol extension to the ... [checks calendar] almost 40 year old X protocol. And that extension is, ever creatively, named "XWAYLAND".

If you recall, Xwayland is a different X server than Xorg. It doesn't try to render directly to the hardware; instead it's a translation layer between the X protocol and the Wayland protocol so that X clients can continue to function on a Wayland compositor. The X application is generally unaware that it isn't running on Xorg, and Xwayland (and the compositor) will do their best to accommodate all the quirks that the application expects because it only speaks X. In a way, it's like calling a restaurant and ordering a burger because the person answering speaks American English. Without realising that you just called the local fancy French joint and now the chefs will have to make a burger for you, totally without avec.

Anyway, sometimes it is necessary for a client (or a user) to know whether the X server is indeed Xwayland. Previously, this was done through heuristics: the xisxwayland tool checks for XRandR properties, the xinput tool checks for input device names, and so on. These heuristics are just that, though, so they can become unreliable as Xwayland gets closer to emulating Xorg or things just change. And properties in general are problematic since they could be set by other clients. To solve this, we now have a new extension.

The XWAYLAND extension doesn't actually do anything, it's the bare minimum required for an extension. It just needs to exist and clients only need to XQueryExtension or check for it in XListExtensions (the equivalent to xdpyinfo | grep XWAYLAND). Hence, no support for Xlib or libxcb is planned. So of all the nightmares you've had in the last 2 years, the one of misidentifying Xwayland will soon be in the past.
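Since the post notes that clients only need XQueryExtension, here is a hedged sketch of the check in plain Xlib (my illustration; build with cc check.c -lX11):

#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    int opcode, event, error;
    Display *dpy = XOpenDisplay(NULL);

    if (!dpy)
        return 1;
    /* The extension does nothing; its mere presence means Xwayland. */
    if (XQueryExtension(dpy, "XWAYLAND", &opcode, &event, &error))
        printf("This X server is Xwayland\n");
    else
        printf("This X server is not Xwayland\n");
    XCloseDisplay(dpy);
    return 0;
}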

Give nothing, expect nothing: GitLab’s the latest punching bag for entitled users

Posted by Joe Brockmeier on August 10, 2022 03:26 PM
What do Docker, GitLab, and Red Hat have in common? Aside from various levels of participation in open source, they've all been punching bags over the past few years for non-paying users angry that they've taken some freebies off the table. When Docker had the temerity to introduce limits for free users pulling containers from … Continue reading Give nothing, expect nothing: GitLab’s the latest punching bag for entitled users

Hibernation in Fedora Workstation

Posted by Fedora Magazine on August 10, 2022 08:07 AM

This article walks you through the manual setup for hibernation in Fedora Linux 36 Workstation using BTRFS and is based on a gist by eloylp on github.

Goal and Rationale

Hibernation stores the current runtime state of your machine, effectively the contents of your RAM, onto disk and does a clean shutdown. Upon next boot this state is restored from disk to memory such that everything, including open programs, is how you left it.

Fedora Workstation uses ZRAM. This is a sophisticated approach to swap, using compression inside a portion of your RAM to avoid the slower on-disk swap files. Unfortunately this means you don’t have persistent space to move your RAM contents to upon hibernation, when powering off your machine.

How it works

The technique configures systemd and dracut to store and restore the contents of your RAM in a temporary swap file on disk. The swap file is created just before and removed right after hibernation to avoid trouble with ZRAM. A persistent swap file is not recommended in conjunction with ZRAM, as it creates some confusing problems compromising your system’s stability.

A word on compatibility and expectations

Hibernation following this guide might not work flawlessly on your particular machine(s). Due to possible shortcomings of certain drivers you might experience glitches like non-working wifi or display after resuming from hibernation. In that case feel free to reach out in the comment section of the gist on github, or try the tips from the troubleshooting section at the bottom of this article.

The changes introduced in this article are linked to the systemd hibernation.service and hibernation.target units and hence won’t execute on their own nor interfere with your system if you don’t initiate a hibernation. That being said, if it does not work it still adds some small bloat which you might want to remove.

Hibernation in Fedora Workstation

The first step is to create a btrfs subvolume to contain the swap file.

$ btrfs subvolume create /swap

In order to calculate the size of your swap file, use swapon to get the size of your zram device.

$ swapon
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition   8G   0B  100

In this example the machine has 16G of RAM and an 8G zram device. ZRAM stores roughly double the amount of system RAM compressed in a portion of your RAM. Let that sink in for a moment. This means that in total the memory of this machine can hold 8G * 2 (the zram device, uncompressed) + 8G (the rest of the RAM), which equals 24G of uncompressed data.
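As a sanity check, the same arithmetic as a tiny shell sketch (the 16G/8G figures are from this example; substitute your own):

$ RAM=16; ZRAM=8
$ echo "$(( 2*ZRAM + (RAM - ZRAM) ))G"
24G

Create and configure the swapfile using the following commands.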

$ touch /swap/swapfile
# Disable Copy On Write on the file
$ chattr +C /swap/swapfile
$ fallocate --length 24G /swap/swapfile
$ chmod 600 /swap/swapfile 
$ mkswap /swap/swapfile

Modify the dracut configuration and rebuild your initramfs to include the resume module, so it can later restore the state at boot.

$ cat <<-EOF | sudo tee /etc/dracut.conf.d/resume.conf
add_dracutmodules+=" resume "
EOF
$ dracut -f

In order to configure grub to tell the kernel to resume from hibernation using the swapfile, you need the UUID and the physical offset.

Use the following command to determine the UUID of the swap file and take note of it.

$ findmnt -no UUID -T /swap/swapfile
dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8

Calculate the correct offset. In order to do this you’ll unfortunately need gcc and the source of the btrfs_map_physical tool, which computes the physical offset of the swapfile on disk. Invoke gcc in the directory you placed the source in and run the tool.

$ gcc -O2 -o btrfs_map_physical btrfs_map_physical.c
$ ./btrfs_map_physical /path/to/swapfile

FILE OFFSET  EXTENT TYPE  LOGICAL SIZE  LOGICAL OFFSET  PHYSICAL SIZE  DEVID  PHYSICAL OFFSET
0            regular      4096          2927632384      268435456      1      <4009762816>
4096         prealloc     268431360     2927636480      268431360      1      4009766912
268435456    prealloc     268435456     3251634176      268435456      1      4333764608
536870912    prealloc     268435456     3520069632      268435456      1      4602200064
805306368    prealloc     268435456     3788505088      268435456      1      4870635520
1073741824   prealloc     268435456     4056940544      268435456      1      5139070976
1342177280   prealloc     268435456     4325376000      268435456      1      5407506432
1610612736   prealloc     268435456     4593811456      268435456      1      5675941888

The first value in the PHYSICAL OFFSET column is the relevant one. In the above example it is 4009762816.

Take note of the pagesize you get from getconf PAGESIZE.

Calculate the kernel resume_offset by dividing the physical offset by the pagesize. In this example that is 4009762816 / 4096 = 978946.
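The same calculation as a small shell sketch (values from this example; use your own physical offset):

$ PHYSICAL_OFFSET=4009762816
$ echo $(( PHYSICAL_OFFSET / $(getconf PAGESIZE) ))
978946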

Update your grub configuration file and add the resume and resume_offset kernel cmdline parameters.

$ grubby --args="resume=UUID=dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8 resume_offset=978946" --update-kernel=ALL
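To double-check that both arguments landed on all kernel entries, you can (my suggestion, not part of the original guide) inspect them:

$ sudo grubby --info=ALL | grep resume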

The created swapfile is only used in the hibernation stage of system shutdown and boot, and is hence not configured in fstab. Systemd units control this behavior, so create the two units hibernate-preparation.service and hibernate-resume.service.

$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-preparation.service
[Unit]
Description=Enable swap file and disable zram before hibernate
Before=systemd-hibernate.service

[Service]
User=root
Type=oneshot
ExecStart=/bin/bash -c "/usr/sbin/swapon /swap/swapfile && /usr/sbin/swapoff /dev/zram0"

[Install]
WantedBy=systemd-hibernate.service
EOF
$ systemctl enable hibernate-preparation.service
$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-resume.service
[Unit]
Description=Disable swap after resuming from hibernation
After=hibernate.target

[Service]
User=root
Type=oneshot
ExecStart=/usr/sbin/swapoff /swap/swapfile

[Install]
WantedBy=hibernate.target
EOF
$ systemctl enable hibernate-resume.service

Systemd does memory checks on login and hibernation. To avoid issues when moving the memory back and forth between swapfile and zram, disable some of them.

$ mkdir -p /etc/systemd/system/systemd-logind.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-logind.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF
$ mkdir -p /etc/systemd/system/systemd-hibernate.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-hibernate.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF

Reboot your machine for the changes to take effect. The following SELinux configuration won’t work if you don’t reboot first.

SELinux won’t like hibernation attempts just yet. Change that with a new policy. An easy although “brute” approach is to initiate hibernation and use the audit log of this failed attempt via audit2allow. The following command will fail, returning you to a login prompt.

systemctl hibernate

After you’ve logged in again, check the audit log, compile a policy and install it. The -b option filters for audit log entries from last boot. The -M option compiles all filtered rules into a module, which is then installed using semodule -i.

$ audit2allow -b
#============= systemd_sleep_t ==============
allow systemd_sleep_t unlabeled_t:dir search;
$ cd /tmp
$ audit2allow -b -M systemd_sleep
$ semodule -i systemd_sleep.pp

Check that hibernation is working via systemctl hibernate again. After resume check that ZRAM is indeed the only active swap device.

$ swapon
NAME       TYPE      SIZE USED PRIO
/dev/zram0 partition   8G   0B  100

You now have hibernation configured.

GNOME Shell hibernation integration

You might want to add a hibernation button to the GNOME Shell “Power Off / Logout” section. Check out the extension Hibernate Status Button to do so.

Troubleshooting

A first place to troubleshoot any problems is through journalctl -b. Have a look around the end of the log, after trying to hibernate, to pinpoint log entries that tell you what might be wrong.

Another source of information on errors is the Problem Reporting tool, especially for problems that are not common but specific to your hardware configuration. Have a look at it before and after attempting hibernation and see if something comes up. Follow up on any issues via Bugzilla and see if others experience similar problems.

Revert the changes

To reverse the changes made above, follow this check-list:

  • remove the swapfile
  • remove the swap subvolume
  • remove the dracut configuration and rebuild dracut
  • remove kernel cmdline args via grubby --remove-args=
  • disable and remove hibernation preparation and resume services
  • remove systemd overrides for systemd-logind.service and systemd-hibernate.service
  • remove SELinux module via semodule -r systemd_sleep

Credits and Additional Resources

This article is a community effort based primarily on the work of eloylp. As the author of this article I’d like to make transparent that I’ve participated in the discussion to advance the gist behind this, but many more minds contributed to make this work. Make certain to check out the discussion on github.

There are already some Ansible playbooks and shell scripts to automate the process depicted in this guide. For example, check out the shell scripts by krokwen and pietryszak or the Ansible playbook by jorp.

See the arch wiki for the full guide on how to calculate the swapfile offset.

Building and Running the Linux Kernel Selftests on AARCH64/ Fedora

Posted by Adam Young on August 09, 2022 10:42 PM

I won’t go into checking out or building the Kernel, as that is covered elsewhere. Assuming you have a buildable Kernel, you can build the tests with:

make -C tools/testing/selftests

But you are probably going to see errors like this:

ksm_tests.c:7:10: fatal error: numa.h: No such file or directory
    7 | #include <numa.h>
      |          ^~~~~~~~
compilation terminated.

The userland test suites use several libraries and need headers to compile the tests that call those libraries. Here is the yum line I ran to get the dependencies I needed for my system:

sudo yum install libmnl-devel fuse-devel numactl-devel libcap-ng-devel alsa-lib-devel

With those installed, the make line succeeded.

Running the test like this CRASHED THE SYSTEM. Don’t do this.

 make -C tools/testing/selftests run_tests

A more sensible test to run is the example on the Docs page:

# make -C tools/testing/selftests TARGETS=ptrace run_tests
make: Entering directory '/root/linux/tools/testing/selftests'
make --no-builtin-rules ARCH=arm64 -C ../../.. headers_install
make[1]: Entering directory '/root/linux'
  INSTALL ./usr/include
make[1]: Leaving directory '/root/linux'
make[1]: Entering directory '/root/linux/tools/testing/selftests/ptrace'
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory '/root/linux/tools/testing/selftests/ptrace'
make[1]: Entering directory '/root/linux/tools/testing/selftests/ptrace'
TAP version 13
1..3
# selftests: ptrace: get_syscall_info
# TAP version 13
# 1..1
# # Starting 1 tests from 1 test cases.
# #  RUN           global.get_syscall_info ...
# #            OK  global.get_syscall_info
# ok 1 global.get_syscall_info
# # PASSED: 1 / 1 tests passed.
# # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
ok 1 selftests: ptrace: get_syscall_info
# selftests: ptrace: peeksiginfo
# PASS
ok 2 selftests: ptrace: peeksiginfo
# selftests: ptrace: vmaccess
# TAP version 13
# 1..2
# # Starting 2 tests from 1 test cases.
# #  RUN           global.vmaccess ...
# #            OK  global.vmaccess
# ok 1 global.vmaccess
# #  RUN           global.attach ...


# # attach: Test terminated by timeout
# #          FAIL  global.attach
# not ok 2 global.attach
# # FAILED: 1 / 2 tests passed.
# # Totals: pass:1 fail:1 xfail:0 xpass:0 skip:0 error:0
not ok 3 selftests: ptrace: vmaccess # exit=1
make[1]: Leaving directory '/root/linux/tools/testing/selftests/ptrace'
make: Leaving directory '/root/linux/tools/testing/selftests'

Next up is to write my own stub test.
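As a starting point, here is a minimal stub sketch (my own illustration, not the author's code) using the kernel's kselftest harness; it assumes the file lives in a directory directly under tools/testing/selftests/ so the relative include resolves:

// stub_test.c: smallest possible kselftest using the harness
#include "../kselftest_harness.h"

TEST(stub_passes)
{
        /* Trivial assertion: the stub should always pass. */
        ASSERT_EQ(1, 1);
}

TEST_HARNESS_MAIN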

Discogs

Posted by Peter Czanik on August 09, 2022 12:16 PM

Last week I became a Discogs user. Why? I have been browsing the site for years to find information on albums. Recently I also needed a solution to create an easy to access database of my CD/DVD collection. Right now I am not interested in the marketplace function of Discogs, but that might change in the long term :-)

Information overload

For many years when I searched for an album, the first few hits were from YouTube and Wikipedia. Nowadays the first few results are often from Discogs. While Wikipedia sometimes provides some interesting background information about the creation of an album, Discogs has more structured and uniform information about albums. It also lists the many variants of the same album. Even for artists where I thought I had every album in my collection (like Mike Oldfield), I can find albums I have never heard about before. It is also easy to see who a given artist was working with, and using TIDAL I can instantly listen to some really interesting (or awful…) music.

My collection

I only have a few hundred CDs, but that is already more than I can remember. When I am in a CD shop, I happily buy new CDs from artists I have never heard about before, as I can be sure that I do not already have that disc. However, when it comes to Solaris, Mike Oldfield or Vangelis, I can never be sure if I already have an album. Of course I tried some DIY methods, but it was difficult to maintain the lists and they were never at hand when I really needed them.

Discogs provides an easy to use mobile application to scan bar codes on the back of CDs. This can speed up adding new items to my collection tremendously. Of course not all bar codes are available in Discogs, but until now there was only one CD that I could not find at all. The more difficult part is when it lists dozens of disks for the same bar code: various (re)prints of the same album from around the world. I must admit that I am lazy here and just take an educated guess… I can use the same mobile app to check my collection when away from home.

A few weeks ago I realized that I have a duplicate album, and while entering my collection into Discogs, I discovered another one. I have no plans for selling them, I already know which of my friends would be happy to receive them. But in the long term it could be interesting to buy a few CDs which are otherwise impossible to buy here in Hungary.

Discogs also gives a price estimate for most CDs. It was kind of surprising: some of my most expensive disks are not worth too much anymore, as they were printed in large numbers. On the other hand I have a large collection of Hungarian progrock music, and the price of those is much higher than what I paid for them originally.

You can find my collection at https://www.discogs.com/user/pczanik/collection. The list is constantly growing, as I am still just at less than a half of my collection. The next time I visit my favorite CD shop, Periferic Records - Stereo Kft., I will have an easier job when I see a CD from a familiar artist :-)

flower

Fedora Linux 38 development schedule

Posted by Fedora Community Blog on August 09, 2022 08:00 AM

Fedora Linux 37 branches from Rawhide today. While there’s still a lot of work before the Fedora Linux 37 release in October, this marks the beginning of the Fedora Linux 38 development cycle. The work you do in Rawhide will be in the Fedora Linux 38 release in April.

With that in mind, here are some important milestones:

  • Wed 2022-12-21: Proposal submission deadline (Changes requiring infrastructure changes)
  • Tue 2022-12-27: Proposal submission deadline (Changes requiring mass rebuild & System-Wide Changes)
  • Tue 2023-01-17: Proposal submission deadline (Self Contained Changes)
  • Tue 2023-02-07:
    • Change Checkpoint: Completion deadline (testable)
    • Branch Fedora Linux 38 from Rawhide
  • Tue 2023-02-21:
    • Change Checkpoint: 100% Code Complete Deadline
    • Beta Freeze begins
  • Tue 2023-03-14: Beta release (early target date)
  • Tue 2023-03-21: Beta release (target date #1)
  • Tue 2023-04-04: Final Freeze begins
  • Tue 2023-04-18: Final release (early target date)
  • Tue 2023-04-25: Final release (target date #1)

Of course, the schedule is subject to change. The schedules published to fedorapeople.org are always the most up-to-date.

As always, if your team needs additions, removals, or changes, you can file a ticket in the Pagure repo.

The post Fedora Linux 38 development schedule appeared first on Fedora Community Blog.

SSH from RHEL 9 to RHEL 5 or RHEL 6

Posted by Richard W.M. Jones on August 08, 2022 09:06 AM

RHEL 9 no longer lets you ssh to RHEL ≤ 6 hosts out of the box. You can weaken security of the whole system but there’s no easy way to set security policy per remote host. Here’s how to set up ssh so it works for a RHEL 5 or RHEL 6 host:

First edit your .ssh/config file, adding an entry for the host:

Host rhel5or6-host
KexAlgorithms +diffie-hellman-group14-sha1
MACs +hmac-sha1
HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedKeyTypes +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa

That’s not enough on its own, because RHEL 9 also maims the openssl library by disabling SHA1 support by default. To fix that, create /var/tmp/openssl.cnf with:

.include /etc/ssl/openssl.cnf
[openssl_init]
alg_section = evp_properties
[evp_properties]
rh-allow-sha1-signatures = yes

Now you can ssh to RHEL 5 or RHEL 6 hosts like this:

OPENSSL_CONF=/var/tmp/openssl.cnf ssh rhel5or6-host
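If you do this often, a tiny hedged convenience wrapper (my suggestion, not from the post) saves typing; save it as ~/bin/ssh-legacy and make it executable:

#!/bin/sh
# Run ssh with the SHA1-permitting OpenSSL config from /var/tmp/openssl.cnf
OPENSSL_CONF=/var/tmp/openssl.cnf exec ssh "$@"

Then ssh-legacy rhel5or6-host behaves like the command above.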

Thanks Laszlo Ersek for working out most of this. Related bugs:

2064740 – RFE: Make it easier to configure LEGACY policy per service or per host

2062360 – RFE: Virt-v2v should replace hairy “enable LEGACY crypto” advice with a more targeted mechanism

Tutorial: Installing and Configuring Cilium on Kubernetes – Part 2

Posted by Fedora fans on August 08, 2022 06:30 AM
cilium-kubernetes-ebpf

Continuing the series of posts on installing and configuring Cilium on Kubernetes, in this second part we are going to install Cilium. The installation method depends on how your Kubernetes cluster is deployed; per the Cilium documentation, you should pick the method and tooling appropriate to your setup. In this post we will bring up a Kubernetes cluster using Minikube and then install Cilium on it. To do so, simply run the following command:

minikube start --network-plugin=cni --cni=false --memory 10240 --cpus 6

Note: for more information about installing Minikube, see the post “Installing Kubernetes with Minikube”.

A sample output of this command is shown in the image below:

kubernetes-cluster

Installing the Cilium CLI:

Now we need to install the latest version of the Cilium CLI. The Cilium CLI can be used to install Cilium, check the status of the installation, and enable or disable features such as clustermesh and Hubble. To install the Cilium CLI, run the following commands on Linux:

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/master/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}

Installing Cilium:

In general, the following command installs Cilium on a Kubernetes cluster. It installs Cilium on the cluster in your current kubectl context:

cilium install

Verifying the Cilium installation:

To verify that Cilium was installed correctly, use the following command:

cilium status --wait

A sample output of this command is shown in the image below:

cilium-status

To verify that the network is working correctly, use the following command:

cilium connectivity test

A sample output of this command is shown in the image below:

cilium connectivity test

To list the Cilium agents, use the following command:

kubectl -n kube-system get pods -l k8s-app=cilium

Now, knowing the name of a Cilium pod (any one of them), you can enter it and run Cilium commands. For example, to list the endpoints, use the following command:

kubectl -n kube-system exec cilium-ff8xd -- cilium endpoint list

A sample output of this command is shown in the image below:

cilium endpoint list

Since Cilium is based on eBPF, we can go one layer deeper and look at the eBPF policies:

kubectl -n kube-system exec cilium-ff8xd -- cilium bpf policy get --all

A sample output of this command is shown in the image below:

cilium policy

Note that the policy numbers correspond to the endpoint IDs we listed above.

To be continued…

The post “Tutorial: Installing and Configuring Cilium on Kubernetes – Part 2” first appeared on Fedora Fans (طرفداران فدورا).

Episode 335 – Bull*&$% security ideas

Posted by Josh Bressers on August 08, 2022 12:00 AM

Josh and Kurt talk about a tweet from @kmcquade3 asking the question “What’s a concept in security that is generally accepted as true but is actually bull%$#*?” How many of the replies make sense? Most of them do. We go over some of the best replies as fast as we can.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_335_Bull_security_ideas.mp3

Show Notes

Friday’s Fedora Facts: 2022-31

Posted by Fedora Community Blog on August 05, 2022 10:10 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
Hardwear.io Netherlands | The Hague, NL | 24–28 Oct | closes 15 Aug
SeaGL | virtual | 4–5 Nov | closes 19 Aug
PyGeekle | virtual | 6–7 Sep | closes 24 Aug
EuroRust | Berlin, DE and virtual | 13–14 Oct | closes 28 Aug
Denver Dev Day | Denver, CO, US | 20–21 Oct | closes 2 Sep
Vue.js Live | London, UK and virtual | 28, 31 Oct | closes 5 Sep
Python Web Conf | virtual | 14–17 Mar | closes 1 Oct

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID | Component | Status
2079833 | cmake | NEW

Meetings & events

Fedora Hatches

Hatches are local, in-person events to augment Nest With Fedora. Here are the upcoming Hatch events.

Date | Location
11 Aug | Brno, CZ

Releases

Release | open bugs
F35 | 4117
F36 | 3522
Rawhide | 7927

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-08-09 — F37 branches from Rawhide, Change complete (testable) deadline
  • 2022-08-23 — Beta freeze begins, Change complete (100% complete) deadline
  • 2022-09-13 — Current beta target date (early target date)

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Preset All Systemd Units on First Boot | Self-Contained | Approved
Public release of the Anaconda Web UI preview image | Self-Contained | Approved
BIND 9.18 | Self-Contained | Approved
SELinux Parallel Autorelabel | Self-Contained | Approved
ibus-libpinyin 1.13 | Self-Contained | Approved
z13 as the Baseline for IBM Z Hardware | Self-Contained | Approved for F38
Haskell GHC 8.10.7 & Stackage LTS 18.28 | Self-Contained | Approved
Emacs 28 | Self-Contained | FESCo #2845
Mumble 1.4 | Self-Contained | Approved

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-31 appeared first on Fedora Community Blog.

CPE Weekly Update – Week 31 2022

Posted by Fedora Community Blog on August 05, 2022 10:00 AM
featured image with CPE team's name

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat (https://libera.chat/).

Week: 1st – 5th August 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board
Link to docs

Update

Fedora Infra

  • Unblocked osbuild in production, should be working now. (Uses a script to keep the API IP updated in the firewalls.)
  • OCP4 cluster API uses a valid cert now (for webhooks/external oc)
  • Disabled systemd-oomd in some places (koji hubs in particular)
  • Barcamp at Nest on Saturday
  • Some sysadmin-main additions: Nils, Michal, Ryan

CentOS Infra including CentOS CI

  • Duffy CI is now live (so hotfixes are also coming, thanks to Nils)
  • Preparing CBS/koji upgrade to 1.29 (would unblock other RFEs on tracker)

Release Engineering

  • FTBFS bugs filed against packages failing to build
  • Containers: rawhide fixed/updating, updated f35/f36

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Held meetings about, and started code for, moving module sources from git.centos.org to GitLab.
  • New installation ISOs for CentOS Linux 7 that fix libtimezonemap (and other) issues.
  • Rewrote the errata announcement scripts for CentOS Linux 7 to use new endpoints after the decommissioning of the search/rs/ API on access.redhat.com.

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • EPEL9 is up to 7127 (+141) packages from 3180 (+97) source packages
  • Prepared the EPEL survey; it will be promoted at Fedora Nest
  • Provided a fix to nagios-plugins-check-updates to improve distro compatibility

FMN replacement

Goal of this initiative

FMN (Fedora-Messaging-Notification) is a web application allowing users to create filters on messages sent to (currently) fedmsg and forward these as notifications via email or IRC.
The goal of the initiative is mainly to add fedora-messaging schemas, create a new UI for a better user experience, and create a new service to triage incoming messages to reduce the current message delivery lag. The community will benefit from speedier notifications based on their own preferences (IRC, Matrix, email), from unifying the Fedora Project onto one messaging service, and from human-readable results in Datagrepper.
Also, CPE tech debt will be significantly reduced by dropping the maintenance of fedmsg altogether.

Updates

  • Frontend auth being developed
  • Access token and refresh token
  • Making pages require auth, if user is not authenticated, redirect to login
  • Backend auth still being developed (tests)
  • Mockups for UI – bootstrap/HTML/CSS
  • Agile ceremonies being planned

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 31 2022 appeared first on Fedora Community Blog.

Untitled Post

Posted by Zach Oglesby on August 04, 2022 02:36 PM

It seems like Microsoft is killing it with the Surface lineup. I am on vacation and I have seen more Surface products than MacBooks or Dell laptops combined. This is obviously not backed up by sales data, but the experience is real if not puzzling.

PHP version 8.0.22 and 8.1.9

Posted by Remi Collet on August 04, 2022 01:05 PM

RPMs of PHP version 8.1.9 are available in remi-modular repository for Fedora ≥ 34 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php81 repository for EL 7.

RPMs of PHP version 8.0.22 are available in remi-modular repository for Fedora ≥ 34 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in remi-php80 repository for EL 7.

The modules for EL-9 are now available for x86_64 and aarch64.

No security fix this month, so no update for version 7.4.30.

PHP version 7.3 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.1 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.1
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php81
yum update php\*

Parallel installation of version 8.1 as Software Collection

yum install php81

Replacement of default PHP by version 8.0 installation (simplest):

dnf module reset php
dnf module enable php:remi-8.0
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php80
yum update

Parallel installation of version 8.0 as Software Collection

yum install php80

Replacement of default PHP by version 7.4 installation (simplest):

dnf module reset php
dnf module enable php:remi-7.4
dnf update php\*

or, the old EL-7 way:

yum-config-manager --enable remi-php74
yum update

Parallel installation of version 7.4 as Software Collection

yum install php74

And soon in the official updates:

To be noted:

  • EL-8 RPMs are built using RHEL-8.6
  • EL-7 RPMs are built using RHEL-7.9
  • EL-7 builds now use libicu69 (version 69.1)
  • EL builds now use oniguruma5php (version 6.9.5, instead of the outdated system library)
  • oci8 extension now uses Oracle Client version 21.6
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php74 / php80 / php81)

Kiwi TCMS 11.4

Posted by Kiwi TCMS on August 04, 2022 12:56 PM

We're happy to announce Kiwi TCMS version 11.4!

IMPORTANT: This is a medium sized release which contains security related updates, multiple improvements, database and API changes, new settings, bug fixes and new translations!

You can explore everything at https://public.tenant.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

---

Upstream container images (x86_64):

kiwitcms/kiwi   latest  8c8356c0268d    610MB

IMPORTANT: version tagged and multi-arch container images are available only to subscribers!

Changes since Kiwi TCMS 11.3

Security

Improvements

  • Update bleach from 5.0.0 to 5.0.1
  • Update django-colorfield from 0.6.3 to 0.7.2
  • Update django-extensions from 3.1.5 to 3.2.0
  • Update django-tree-queries from 0.9.0 to 0.11.0
  • Update jira from 3.2.0 to 3.3.1
  • Update markdown from 3.3.6 to 3.4.1
  • Update mysqlclient from 2.1.0 to 2.1.1
  • Update python-gitlab from 3.3.0 to 3.7.0
  • Update node_modules/marked from 4.0.14 to 4.0.18
  • Relax docutils requirement. Use latest version
  • Add template block which will allow logo customizations (Ivajlo Karabojkov)
  • Don't show PLUGINS menu when no plugins are installed. References Issue #2729
  • Add information about Kiwi TCMS user to 1-click bug reports. Closes Issue #2591
  • Use a better icon to signify a manual test inside the user interface
  • Change UserAdmin to be permission based instead of being role-based. Fixes Issue #2496
  • Allow post-processing of automatically created issues. Closes Issue #2383
  • Allow more customization over issue type discovery for Jira. Closes Issue #2833
  • Allow more customization over project discovery for Jira
  • Allow more customization over Redmine tracker. Closes Issue #2382
  • Allow DB settings to be read from Docker Secret files. Fixes Issue #2606
  • Add filter on TestRun page to show test executions assigned to the current user. Closes Issue #333
  • Add URL for creating new TestRun from a TestPlan. The format is /runs/from-plan/<plan-id>/. Closes Issue #274
  • Add bug.Severity attribute which is fully customizable. Closes Issue #2703
  • Update documentation around TCMS_ environment variables used by automation plugins
  • Update documentation to denote that pytest plugin is now generally available
  • Document necessary permissions for adding new users. References Issue #2496

Database

  • New migration for bug.Severity model

Settings

API

  • If the default_tester field is not specified for the TestRun.create() method, the currently logged-in user is used.
  • Return value for method TestExecution.filter() now contains fields expected_duration and actual_duration. Closes Issue #1924
  • Return value for method Bug.filter() now contains fields severity__name, severity__icon and severity__color

Bug fixes

  • Adjust field name when rendering test execution on TestRun page. Fixes Issue #2794
  • Render rich text editor preview via backend API:
    • Makes display on HTML pages and editor preview the same. Fixes Issue #2659
    • Fixes a bug with markdown rendered in JavaScript. Fixes Issue #2711
  • Stop propagation of HTML unescape which causes display issues with code snippets that contain XML values. Fixes Issue #2800
  • Show bug text only when creating new records, not when editing
  • Properly display & validate related form fields when editing bugs
  • Don't send duplicate emails when editing bugs. Fixes Issue #2782

Refactoring and testing

  • Convert two assignment statements to augmented source code. Closes Issue #2610 (Markus Elfring)

  • Rename method IssueTrackerType.report_issue_from_testexecution():

    • Rename to _report_issue() which returns tuple of (object, str)
    • In case new issue was not created automatically and the method falls back to manual creation the return value will be (None, str)
    • report_issue_from_testexecution() will call _report_issue() internally and handle the change in return type

    Note

    • This change is backwards compatible!
    • For customized issue tracker integration you will have to apply the same changes to your customized code if you wish new functionality, like post-processing of automatically created issues, to work.
  • Add tests for backup & restore commands. Closes Issue #2695

  • Update versions of several CI tools

  • Updates around new version of pylint

  • Use codecov-action to upload coverage results

  • Remove setuptools and other workarounds in tests

  • Don't special case dependencies which already provide wheel packages

  • Workaround an issue with setuptools_git_archive introduced by jira==3.2.0

  • Workaround the fact that django-ranged-response doesn't provide wheels

  • Report test results via kiwitcms-django-plugin. Closes Issue #1757

Kiwi TCMS Enterprise v11.4-mt

  • Based on Kiwi TCMS v11.4

  • Update django-python3-ldap from 0.13.1 to 0.15.2

  • Update django-ses from 3.0.1 to 3.1.0

  • Update dj-database-url from 0.5.0 to 1.0.0

  • Add more icons for extra GitHub login backends

  • Add images for various Google login backends

    Private images:

    quay.io/kiwitcms/enterprise         11.4-mt (aarch64)       f5720d030612    03 Aug 2022     862MB
    quay.io/kiwitcms/enterprise         11.4-mt (x86_64)        8ffd5a64a4d1    03 Aug 2022     829MB
    quay.io/kiwitcms/version            11.4 (aarch64)          62207c605dcf    03 Aug 2022     639MB
    quay.io/kiwitcms/version            11.4 (x86_64)           8c8356c0268d    03 Aug 2022     610MB
    

IMPORTANT: version tagged, multi-arch and Enterprise container images are available only to subscribers!

How to upgrade

Backup first! Then execute the commands:

cd path/containing/docker-compose/
docker-compose down
docker-compose pull
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py upgrade

Refer to our documentation for more details!

Happy testing!

---

If you like what we're doing and how Kiwi TCMS supports various communities please help us grow!

Toolbx @ Community Central

Posted by Debarshi Ray on August 04, 2022 09:38 AM

At 15:00 UTC today, I will be talking about Toolbx on a new episode of Community Central. It will be broadcast live on BlueJeans Events (formerly Primetime) and the recording will be available on YouTube. I am looking forward to seeing some friendly faces in the audience.


Community Blog monthly summary: July 2022

Posted by Fedora Community Blog on August 04, 2022 08:00 AM
Community Blog update

This is the latest in our monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think.

Stats

In July, we published 17 posts. The site had 5,500 visits from 3,548 unique viewers. 1,850 visits came from search engines, while 111 came from Twitter and 48 came from Phoronix.

The most read post last month was Fedora Linux 37 development schedule with 776 views. The most read post published last month was Nest With Fedora 2022 registration now open with 252 views.

Badges

Your content here!

The Community Blog is the place to publish community-facing updates on what you’re working on in Fedora. The process is easy, so submit early and submit often.

The post Community Blog monthly summary: July 2022 appeared first on Fedora Community Blog.

GTK[3|4] GtkScrollbar for writer documents

Posted by Caolán McNamara on August 03, 2022 04:03 PM

 

GTK4 screenshot of writer using true GtkScrollbars rather than themed Vcl ScrollBars. Long press enters gtk's usual fine control mode for scrolling.

How to find the current ChromeOS Flex image

Posted by Ville-Pekka Vainio on August 03, 2022 04:02 PM

Edit: The quick answer to the question by a reader of my blog, Julien:

The info to download Chrome OS Flex from Linux is a bit hidden, but official info and link is available here: https://support.google.com/chromeosflex/answer/11543105?hl=en#zippy=%2Chow-do-i-create-a-chromeos-flex-usb-installer-on-linux

My dad has an Acer Chromebook 14 CB3-431, codenamed Edgar. Google just stopped supporting it with ChromeOS, but it’s still working well. Luckily, Google also just released the first stable version of ChromeOS Flex.

I decided to install the full UEFI image to the Chromebook from https://mrchromebox.tech/ so that starting Flex would be as easy as possible. That went well after finding and removing the write protect screw.

But it wasn’t too easy to find the URL to download the current ChromeOS Flex installation image. Google’s Chromebook recovery extension for Chrome does not work on Linux. By reading through some reddit threads, I found out that you can get the download URLs from this json file: https://dl.google.com/dl/edgedl/chromeos/recovery/cloudready_recovery2.json So as of this writing, the current image is https://dl.google.com/dl/edgedl/chromeos/recovery/chromeos_14816.99.0_reven_recovery_stable-channel_mp-v2.bin.zip

Use dd to write the image straight to a USB stick (not to a partition) and you should be good to go. Flex installs pretty much like a regular Linux distribution and seems to work well.
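For reference, a hedged example of the dd invocation (my own; replace /dev/sdX with your USB stick's device node, and note this wipes it):

$ unzip chromeos_14816.99.0_reven_recovery_stable-channel_mp-v2.bin.zip
$ sudo dd if=chromeos_14816.99.0_reven_recovery_stable-channel_mp-v2.bin of=/dev/sdX bs=4M status=progress oflag=sync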

Fedora 37 Beta Wallpaper Update

Posted by Madeline Peck on August 03, 2022 09:56 AM

The night and day versions for the Beta release need some feedback before we choose our final version to be packaged. Please feel free to leave constructive feedback.


Day Option 1 - the eco city in daylight


Day Option 2 - light streaming through the sky


Night Option 1 - stars in sky


Night Option 2 - stars and satellites moving through sky

Jess Chitas was able to test the first day option on her desktop as seen below:


Part 2: How to automate graphics production with Inkscape

Posted by Máirín Duffy on August 02, 2022 11:31 PM

A couple weeks ago I recorded a 15-minute tutorial with supporting materials on how to automate graphics production in Inkscape by building a base template and automatically replacing various text strings in the file from a CSV using the Next Generator Inkscape extension from Maren Hachmann.

Based on popular demand from that tutorial, I have created a more advanced tutorial that expands upon the last one, demonstrating how to automate image replacement and color changes via the same method. (Which, oddly, also turned out to be roughly 15 minutes long!)

You can watch it below embedded from the Fedora Design Team Linux Rocks PeerTube channel, or on YouTube. (PeerTube is open source so I prefer it!)

https://peertube.linuxrocks.online/videos/embed/5d60fd32-5ccd-41cf-9e6e-2fe6784df132

As in the last tutorial, I will provide a very high-level summary of the content in the video in case you’d rather skim text and not watch a video.

Conference Talk Card Graphics

The background on this tutorial is continued from the original tutorial: for each Flock / Nest conference, we need a graphic for each talk for the online platform we use to host the virtual conference. There’s usually on the order of 50+ talks for large events like this, and that’s a lot of graphics to produce manually.

With this tutorial, you will learn how to make a template like this in Inkscape:

Graphic template showing a speaker photo in the lower left corner and a bright red background on the track name.

And a CSV file like this:

ConferenceName,TalkName,PresenterNames,TrackNames,BackgroundColor1,BackgroundColor2,AccentColor,Photo
BestCon,The Pandas Are Marching,Beefy D. Miracle,Exercise,51a2da,294172,e59728,beefy.png
Fedora Nest,Why Fedora is the Best Linux,Colúr and Badger,The Best Things,afda51,0d76c4,79db32,colur.png
BambooFest 2022,Bamboo Tastes Better with Fedora,Panda,Panda Life,9551da,130dc4,a07cbc,panda.png
AwesomeCon,The Best Talk You Ever Heard,Dr. Ver E. Awesome,Hyperbole,da51aa,e1767c,db3279,badger.png

And combine them to generate one graphic per row in the CSV, like so, where the background color of the slide, the background color of the track name / speaker headshot background, and the speaker headshot image changes accordingly:

Graphic showing one of the example rows "Why Fedora is the Best Linux" with a green and blue background, a green accent color, and a hot dog picture as the speaker photo to demonstrate the technique.

As we discussed in the previous post, there are so many things you can use this technique for, even creating consistent cover images for your video channel 🙂 I need to point out again that you could use it to create awesome banners and graphics for Fedora as a member of the Fedora Design Team!! (We’d love to have you 🙂 )

The Inkscape Next Generator Extension

As in the last tutorial, the first step to creating these is to install the Next Generator extension for Inkscape created by Maren Hachmann, if you haven’t already:

  1. Download the .inx and .py files from the top level of the repo: [next_gen.inx] [next_gen.py].
  2. Then go into the Edit > Preferences > System dialog in Inkscape, search for the “User Extensions” directory listing, and click the “Open” icon next to it. Drag the .inx and .py files into that folder. (See the command sketch after this list.)
  3. Close all open Inkscape windows, and restart Inkscape. The new extension will be under the “Extensions” menu: Extensions > Export > Next Generator.
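On Linux, steps 1 and 2 often boil down to something like the following. The destination below is the usual default user extensions directory for Inkscape 1.x on Linux, but double-check the “User Extensions” path shown in your own preferences dialog:

cp ~/Downloads/next_gen.inx ~/Downloads/next_gen.py ~/.config/inkscape/extensions/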

Creating the Template

Each header of your CSV file (in my example: ConferenceName, TalkName, PresenterNames) is a variable you can place in an Inkscape file that will serve as your template. Take a look at the example SVG template file for direction. To have the TalkName appear in your template, create a text object in Inkscape and put the following content into it:

%VAR_TalkName%

When you run the extension, the %VAR_TalkName% text will be replaced with the TalkName listed for each row of the CSV. So %VAR_TalkName% becomes The Pandas Are Marching in the first graphic, Why Fedora is the Best Linux in the second, and so on down the TalkName column, one graphic per row.
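Under the hood, that text object is stored in the template SVG roughly like this (a trimmed, illustrative snippet; the id is invented and most attributes are omitted):

<text id="talk-name"><tspan>%VAR_TalkName%</tspan></text>

Only the %VAR_TalkName% string is substituted when the extension runs; the surrounding markup and styling are left intact.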

Extending the Template for Color Changes

For the color changes, there’s not much you have to do except decide which colors you want to change, come up with field names for them in your CSV, and pick out colors for each row of your CSV. In our example CSV, we have two background gradient colors that change (BackgroundColor1 and BackgroundColor2) and an accent color (AccentColor) that is used to color the conference track name background lozenge as well as the outline on the speaker headshot:

BackgroundColor1,BackgroundColor2,AccentColor
51a2da,294172,e59728
afda51,0d76c4,79db32
9551da,130dc4,a07cbc
da51aa,e1767c,db3279

Tip: changing only certain items of the same color

There is one trick you need if the same color should change in some parts of the image but stay the same in other parts.

The way color changes work in Next Generator is a simple find & replace mechanism. So when you tell Next Generator in Inkscape to replace the color code #ff0000 (which is in the sample template and what I like to call “obnoxious red”) with some other color (let’s say #aaaa00), it will change every single object in the file that has #ff0000 as a color to the new value, #aaaa00.

If you wanted just the conference track name background’s red to change color, but you wanted the border around the speaker’s headshot to stay red in all of the graphics, there’s a little trick you can use to achieve this. Simply use the HSV tool in the Fill & Stroke dialog in Inkscape to tune the red item that you didn’t want to change down just one notch, say to #fa0000, so it has a different hex value for its color code. Then anything with #ff0000 changes color according to the values in your CSV, while anything with #fa0000 stays red, unaffected by the color replacement mechanism.

Now a couple of things to note about color codes (and we review this in the troubleshooting section below):

  • Do not use # in the CSV or the JSON (more on the JSON below) for these color values.
  • Only use the first six “digits” of the hex color code. Inkscape by default includes 8; the last two are the alpha channel / opacity value for the color. (But wait, how do you use different opacity color values here then? You might be able to use an inline stylesheet that changes the fill-opacity value for the items you want transparency on, but I have not tested this yet.)

Extending the Template for Image Changes

First, you’ll want to add “filler” images to your template. (Do this by linking them; do not embed them when you import them into Inkscape! I don’t make this point in the video and I should have!) We used just one in our template – photo.png.

Then, similarly to how we prepped the CSV for the color changes, for the image changes you’ll need to come up with field names for any images you’d like to be swappable in your CSV, and list the image filenames you want to use as replacements for each row of your CSV. In our example CSV, we have just one image with a field name of “Photo”:

Photo
beefy.png
colur.png
panda.png
badger.png

Note that the images as listed in the CSV are just filenames. I recommend placing these files in the same directory as your template SVG file – that way you won’t have to worry about specifying full file paths, which makes your template more portable (tar or zip it up and share!)

Building the JSON for the NextGenerator dialog

The final (and trickiest!) bit of getting this all to work is writing some JSON-formatted key-value pairs that tell NextGenerator which colors / images in the template file map to which field names / column headers in your CSV file, so it knows what goes where.

Here is the example JSON we used:
{"BackgroundColor1":"51a2da","BackgroundColor2":"294172","AccentColor":"ff0000","Photo":"photo.png"}

Where did I come up with those color codes for the JSON? They are all picked from the template.svg file. 51a2da is the lighter blue color in the circular gradient in the background; 294172 is the darker blue towards the bottom of the gradient. ff0000 (aka obnoxious red) is the color border around the speaker headshot and the background lozenge color behind the track name.

Where did the photo.png filename come from? That’s the name of the filler image I used for the headshot placement (if you’re in Inkscape and not sure what the filename of the image you’re using is, right click, select “Image Properties” and it’s the value in the URL field that pops up in the sidebar.)

Running the Generator

Once your template is ready, you simply run the Next Generator extension: load your CSV into it, select which variables (header names) you want to use in each file name, and copy-paste your JSON snippet into the “Non-text values to replace” field in the dialog:

Screenshot showing the JSON text in the NextGenerator dialog

Then hit apply and enjoy!

Troubleshooting Tips

Tips to troubleshoot color and image replacement issues

Some hard-won knowledge on how to troubleshoot color and/or image replacement not working:

  • Image names are just the filename; keep the images in the same directory as your template and you do not need to use the full file path. (This will make your templates more portable since you can then tar or zip up the directory and share it.)
  • Image names, color values, and variable names in the spreadsheet do not need any quotation marks (“ or ‘) unless you need to escape a comma (,) character in a text field. But image names, color values, and variable names always need quotes in the JSON.
  • Color values are not preceded by the # character. It won’t work if you add it.
  • By default Inkscape gives you an 8-“digit” hex value for color codes, where the last two digits are the alpha/opacity value of the color (e.g. ff0000ff is fully opaque bright red). You will need to drop the last two digits so that you are using the base 6-“digit” hex code (the RGB part) for your color values. Otherwise, the color replacement won’t work.
  • Check that every variable name in the JSON is spelled and written exactly the same as the corresponding CSV header entry, just wrapped in quotes in the JSON (e.g. BackgroundColor1 in the CSV is “BackgroundColor1” in the JSON).
  • Use the filename for the default image you are replacing in the template. You do not use the ObjectID or any other Inkscape-specific identifier for the image. Also, link the image instead of embedding it.

Tutorial Resources

All of the example files used in this tutorial are available here:
https://gitlab.com/fedora/design/team/tutorials/inkscape-automation

Link to the Next Generator extension:
https://gitlab.com/Moini/nextgenerator

Direct Links to download *.inx and *.py for the extension:

Have fun 🙂

Nest with Fedora 2022: Thanks to our Sponsors!

Posted by Fedora Community Blog on August 02, 2022 08:00 AM

Fedora’s annual contributor conference Nest with Fedora 2022 is occurring August 4th–6th. Even with the virtual format, we are so excited to see everyone together, so don’t forget to register! Nest with Fedora is made possible by funding from our sponsors. Their assistance brings us everything from the conference platform to promotion to swag.

A big “Thank You!” goes to our astounding sponsors for their support in bringing Fedora Friends together in 2022. Thank you Red Hat, Lenovo, AlmaLinux, openSUSE, GitLab, Datto, and Das Keyboard.

We also want to thank TuxDigital, GNOME, KDE, and Opensource.com for being our amazing media partners for this event and helping us reach a bigger audience.

The post Nest with Fedora 2022: Thanks to our Sponsors! appeared first on Fedora Community Blog.

August 2022

Posted by Weekly status of Packit Team on August 01, 2022 12:00 AM
Week 30 (July 26th–August 1st)

  • Packit has switched to the python-specfile library for handling spec files. This may cause some issues to pop up. (packit#1588)
  • Packit CLI can now build RPMs in mock. For more information see https://packit.dev/docs/cli/build/mock (packit#1662)
  • When using Packit before being allowed, Packit now links an approval issue where the self-approval can be performed. (packit-service#1596)
  • A downstream koji-build can now be re-triggered by adding a comment containing /packit koji-build to a dist-git pull request whose target branch corresponds to the branch the build should be run for.

Episode 334 – Leap seconds break everything

Posted by Josh Bressers on August 01, 2022 12:00 AM

Josh and Kurt talk about leap seconds. Every time there’s a leap second, things break. Facebook wants to get rid of them because they break computers, but Google found a clever way to keep leap seconds without breaking anything. Corner cases are hard; security is often just one huge corner case. There are lessons we can learn here.

Audio: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_334_Leap_seconds_break_everything.mp3

Show Notes

MACCHIATObin Single Shot first impressions

Posted by Fabio Alessandro Locati on July 31, 2022 12:00 AM
I’ve played with a MACCHIATObin Single Shot board for the last month. I decided to pick this up instead of a different board because of its sheer connectivity. This board has 1x1GbE, 1x2.5GbE, and 2x10GbE, which is very rare for those kinds of boards. I was most interested in the two 10GbE due to some projects I have in mind. I was interested in installing Fedora, which proved very easy. The first time I created a bootable micro-SD card with Fedora, it worked perfectly out of the box.

Friday’s Fedora Facts: 2022-30

Posted by Fedora Community Blog on July 29, 2022 10:36 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

I have weekly office hours on Wednesdays in the morning and afternoon (US/Eastern time) in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. See the upcoming meetings for more information.

Announcements

CfPs

Conference | Location | Date | CfP
dotnetdaysro | Iaşi, RO | 20–22 Oct | closes 31 Jul
SeaGL | virtual | 4–5 Nov | closes 3 Aug
Hardwear.io Netherlands | The Hague, NL | 24–28 Oct | closes 15 Aug
PyGeekle | virtual | 6–7 Sep | closes 24 Aug
EuroRust | Berlin, DE and virtual | 13–14 Oct | closes 28 Aug
Denver Dev Day | Denver, CO, US | 20–21 Oct | closes 2 Sep
Vue.js Live | London, UK and virtual | 28, 31 Oct | closes 5 Sep
Python Web Conf | virtual | 14–17 Mar | closes 1 Oct

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID | Component | Status
2079833 | cmake | NEW

Meetings & events

Fedora Hatches

Hatches are local, in-person events to augment Nest With Fedora. Here are the upcoming Hatch events.

Date | Location
11 Aug | Brno, CZ

As a reminder, Nest With Fedora registration is open. We’ll see you online 4–6 August.

Releases

Release | Open bugs
F35 | 4132
F36 | 3397
Rawhide | 7288

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-08-09 — F37 branches from Rawhide, Change complete (testable) deadline

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Preset All Systemd Units on First Boot | Self-Contained | FESCo #2835
Public release of the Anaconda Web UI preview image | Self-Contained | FESCo #2839
BIND 9.18 | Self-Contained | FESCo #2840
SELinux Parallel Autorelabel | Self-Contained | FESCo #2841
ibus-libpinyin 1.13 | Self-Contained | FESCo #2843
z13 as the Baseline for IBM Z Hardware | Self-Contained | FESCo #2842
Haskell GHC 8.10.7 & Stackage LTS 18.28 | Self-Contained | FESCo #2844
Emacs 28 | Self-Contained | FESCo #2845
Mumble 1.4 | Self-Contained | FESCo #2846

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-30 appeared first on Fedora Community Blog.

Important changes to software license information in Fedora packages (SPDX and more!)

Posted by Fedora Community Blog on July 29, 2022 03:20 PM

On behalf of all of the folks working on Fedora licensing improvements, I have a few things to announce!

New docs site for licensing and other legal topics

All documentation related to Fedora licensing has moved to a new section in Fedora Docs, which you can find at https://docs.fedoraproject.org/en-US/legal/. Other legal documentation will follow. This follows the overall Fedora goal of moving active user and contributor documentation away from the wiki.

Fedora license information in a structured format

The “good” (allowed) and “bad” (not-allowed) licenses for Fedora are now stored in a repository, using a simple structured file format for each license (it’s TOML). You can find this at https://gitlab.com/fedora/legal/fedora-license-data. This data is then presented in easy tabular format in the documentation, at https://docs.fedoraproject.org/en-US/legal/allowed-licenses/.
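For illustration, an entry in that repository might look something like this. This is a hypothetical sketch with illustrative field names, so check fedora-license-data for the real schema:

# hypothetical MIT.toml (field names are illustrative)
[license]
expression = "MIT"
status = ["allowed"]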

Historically, this information was listed in tables on the Fedora Wiki. This was hard to maintain and was not conducive to using the data in other ways. This format will enable automation for license validation and other similar process improvements.

New policy for the License field in packages — SPDX identifiers!

We’re changing the policy for the “License” field in package spec files to use SPDX license identifiers. Historically, Fedora has represented licenses using short abbreviations specific to Fedora. In the meantime, SPDX license identifiers have emerged as a standard, and other projects, vendors, and developers have started using them. Adopting SPDX license identifiers provides greater accuracy as to what license applies, and will make it easier for us to collaborate with other projects.

Updated licensing policies and processes

Fedora licensing policies and processes have been updated to reflect the above changes. In some cases, this forced deeper thought as to how these things are decided and why, which led to various discussions on Fedora mailing lists. In other cases, it prompted better articulation of guidance that was implicitly understood but not necessarily explicitly stated.

New guidance on “effective license” analysis

Many software packages consist of code under different free and open source licenses. Previous practice often involved “simplification” of the package license field when the packager believed that one license subsumed the other (for example, using just “GPL” when the source code includes parts licensed under a BSD-style license as well). Going forward, packagers and reviewers should not make this kind of analysis, and should instead use (for example) “GPL-2.0-or-later AND MIT”. This approach is easier for packagers to apply in a consistent way.
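For instance, a spec file for a package that bundles GPL- and MIT-licensed code would now carry the full expression in its License field (a hypothetical excerpt):

License: GPL-2.0-or-later AND MIT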

When do these changes take effect?

The resulting changes in practice will be applied to new packages and licenses going forward. It is not necessary to revise existing packages at this time, although we have provided some guidance for package maintainers who want to get started. We’re in the process of planning a path for updating existing packages at a larger scale — stay tuned for more on that!

Thank you everyone!

A huge thanks to some key people who have worked tirelessly to make this happen: David Cantrell, Richard Fontana, Jilayne Lovejoy, and Miroslav Suchý. Behind-the-scenes support was also provided by David Levine, Bryan Sutula, and Beatriz Couto. Thank you as well to the Fedora community members who provided valuable feedback in various Fedora forums.

Please have a look at the updated information. If you have questions, please post them to the Fedora Legal mailing list: https://lists.fedoraproject.org/archives/list/legal@lists.fedoraproject.org/ 

The post Important changes to software license information in Fedora packages (SPDX and more!) appeared first on Fedora Community Blog.

Berlin mini-GUADEC

Posted by Allan Day on July 29, 2022 10:10 AM

Photo courtesy of Jakub Steiner

As I write this post, I’m speeding through the German countryside, on a high speed train heading for Amsterdam, as I make my way home from the GUADEC satellite event that just took place in Berlin.

The event itself was notable for me, given that it was the first face-to-face GNOME event I’ve participated in since the Covid pandemic set in. Given how long it’s been since I physically met with other contributors, I felt it was important to attend a GNOME event this summer, but I wasn’t prepared to travel to Mexico for various reasons (the environment, being away from family), so the Berlin event that sprang up was a great opportunity.

I’d like to thank the local Berlin organisers for making the event happen, C-Base for hosting us, and the GNOME Foundation for providing sponsorship so I could attend.

Pairing a main conference with regional satellite events seems like an effective approach, one that can widen access while managing our carbon footprint, and I think it could be a good model for other GUADECs in the future. It would be good to document the lessons from this year’s GUADEC before we forget.

In order to reduce my own environmental impact, I traveled to and from the event over land and sea, using the Hull↔Rotterdam overnight ferry, followed by a train between Amsterdam and Berlin. This was a bit of an adventure, particularly due to the scary heatwave that was happening during my outward journey (see travel tips below).

The event itself had good attendance and had a relaxed hackfest feel to it. With two other members of the design team present, plus Florian Muellner and António Fernandes, it was a great opportunity to do some intensive work on new features that are coming in the GNOME 43 release.

I go home re-energised by the time spent with fellow collaborators – something that I’ve missed over the past couple of years – and satisfied by the rapid progress we’ve been able to make by working together in person.

Notes on Travel

I learnt some things with the travel on this trip, so I’m recording them here for future reference. Some of this might be useful for those wanting to avoid air transport themselves.

Travel in the time of Covid

One obvious tip: check the local covid requirements before you travel, including vaccination and mask requirements. (Something I failed to fully do this trip.)

There was one point on this trip when I felt unwell and wasn’t entirely prepared to deal with it. Make sure you can handle this scenario:

  • Have travel insurance that covers Covid.
  • Note down any support numbers you might need.
  • Check local requirements for what to do if you contract Covid.
  • Take Covid tests with you. If you start to feel unwell, you need to be able to check if you’re positive or not.

Long-distance overland travel

This wasn’t the first time I’ve done long-distance overland travel in Europe, but the journey did present some challenges that I hadn’t encountered before. Some things I learned as a result:

  • Research each leg of your journey yourself, in order to see what options are available and to pick comfortable interchange times. (Background: I used raileurope.com to research my train tickets. This site promises to work out your full journey for you, but it turns out that it doesn’t do a great job. In particular, it assumes that you want the shortest interchange time possible between connecting services, but then it warns about the interchanges being too short. The result is that it appears that some journeys aren’t viable, when they are if you pick a different combination of services.)
  • Wherever possible, make sure that your travel schedule has ample contingency time. I had a couple of delays on my journey which could have easily caused me to miss a connection.
  • I typically book the cheapest ticket I can, which usually means buying non-flexible tickets. For this trip, this felt like a mistake, due to the aforementioned delays. Having flexible tickets would have made this much less stressful and would have avoided costly replacement tickets if I’d missed a connection.
  • Make sure you carry lots of water with you, particularly if it’s warm. I carried 2 litres, which was about right for me.
  • The boat

    The Hull↔Rotterdam ferry service is a potentially interesting option for those traveling between northern England and mainland Europe. It’s an overnight service, and you get a cabin included with the price of your ticket. This can potentially save you the cost of a night’s accommodation.

    A coach service provides a connection between the ferry terminal and Amsterdam and Rotterdam train stations, and there’s an equivalent taxi service on the UK side.

    I quite like the ferry, but it is also somewhat infuriating:

    • The timing of the coach to Amsterdam is a bit variable, and it’s hard to get an exact arrival time. If you have a particular train you need to catch in Amsterdam or Rotterdam, make sure to tell the coach driver what time it departs. When I did this, the driver dropped me off close to the station to help me catch my train.
    • It can be hard to find the coach stop in Amsterdam. If you’ve been issued with a ticket for the coach, check the address that’s printed on it. Give yourself plenty of time.
    • The food on board the ferry is expensive and bad. My recommendation would be to not book an evening meal or breakfast, and take your own food with you.

CPE Weekly Update – Week 30 2022

Posted by Fedora Community Blog on July 29, 2022 10:00 AM
featured image with CPE team's name

This is a weekly report from the CPE (Community Platform Engineering) Team. If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on libera.chat (https://libera.chat/).

Week: 25th July – 29th July 2022

Highlights of the week

Infrastructure & Release Engineering

Goal of this Initiative

Purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
The ARC (which is a subset of the team) investigates possible initiatives that CPE might take on.
Link to planning board: https://zlopez.fedorapeople.org/I&R-2022-07-27.pdf
Link to docs: https://docs.fedoraproject.org/en-US/infra/

Update

Fedora Infra

  • New Fedora Media Writer release 5.0.2 (mostly translation fixes)
  • RabbitMQ cluster instability last week, due to both the ocp3 and ocp4 clusters running monitor-gating and obsoleting each other over and over. ;(
  • Worked around a redhat.com SPF issue by accepting email from redhat.com, expanding the fedoraproject.org alias, and sending it right back out through the redhat.com MX to deliver.

CentOS Infra including CentOS CI

Release Engineering

  • F37 Mass rebuild finished

CentOS Stream

Goal of this Initiative

This initiative is working on CentOS Stream/Emerging RHEL to make this new distribution a reality. The goal of this initiative is to prepare the ecosystem for the new CentOS Stream.

Updates

  • Imported c8s modules into the new mbs.
  • CentOS Linux 7.9 ISOs were respun

EPEL

Goal of this initiative

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS and Scientific Linux (SL), Oracle Linux (OL).

EPEL packages are usually based on their Fedora counterparts and will never conflict with or replace packages in the base Enterprise Linux distributions. EPEL uses much of the same infrastructure as Fedora, including buildsystem, bugzilla instance, updates manager, mirror manager and more.

Updates

  • EPEL9 is up to 6986 (+378) packages from 3137 (+54) source packages
  • Backported fix for CVE-2021-21419 to EPEL8’s python-eventlet
  • Retired several rawhide branches of EPEL-only packages
  • Updated KDE (5.24.6) is available in epel9-next and epel8-next testing.
    • Will not go out to regular epel until RHEL 8.7 and 9.1
  • Working on an EPEL survey that will go out in August

FMN replacement

Goal of this initiative

FMN (Fedora-Messaging-Notification) is a web application allowing users to create filters on messages sent to (currently) fedmsg and forward these as notifications on to email or IRC.
The goal of the initiative is mainly to add fedora-messaging schemas, create a new UI for a better user experience and create a new service to triage incoming messages to reduce the current message delivery lag problem. Community will profit from speedier notifications based on own preferences (IRC, Matrix, Email), unified fedora project to one message service and human-readable results in Datagrepper.
Also, CPE tech debt will be significantly reduced by dropping the maintenance of fedmsg altogether.

Updates

  • Basic repo structure in place (Python + JS side), more boilerplate work ongoing
  • Fedora-messaging schemas checkup commenced

Kindest regards,
CPE Team

The post CPE Weekly Update – Week 30 2022 appeared first on Fedora Community Blog.

Emulated host profiles in fwupd

Posted by Richard Hughes on July 29, 2022 09:25 AM

As some of you may know, there might be firmware security support in the next versions of Plymouth, GNOME Control Center and GDM. This is a great thing, as most people are running terribly insecure hardware and have no idea. The great majority of these systems can be improved with a few settings changes, and the first step in my plan is showing people what’s wrong, giving some quick information, and perhaps how to change it. The next step will be a “fix the problem” button, but that’s still being worked on and will need some pretty involved testing for each OEM. For the bigger picture there’s the HSI documentation, which is a heavy and technical read, but the introduction might be interesting. For the other 99.99% of the population, here are some pretty screenshots:

To facilitate development of various UIs, fwupd now supports emulating different systems. This would allow someone to show dozens of supported devices in GNOME Firmware or to showcase the firmware security panel in the GNOME release video. Hint hint. :)

To do this, ensure you have fwupd 1.8.3 installed (or enable the COPR), and then you can do:

sudo FWUPD_HOST_EMULATE=thinkpad-p1-iommu.json.gz /usr/libexec/fwupd/fwupd

Emulation data files can be created with ./contrib/generate-emulation.py file.json in the fwupd source tree and then can be manually modified if required. Hint: it’s mostly the same output as fwupdmgr get-devices --json and fwupdmgr security --json and you can run generate-emulation.py on any existing JSON output to minimize it.

To load a custom profile, you can do something like:

sudo FWUPD_HOST_EMULATE=/tmp/my-system.json /usr/libexec/fwupd/fwupd

As a precaution, the org.fwupd.hsi.HostEmulation attribute is added so we do not ask the user to upload the HSI report. The emulated devices are also not updatable for obvious reasons. Comments welcome!

UEFI rootkits and UEFI secure boot

Posted by Matthew Garrett on July 28, 2022 10:19 PM
Kaspersky describes a UEFI-implant used to attack Windows systems. Based on it appearing to require patching of the system firmware image, they hypothesise that it's propagated by manually dumping the contents of the system flash, modifying it, and then reflashing it back to the board. This probably requires physical access to the board, so it's not especially terrifying - if you're in a situation where someone's sufficiently enthusiastic about targeting you that they're reflashing your computer by hand, it's likely that you're going to have a bad time regardless.

But let's think about why this is in the firmware at all. Sophos previously discussed an implant that's sufficiently similar in some technical details that Kaspersky suggest they may be related to some degree. One notable difference is that the MyKings implant described by Sophos installs itself into the boot block of legacy MBR partitioned disks. This code will only be executed on old-style BIOS systems (or UEFI systems booting in BIOS compatibility mode), and they have no support for code signatures, so there's no need to be especially clever. Run malicious code in the boot block, patch the next stage loader, follow that chain all the way up to the kernel. Simple.

One notable distinction here is that the MBR boot block approach won't be persistent - if you reinstall the OS, the MBR will be rewritten[1] and the infection is gone. UEFI doesn't really change much here - if you reinstall Windows a new copy of the bootloader will be written out and the UEFI boot variables (that tell the firmware which bootloader to execute) will be updated to point at that. The implant may still be on disk somewhere, but it won't be run.

But there's a way to avoid this. UEFI supports loading firmware-level drivers from disk. If, rather than providing a backdoored bootloader, the implant takes the form of a UEFI driver, the attacker can set a different set of variables that tell the firmware to load that driver at boot time, before running the bootloader. OS reinstalls won't modify these variables, which means the implant will survive and can reinfect the new OS install. The only way to get rid of the implant is to either reformat the drive entirely (which most OS installers won't do by default) or replace the drive before installation.

This is much easier than patching the system firmware, and achieves similar outcomes - the number of infected users who are going to wipe their drives to reinstall is fairly low, and the kernel could be patched to hide the presence of the implant on the filesystem[2]. It's possible that the goal was to make identification as hard as possible, but there's a simpler argument here - if the firmware has UEFI Secure Boot enabled, the firmware will refuse to load such a driver, and the implant won't work. You could certainly just patch the firmware to disable secure boot and lie about it, but if you're at the point of patching the firmware anyway you may as well just do the extra work of installing your implant there.

I think there's a reasonable argument that the existence of firmware-level rootkits suggests that UEFI Secure Boot is doing its job and is pushing attackers into lower levels of the stack in order to obtain the same outcomes. Technologies like Intel's Boot Guard may (in their current form) tend to block user choice, but in theory should be effective in blocking attacks of this form and making things even harder for attackers. It should already be impossible to perform attacks like the one Kaspersky describes on more modern hardware (the system should identify that the firmware has been tampered with and fail to boot), which pushes things even further - attackers will have to take advantage of vulnerabilities in the specific firmware they're targeting. This obviously means there's an incentive to find more firmware vulnerabilities, which means the ability to apply security updates for system firmware as easily as security updates for OS components is vital (hint hint if your system firmware updates aren't available via LVFS you're probably doing it wrong).

We've known that UEFI rootkits have existed for a while (Hacking Team had one in 2015), but it's interesting to see a fairly widespread one out in the wild. Protecting against this kind of attack involves securing the entire boot chain, including the firmware itself. The industry has clearly been making progress in this respect, and it'll be interesting to see whether such attacks become more common (because Secure Boot works but firmware security is bad) or not.

[1] As we all remember from Windows installs overwriting Linux bootloaders
[2] Although this does run the risk of an infected user booting another OS instead, and being able to see the implant


Next Open NeuroFedora meeting: 1 August 1300 UTC

Posted by The NeuroFedora Blog on July 28, 2022 11:10 AM
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 1 August at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us over:

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date --date='TZ="UTC" 1300 2022-08-01'

The meeting will be chaired by @ankursinha. The agenda for the meeting is:

We hope to see you there!

GTK4: Toolbar popups via GtkPopovers

Posted by Caolán McNamara on July 28, 2022 10:03 AM


Bootstrapped using GtkPopovers to implement popups from LibreOffice's main menubars for GTK4.

libadwaita: Fixing Usability Problems on the Linux Desktop

Posted by TheEvilSkeleton on July 28, 2022 12:00 AM

Disclaimers

  1. The premise of this article is to make it easy for people who know little about GTK to understand the situation and motivation behind libadwaita.
  2. To provide real world examples, I have to name certain entities and provide links to further explain the topic.
  3. I am not speaking on the behalf of GNOME. I am just an outsider who observes the GNOME Project and tries to understand their reasoning and motivation behind controversial changes.

Introduction

libadwaita is a huge controversy in the Linux desktop community, because of GNOME’s stance towards themes.

I have heard a lot of misinformation surrounding GTK4 and libadwaita, mainly based on misunderstanding. I’d like to take some time to explain what GTK4 and libadwaita are, why GNOME decided to go this route and why it’s a huge step in the right direction.

What are GTK4 and libadwaita?

To start off, what exactly is GTK? GTK is a toolkit by the GNOME Project, meaning it provides building blocks for graphical user interfaces (GUI). GTK4 is the 4th iteration of GTK that brings better performance and more features over GTK3.

libadwaita, in layman’s terms, is a plugin for GTK4 that adds more building blocks tailored for the GNOME desktop environment. These blocks provide more complex widgets that ease maintenance and facilitate development.

Understanding the Problems With Custom Themes and Complexity of Applications

To be clear about the premise of this section: GNOME has nothing against custom themes themselves. This is merely a side effect of custom themes that I want to point out so I can expand on the actual problems.

One of the biggest problems regarding custom themes and GNOME applications is that custom themes are unable to keep up with applications. These applications are complex, in the sense that many features and UIs add layers of CSS on top of the base style to improve certain aspects of the interface, such as accessibility or cosmetics; there are plenty of reasons to add an extra layer for the UI when the base is not enough.

With custom layers in mind, many GNOME developers test with a small range of themes in their limited time, mainly Adwaita and Adwaita-dark; after all, these projects are developed in free time. This means that many custom themes, untested in these cases, may exhibit visual issues: they may look unpleasant because some content is out of place, have unusual color patterns, cause accessibility bugs, or, even worse, render the application completely nonfunctional.

Furthermore, many GNOME developers want to change the UI a bit here and there to improve its visuals. These minor changes may break custom themes after an update, as themes are generally very sensitive to any kind of change; they must adapt to fix new issues.

Distributions Shipping Custom Themes by Default

Many distributions, such as Ubuntu and Pop!_OS, often either ship themes that have several visual bugs by default or make them very accessible, in the sense that the user can very easily change the theme.

Here are some examples:

Pop!_OS (Source):
  • Low contrast with font thumbnails in Nautilus with Dark Style
  • Characters out of place and low contrast in GNOME Characters with Dark Style
  • White background in Contrast

Ubuntu:
  • Low contrast with font thumbnails in GNOME Fonts with Dark Style

With those examples, we observe that those themes break several aspects of applications, some even rendering them nonfunctional. When distributions ship custom themes, users get the impression that the application is the issue, not the theme itself. This gives a bad impression of the application and worsens its reputation, despite the application itself working as intended. Further, many users then contact GNOME developers to fix a problem that doesn’t exist within the application. GNOME developers have to refer users to theme developers, or create new styles that patch the theme, which they then have to maintain. Either way, it becomes increasingly irritating for the maintainer.

Humans, including developers, have limits with repetition. Supporting users with nonexistent issues within the application is typically okay, because it happens. However, when this repetition continues 5 more times, 10 times, or more, it causes stress and burdens the developer. At the end of the day, many GNOME developers are here to develop applications and fix bugs within their scope of support, not to constantly triage invalid issues and emails and refer users to other developers. And again, this is done in free time. Many developers have jobs, would like to spend time with family and friends, or just want to use their free time to focus on other parts of the project.

Just to clarify: this is about distributions shipping with custom themes by default, not tinkerers knowingly playing with third-party themes.

The Request

With these problems in mind, GNOME contributors wrote an open letter in 2019 politely asking distributions to stop shipping custom themes by default and to let users manually apply themes if they choose to do so. However, within a couple of years after the letter, nothing had changed: distributions continued to ship custom themes by default, which kept breaking many applications; GNOME developers continued to triage invalid issues and get overburdened; and their development was hindered in one way or another.

The Solution

As a response, GNOME introduced libadwaita. As mentioned at the beginning, libadwaita is a “plugin” for GTK4. It allows developers to use complex widgets with little effort, in contrast to using GTK4 on its own. However, this convenience comes at a price: “end user freedom”, which I will get to later in the article.

GNOME plans to facilitate branding by implementing a recoloring API, to let vendors inject their branding and make it look appealing, as explained in this article by Adrien Plazas. Essentially, this means that GNOME developers are starting to put more work to implement proper APIs to let users and distributions customize applications, without the need of hacks.

With the addition of libadwaita, I personally see huge benefits with it and a foreseeable future. I’m currently one of the developers of Bottles. Thanks to libadwaita, it helped us decrease the project’s codebase, making it easier for us to maintain. It also made Bottles look a lot prettier, and gave us the opportunity to work on other aspects of the project.

Other Solutions Proposed as Criticism

Now that we know the decision taken by GNOME and why they did so, let’s take a look at some arguments people have introduced that GNOME should’ve taken to prevent the existence of libadwaita and “locking” down theming.

Warn the User

Many users argue that GNOME developers could have just warned the user in some way, for example with an issue template that asks the user to contact theme developers when opening an issue. Unfortunately, it’s not that simple. GNOME developers would have to write the template that refers users to theme developers. For that, they would have to assume the user’s knowledge, e.g. whether the user knows who to contact, how to contact them, and what to contact them about. And more importantly, they would be the ones maintaining the template.

Additionally, and most importantly, issue templates and similar approaches worsen the user experience. GNOME developers would be forced to give unwanted and unexpected warnings and instructions to users. Presumably, the point of an application and distribution is to solve real world problems, not to push users away by putting them off, whether socially or technically. Referring users to numerous places repulses them, and potentially even discourages them from opening an issue in the first place.

This also ignores the question of whether the user even reads the template; they may ignore it or misread it and end up opening an issue in the wrong bug tracker anyway. Even if GNOME developers are not at fault, these events affect them and/or their project, either directly or indirectly, and they have very little room to defend or protect themselves.

No Warranty; No Support

Many users argue that GNOME developers have the right to not provide any support, as the majority of licenses they use have a disclaimer that the project is distributed with no warranty. While this is true, it is not free of consequences. Not providing any support is detrimental to the project: if a developer refuses to support the user, there’s a chance that this worsens the project’s or even the developer’s reputation and image.

Of course, providing no support is sometimes unavoidable, but minimizing the cases where no support is given benefits the project, the developer’s mental health, and the relationship between developers and users. GNOME developers can’t simply provide no support, because there are consequences in doing so.

Change the License

Another argument I often hear from users is to simply change the license, specifically to a proprietary one. Well, this 100% goes against free and open source philosophy. One of the main reasons why GNOME developers follow the free software philosophy is for ethical reasons.

Many people say that GNOME doesn’t care about their users. In my opinion, saying that GNOME developers don’t care about users is a false accusation.

GNOME developers typically use copyleft licenses, specifically GNU licenses like GPL. GPL prevents entities from creating proprietary and closed source forks. For example, if I fork (grab the code) a GNOME project that is licensed as GPL, I legally cannot modify and redistribute the software without disclosing the source code, because GPL prohibits forkers from doing so, thereby literally protecting user freedom.

While “locking” down theming may worsen user freedom, it prevents distributions from breaking applications, as a result protecting user experience. Every decision taken has to have some compromises. GNOME developers are slightly compromising user freedom to protect user experience, as it prevents users from getting a subpar experience out of the box with GNOME applications. In my opinion, this is absolutely worth the compromise.

Frankly, GTK4 + libadwaita still allows overriding CSS, just like any previous GTK version, but doing so requires knowledge of CSS and is very sensitive for the reasons mentioned earlier. This means that GNOME literally does not restrict theming whatsoever, hence my quoting of “locking” and “end user freedom” throughout the article.
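As a minimal sketch of what that looks like in practice, a user-level override can go in GTK4’s per-user stylesheet; the selector and value here are purely illustrative:

/* ~/.config/gtk-4.0/gtk.css */
window {
  font-size: 11pt;
}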

Custom Themes in Qt Also Have Issues

I hear many users complaining that these problems only appear with GTK themes and applications, but never with Qt. However, this is untrue; Qt has its own fair share of problems caused by themes.

qt5ct and Plasma

Note: qt5ct allows you to customize Qt in desktop environments that do not style or integrate Qt, such as Cinnamon and GNOME. It was never meant to be used in KDE Plasma, but the instructions for using qt5ct on those desktop environments cause qt5ct to be used in KDE Plasma anyway.

qt5ct is a QPlatformTheme, allowing it to control visual aspects of Qt, as well as Qt’s behavior when performing actions, such as clicking files. Likewise, plasma-integration (KDEPlatformTheme) is KDE’s QPlatformTheme, allowing Plasma to apply its appearance and behavior to Qt.

However, Qt can only utilize one QPlatformTheme at a time, meaning using qt5ct in Plasma disables KDEPlatformTheme, causing visual issues and incorrect application behavior.

With qt5ct, we experience visual issues, an example can be seen in System Settings:

System Settings with KDEPlatformTheme vs. System Settings with qt5ct

As we can observe with qt5ct, System Settings is now using Qt’s default style, resulting in mismatched buttons, enormous radio buttons and checkboxes, and blacked-out icons scattered randomly throughout the interface.

Outside of System Settings, applications also no longer recolor their icons according to your color scheme, causing contrast issues in Plasma when using dark color schemes:

KWrite, with Breeze Dark color scheme, with qt5ct

Kvantum

Kvantum is another popular way to theme Qt. Unlike qt5ct, which is a QPlatformTheme, Kvantum is a QStyle (Application Style), meaning Qt applications can integrate exactly as the desktop environment intended. However, Kvantum has problems as well.

Kvantum uses themes which apply predefined color schemes to applications. This works fine for most applications, as they follow the color scheme that Qt provides to applications. However, there are applications that override the color scheme, such as some KDE applications.

This causes those applications to have serious readability issues, for example when a color scheme other than the one the theme was designed for is applied to some KDE applications:

System Settings, with Breeze Dark color scheme, on Kvantum with KvArc

As seen here, applications that override colors can look considerably worse with Kvantum, as Kvantum isn’t designed for applications that enforce a color scheme different from the one it intends.

Not all Qt Applications can be Themed

To add to this, some Qt applications, like Telegram, hard-code their own theme and do not offer any easy option to use system themes instead. Should we blame them? Of course not. These applications are really complex in the context of visuals. Allowing system themes means that they would certainly be bound to provide some level of support, which can potentially hinder development speed.

The Fedora Project’s Relationship

The Fedora Project managed to properly communicate with GNOME developers and come to an agreement that is appealing to many parties. Fedora Linux uses stock GNOME, with the only addition being the Fedora Linux logo at the bottom right of the background. This approach, in my opinion, is what distributions should be going for.

Conclusion

In conclusion, theming is very sensitive to change, even the slightest. While users knowingly theming their applications were never a problem for GNOME developers, the problem lay in distributions shipping custom themes despite explicitly being asked not to.

It is important to understand that while this article is about GNOME, GTK and libadwaita, this is more a universal issue than a GTK-specific one. GNOME is taking a step to solve this fundamental issue with the help of libadwaita, protecting not only developers but the user experience as well, while making the platform appealing to several parties.

Before libadwaita, I had no interest in developing GUI applications, as I didn’t want to deal with the burden. Thanks to libadwaita, I was convinced that GNOME is building a stable ecosystem that appeals to potential developers, contributors and users. Personally, my limited experience with libadwaita has been positive. I genuinely applaud all the developers and contributors for putting an outstanding amount of work into GTK and libadwaita to build an ecosystem with a bright future ahead.

This is also a very important lesson for everyone: having the freedom to do something doesn’t mean it should be done. While it is a neat idea to be able to do whatever you want, there is a risk that you can affect the people around you, or worse, yourself.


Just because you can, doesn’t mean you should.


Edit 1: Rewrote the Qt section (credit to Dominic Hayes (Response))

Common GLib Programming Errors

Posted by Michael Catanzaro on July 27, 2022 08:24 PM

Let’s examine four mistakes to avoid when writing programs that use GLib, or, alternatively, four mistakes to look for when reviewing code that uses GLib. Experienced GNOME developers will find the first three mistakes pretty simple and basic, but nevertheless they still cause too many crashes. The fourth mistake is more complicated.

These examples will use C, but the mistakes can happen in any language. In unsafe languages like C, C++, and Vala, these mistakes usually result in security issues, specifically use-after-free vulnerabilities.

Mistake #1: Failure to Disconnect Signal Handler

Every time you connect to a signal handler, you must think about when it should be disconnected to prevent the handler from running at an incorrect time. Let’s look at a contrived but very common example. Say you have an object A and wish to connect to a signal of object B. Your code might look like this:

static void
some_signal_cb (gpointer user_data)
{
  A *self = user_data;
  a_do_something (self);
}

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();
  g_signal_connect (b, "some-signal", (GCallback)some_signal_cb, self);
}

Very simple. Now, consider what happens if object B outlives object A, and object B emits some-signal after object A has been destroyed. Then the line a_do_something (self) is a use-after-free, a serious security vulnerability. Drat!

If you think about when the signal should be disconnected, you won’t make this mistake. In many cases, you are implementing an object and just want to disconnect the signal when your object is disposed. If so, you can use g_signal_connect_object() instead of the vanilla g_signal_connect(). For example, this code is not vulnerable:

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();
  g_signal_connect_object (b, "some-signal", (GCallback)some_signal_cb, self, 0);
}

g_signal_connect_object() will disconnect the signal handler whenever object A is destroyed, so there’s no longer any problem if object B outlives object A. This simple change is usually all it takes to avoid disaster. Use g_signal_connect_object() whenever the user data you wish to pass to the signal handler is a GObject. This will usually be true in object implementation code.

Sometimes you need to pass a data struct as your user data instead. If so, g_signal_connect_object() is not an option, and you will need to disconnect manually. If you’re implementing an object, this is normally done in the dispose function:

// Object A instance struct (or priv struct)
struct _A {
  B *b;
  gulong some_signal_id;
};

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();
  g_assert (self->some_signal_id == 0);
  self->b = b; // keep a pointer to b so dispose can disconnect the handler
  self->some_signal_id = g_signal_connect (b, "some-signal", (GCallback)some_signal_cb, self);
}

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;
  g_clear_signal_handler (&a->some_signal_id, a->b);
  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Here, g_clear_signal_handler() first checks whether a->some_signal_id is 0. If not, it disconnects the handler and sets a->some_signal_id to 0. Setting your stored signal ID to 0 and checking whether it is 0 before disconnecting is important because dispose may run multiple times to break reference cycles. Attempting to disconnect the signal multiple times is another common programmer error!

Instead of calling g_clear_signal_handler(), you could equivalently write:

if (a->some_signal_id != 0) {
  g_signal_handler_disconnect (a->b, a->some_signal_id);
  a->some_signal_id = 0;
}

But writing that manually is no fun.

Yet another way to mess up would be to use the wrong integer type to store the signal ID, like guint instead of gulong.

There are other disconnect functions you can use to avoid the need to store the signal handler ID, like g_signal_handlers_disconnect_by_data(), but I’ve shown the most general case.

Sometimes, object implementation code will intentionally not disconnect signals if the programmer believes that the object that emits the signal will never outlive the object that is connecting to it. This assumption may usually be correct, but since GObjects are refcounted, they may be reffed in unexpected places, leading to use-after-free vulnerabilities if this assumption is ever incorrect. Your code will be safer and more robust if you disconnect always.

Mistake #2: Misuse of GSource Handler ID

Mistake #2 is basically the same as Mistake #1, but using GSource rather than signal handlers. For simplicity, my examples here will use the default main context, so I don’t have to show code to manually create, attach, and destroy the GSource. The default main context is what you’ll want to use if (a) you are writing application code, not library code, and (b) you want your callbacks to execute on the main thread. (If either (a) or (b) does not apply, then you need to carefully study GMainContext to ensure you do not mess up; see Mistake #4.)

Let’s use the example of a timeout source, although the same style of bug can happen with an idle source or any other type of source that you create:

static gboolean
my_timeout_cb (gpointer user_data)
{
  A *self = user_data;
  a_do_something (self);
  return G_SOURCE_REMOVE;
}

static void
some_method_of_a (A *self)
{
  g_timeout_add (42, (GSourceFunc)my_timeout_cb, self);
}

You’ve probably guessed the flaw already: if object A is destroyed before the timeout fires, then the call to a_do_something() is a use-after-free, just like when we were working with signals. The fix is very similar: store the source ID and remove it in dispose:

// Object A instance struct (or priv struct)
struct _A {
  guint my_timeout_id; /* GSource IDs are guint, unlike gulong signal handler IDs */
};

static gboolean
my_timeout_cb (gpointer user_data)
{
  A *self = user_data;
  a_do_something (self);
  self->my_timeout_id = 0;
  return G_SOURCE_REMOVE;
}

static void
some_method_of_a (A *self)
{
  g_assert (self->my_timeout_id == 0);
  self->my_timeout_id = g_timeout_add (42, (GSourceFunc)my_timeout_cb, self);
}

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;
  g_clear_handle_id (&a->my_timeout_id, g_source_remove);
  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Much better: now we’re not vulnerable to the use-after-free issue.

As before, we must be careful to ensure the source is removed exactly once. If we remove the source multiple times by mistake, GLib will usually emit a critical warning, but if you’re sufficiently unlucky you could remove an innocent unrelated source by mistake, leading to unpredictable misbehavior. This is why we need to write self->my_timeout_id = 0; before returning from the timeout function, and why we need to use g_clear_handle_id() instead of g_source_remove() on its own. Do not forget that dispose may run multiple times!

We also have to be careful to return G_SOURCE_REMOVE unless we want the callback to execute again, in which case we would return G_SOURCE_CONTINUE. Do not return TRUE or FALSE, as that is harder to read and will obscure your intent.
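For instance, here is a minimal sketch of a repeating source; a_update_clock() and tick_id are hypothetical, and the stored ID must still be cleared in dispose exactly as shown above:

static gboolean
a_tick_cb (gpointer user_data)
{
  A *self = user_data;
  a_update_clock (self);     /* hypothetical once-per-second work */
  return G_SOURCE_CONTINUE;  /* run again after the next interval */
}

/* in some method of A: */
self->tick_id = g_timeout_add_seconds (1, a_tick_cb, self);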

Mistake #3: Failure to Cancel Asynchronous Function

When working with asynchronous functions, you must think about when it should be canceled to prevent the callback from executing too late. Because passing a GCancellable to asynchronous function calls is optional, it’s common to see code omit the cancellable. Be suspicious when you see this. The cancellable is optional because sometimes it is really not needed, and when this is true, it would be annoying to require it. But omitting it will usually lead to use-after-free vulnerabilities. Here is an example of what not to do:

static void
something_finished_cb (GObject      *source_object,
                       GAsyncResult *result,
                       gpointer      user_data)
{
  A *self = user_data;
  B *b = (B *)source_object;
  g_autoptr (GError) error = NULL;

  if (!b_do_something_finish (b, result, &error)) {
    g_warning ("Failed to do something: %s", error->message);
    return;
  }

  a_do_something_else (self);
}

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();
  b_do_something_async (b, NULL /* cancellable */, something_finished_cb, self);
}

This should feel familiar by now. If we did not use A inside the callback, then we would have been able to safely omit the cancellable here without harmful effects. But instead, this example calls a_do_something_else(). If object A is destroyed before the asynchronous function completes, then the call to a_do_something_else() will be a use-after-free.

We can fix this by storing a cancellable in our instance struct, and canceling it in dispose:

// Object A instance struct (or priv struct)
struct _A {
  GCancellable *cancellable;
};

static void
something_finished_cb (GObject      *source_object,
                       GAsyncResult *result,
                       gpointer      user_data)
{
  B *b = (B *)source_object;
  A *self = user_data;
  g_autoptr (GError) error = NULL;

  if (!b_do_something_finish (b, result, &error)) {
    if (!g_error_matches (error, G_IO_ERROR, G_IO_ERROR_CANCELLED))
      g_warning ("Failed to do something: %s", error->message);
    return;
  }
  a_do_something_else (self);
}

static void
some_method_of_a (A *self)
{
  B *b = get_b_from_somewhere ();
  b_do_something_async (b, self->cancellable, something_finished_cb, self);
}

static void
a_init (A *self)
{
  self->cancellable = g_cancellable_new ();
}

static void
a_dispose (GObject *object)
{
  A *a = (A *)object;

  g_cancellable_cancel (a->cancellable);
  g_clear_object (&a->cancellable);

  G_OBJECT_CLASS (a_parent_class)->dispose (object);
}

Now the code is not vulnerable. Note that, since you usually do not want to print a warning message when the operation is canceled, there’s a new check for G_IO_ERROR_CANCELLED in the callback.

Update #1: I managed to mess up this example in the first version of my blog post. The example above is now correct, but what I wrote originally was:

if (!b_do_something_finish (b, result, &error) &&
    !g_error_matches (error, G_IO_ERROR, G_IO_ERROR_CANCELLED)) {
  g_warning ("Failed to do something: %s", error->message);
  return;
}
a_do_something_else (self);

Do you see the bug in this version? Cancellation causes the asynchronous function call to complete the next time the application returns control to the main context. It does not complete immediately. So when the function is canceled, A is already destroyed, the error will be G_IO_ERROR_CANCELLED, and we’ll skip the return and execute a_do_something_else() anyway, triggering the use-after-free that the example was intended to avoid. Yes, my attempt to show you how to avoid a use-after-free itself contained a use-after-free. You might decide this means I’m incompetent, or you might decide that it means it’s too hard to safely use unsafe languages. Or perhaps both!

Update #2: My original example had an unnecessary explicit check for NULL in the dispose function. Since g_cancellable_cancel() is NULL-safe, the dispose function will cancel only once even if dispose runs multiple times, because g_clear_object() will set a->cancellable = NULL. Thanks to Guido for suggesting this improvement in the comments.

Mistake #4: Incorrect Use of GMainContext in Library or Threaded Code

My fourth common mistake is really a catch-all mistake for the various other ways you can mess up with GMainContext. These errors can be very subtle and will cause functions to execute at unexpected times. Read this main context tutorial several times. Always think about which main context you want callbacks to be invoked on.

Library developers should pay special attention to the section “Using GMainContext in a Library.” It documents several security-relevant rules:

  • Never iterate a context created outside the library.
  • Always remove sources from a main context before dropping the library’s last reference to the context.
  • Always document which context each callback will be dispatched in.
  • Always store and explicitly use a specific GMainContext, even if it often points to some default context.
  • Always match pushes and pops of the thread-default main context.

If you fail to follow all of these rules, functions will be invoked at the wrong time, or on the wrong thread, or won’t be called at all. The tutorial covers GMainContext in much more detail than I possibly can here. Study it carefully. I like to review it every few years to refresh my knowledge. (Thanks Philip Withnall for writing it!)

Properly-designed libraries follow one of two conventions for which main context to invoke callbacks on: they may use the main context that was thread-default at the time the asynchronous operation started, or, for method calls on an object, they may use the main context that was thread-default at the time the object was created. Hopefully the library explicitly documents which convention it follows; if not, you must look at the source code to figure out how it works, which is not fun. If the library documentation does not indicate that it follows either convention, it is probably unsafe to use in threaded code.
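To make the first convention concrete, here is a hedged sketch of starting an asynchronous operation against a specific context, reusing the hypothetical b_do_something_async() from Mistake #3:

GMainContext *context = g_main_context_new ();
g_main_context_push_thread_default (context);
/* Work started here sees `context` as the thread-default main
 * context, so something_finished_cb will be dispatched on it. */
b_do_something_async (b, cancellable, something_finished_cb, self);
g_main_context_pop_thread_default (context);
/* The callback only runs once some thread iterates the context,
 * e.g. with g_main_context_iteration (context, TRUE). */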

Conclusion

All four mistakes are variants on the same pattern: failure to prevent a function from being unexpectedly called at the wrong time. The first three mistakes commonly lead to use-after-free vulnerabilities, which attackers abuse to hack users. The fourth mistake can cause more unpredictable effects. Sadly, today’s static analyzers are probably not smart enough to catch these mistakes. You could catch them if you write tests that trigger them and run them with an address sanitizer build, but that’s rarely realistic. In short, you need to be especially careful whenever you see signals, asynchronous function calls, or main context sources.

Sequel

The adventure continues in Common GLib Programming Errors, Part Two: Weak Pointers.

Happy birthday, Valgrind!

Posted by Mark J. Wielaard on July 27, 2022 11:01 AM
It has been twenty years today since Valgrind 1.0 was released.

Make sure to read Nicholas Nethercote’s Twenty years of Valgrind to learn about the early days, Valgrind “skins”, the influence Valgrind had on raising the bar when it comes to correctness for C and C++ programs, and why a hacker on the Rust programming language still uses Valgrind.

Upgrade of Copr servers

Posted by Fedora Infrastructure Status on July 27, 2022 08:00 AM

We're updating copr packages to new versions, which will bring new features and bugfixes.

This outage impacts the copr-frontend and the copr-backend.

Can you run a Minecraft Server on an Ampere Computing based System?

Posted by Adam Young on July 26, 2022 03:24 PM

Most Minecraft servers are run on x86_64 based hardware. Ampere AltraMax chips run aarch64, which is the vendor-neutral way of naming the ARM64 instruction set.

I just tested running the Mojang Minecraft server on an AltraMax-based system we have in our lab. It's no surprise that it runs: the server is a pure Java application and does not use any hardware-specific tools; all of the rendering happens on the client side. Here's how I set it up to run.

My system is running Fedora 36:

$ cat /etc/redhat-release 
Fedora release 36 (Thirty Six)

I installed the latest OpenJDK runtime.

sudo yum install java-17-openjdk.aarch64

And downloaded the latest version of the server from the Mojang site. This version will change and I suggest you start here to get the actual file name.

wget https://launcher.mojang.com/v1/objects/e00c4052dac1d59a1188b2aa9d5a87113aaf1122/server.jar

To run the server from the bash prompt

java -jar server.jar

When you run it, it will stop with a message telling you to change the eula.txt file:

[07:20:38] [ServerMain/INFO]: Building unoptimized datafixer
[07:20:39] [ServerMain/ERROR]: Failed to load properties from file: server.properties
[07:20:39] [ServerMain/WARN]: Failed to load eula.txt
[07:20:39] [ServerMain/INFO]: You need to agree to the EULA in order to run the server. Go to eula.txt for more info.

Edit that file and change false to true.

$ cat eula.txt 
#By changing the setting below to TRUE you are indicating your agreement to our EULA (https://aka.ms/MinecraftEULA).
#Mon Jul 25 07:20:39 PDT 2022
eula=true

Run the program again and you will see standard world setup happening.

$ java -jar server.jar 
Starting net.minecraft.server.Main
[07:31:29] [ServerMain/INFO]: Building unoptimized datafixer
[07:31:30] [ServerMain/INFO]: Environment: authHost='https://authserver.mojang.com', accountsHost='https://api.mojang.com', sessionHost='https://sessionserver.mojang.com', servicesHost='https://api.minecraftservices.com', name='PROD'
[07:31:32] [ServerMain/INFO]: Loaded 7 recipes
[07:31:32] [ServerMain/INFO]: Loaded 1179 advancements
[07:31:33] [Server thread/INFO]: Starting minecraft server version 1.19
[07:31:33] [Server thread/INFO]: Loading properties
[07:31:33] [Server thread/INFO]: Default game type: SURVIVAL
[07:31:33] [Server thread/INFO]: Generating keypair
[07:31:33] [Server thread/INFO]: Starting Minecraft server on *:25565
[07:31:33] [Server thread/INFO]: Using epoll channel type
[07:31:34] [Server thread/INFO]: Preparing level "world"
[07:31:34] [Server thread/INFO]: Preparing start region for dimension minecraft:overworld
[07:31:36] [Worker-Main-6/INFO]: Preparing spawn area: 0%
...
[07:31:38] [Worker-Main-27/INFO]: Preparing spawn area: 86%
[07:31:38] [Server thread/INFO]: Time elapsed: 4172 ms
[07:31:38] [Server thread/INFO]: Done (4.347s)! For help, type "help"

Type in help and you get copious output.

Connect from your minecraft client using the FQDN or IP address of the server.

Connected to a minecraft client, paused to take a screenshot.

This particular machine has 80 cores. I wonder how that would deal with the lag at Boatem?

Creation of the Nest/Flock/Hatch logos and Colúr!

Posted by Fedora Community Blog on July 26, 2022 08:00 AM

My name is Jess Chitas and I am an intern at Red Hat focusing on contributing to the Fedora community. Over the past couple of months, I have been fortunate enough to work on the new Nest, Flock, and Hatch logos as well as — Colúr — a new Fedora character! In this post, I document my journey from creating Colúr to revamping the Nest, Flock, and Hatch logos!

Creation of Colúr

Related ticket: https://pagure.io/design/issue/807

In creating Colúr (pronounced Co-loor), the main idea was to create a character that was some sort of bird. Nest, Flock and Hatch all revolve around birds, so we wanted to keep with the existing theme. When looking at the original Nest logo, I saw it used a dove, so doves were on the table. Fedora has the 4 F’s and the colors associated with them, so I wanted to work off of a colorful bird. I came across a bird called the pink-necked green pigeon and it was perfect!

Picture of a pink-necked green pigeon, by JJ Harrison (https://tiny.jjharrison.com.au/t/3rUZckpXLJTJuAko), own work, licensed CC BY-SA 4.0.

I collaborated with Madeline Peck for the design of Colúr, since I was quite stuck as to what art style to draw the bird in. I also took a LinkedIn Learning course to help me with character design. Below are Madeline’s sketches that were super helpful in creating Colúr:

Madeline Peck’s sketches of Colúr (various sketches of a pigeon)

The great thing that I love about Red Hat and the Fedora community is that collaboration is so important! It was so easy to collaborate with Madeline and Marie Nordin (the creator of the ticket) for feedback and new ideas! Below are my iterations of Colúr and how the design of him evolved into what he is now!

My initial sketches of Colúr: increasingly detailed sketches of a pink-necked green pigeon in profile.

I wasn’t too sure about my first iteration. It looked off to me. The bird was very static looking and didn’t really have any expression on their face. Originally, I wanted Colúr to be a motherly-type character. The idea of Mary Poppins came to Máirín Duffy and me! This is where the hat comes in. I also wanted to add a feather to the hat, but with Red Hat’s hat, I didn’t want to get into any complications. 

I went back to the drawing board and took another look at Madeline’s sketches of the birdies. I wanted to add boots to Colúr. I had the idea of giving him wellington boots, as I thought it was cute and it also reminded me of a book I had when I was younger about a baby duck. He always had these red wellies on and I thought it would be cute to add them to Colúr! Below is the final design for Colúr:

My final sketch of Colúr

I wanted to then experiment with different facial expressions and poses for Colúr as well. Here are some other iterations I created and finalized in Inkscape: 

Final variants of Colúr with different facial expressions and body movements

After creating Colúr, we had to name him. Máirín Duffy came up with the name Colúr as it resembles the English word ‘colour’, but Colúr is actually the Irish word for pigeon! So it was meant to be, really!

Creation of Nest/Flock/Hatch logos

Final Nest, Flock and Hatch logos

Here are the final iterations of the Nest, Flock and Hatch logos! But let’s rewind back to the start and how it all came about. 

When creating Colúr, we wanted to make a logo for Nest that incorporated a character as well as the Nest logotype. That is, until we came up with the idea to make a separate mascot shared across Nest, Flock, and Hatch, with separate logotypes for each event. We went with that new idea.

Initial Ideas

Altogether, I wanted to have breaks in the logo that were reminiscent of the break in the Fedora ‘F’. Next, I added curves to the ‘n’ in Nest and the ‘t’ in Hatch, as I thought they gave a softness to the logotypes and made them look more natural. I also wanted each logo to tell a story by having a symbol incorporated. I will elaborate on that more with each individual logo.

With Nest, I kind of knew what I wanted from the get go. I tested out the gaps and curves and maybe a way to attach some of the letters to each other as well. After many meetings, we decided that it would be cool to add a nest element. We picked the symbol of the tree: trees are homes to birds and where they keep their nests, similar to how Nest is a virtual event you can attend from home. Here is the initial creation process of the Nest logo:

Initial sketches of the Nest logo using Procreate

For the Flock logo, it was a similar design process with the breaks; however, for the symbolism in the logo, I decided to use feathers. The quote “birds of a feather flock together” came to mind when I thought of Flock and what it stands for, so I wanted to add feathers into the design. Here is the creative process of creating the Flock logo:

Initial sketches of the Flock logo using Procreate

As you can see from the thought process, I was initially thinking of creating the feathers. I also experimented with creating a feather-like design on the back of the ‘k’ but I wasn’t too keen on it. With feedback, I then added little tufts of fluff onto the feathers, as the first iterations looked like leaves, and I also inverted the feathers so they are blocked out with colour.

For the Hatch logo, there was much experimenting. The initial idea was to put a crack in the ‘a’ so that it looks like an egg. There was a lot of trial and error, but here is the thought process regarding the Hatch logo:

Initial sketches of the Hatch logo using Procreate

It evolved from having breaks in the ‘a’ to making the ‘a’ a solid colour to making the ‘a’ into an egg shape. Here is the final iteration we went with:

Final sketch of the Hatch logo using Procreate

Vectorising the Text

I used Inkscape to vectorise the text. I imported a screenshot of the logos and started tracing them with strokes. Using this method, and with feedback, I realised that it was not the best approach for the typeface. I had a workshop with Marie Nordin about logotypes and the different aspects of typography and it was very informative! Here is a before and after of the Hatch logo: first created with strokes, then using the Comfortaa base, converting the object to a path, and manipulating it that way:

Initial vectorisation of the Hatch logo using strokes in Inkscape

This was the first version of the logo. If you look closely, comparing it to the logo below, there are many imperfections. A big one is the two ‘h’s: in the first version they were not as round as they are in the final version, which was corrected using the actual ‘h’ from the Comfortaa typeface, turning it into a path and cutting out the spaces I needed in the logo. One thing I learned as well is that letters like ‘h’ and ‘n’ have a natural narrowing where the curve attaches to the stem, which is more pleasing to the eye, and I couldn’t really achieve that with the stroke method. The ‘c’ I created with the stroke was pretty uneven as well, so replacing it with Comfortaa’s ‘c’ was a better idea and looks smoother. The ‘t’ was shortened and made to look more uniform with Nest’s ‘t’, as in the beginning some details were slightly off and they did not match. The ‘a’ was straightened and refined too.

Refined Hatch logo after the workshop with Marie

Here are the final logos and all the other possibilities they could be used in:

Final iterations of the Nest logo, rendered using Inkscape
Final iterations of the Flock logo, rendered using Inkscape
Final iterations of the Hatch logo, rendered using Inkscape

Where to next?

These logos have been distributed and you will be able to see them on more projects in the near future! We already used the logo for the Hatch Cork event in Ireland and will use it in the upcoming Nest event! It will be used on all kinds of swag too, especially Colúr! I can’t wait to see where this logo goes and where it will be used!  

The post Creation of the Nest/Flock/Hatch logos and Colúr! appeared first on Fedora Community Blog.

Bored on Reddit and submitting funny coding comments

Posted by Jon Chiappetta on July 25, 2022 10:25 PM

So there was a Reddit thread today that was diverging into bad coding tests plus grade evaluation puns (C++) – so I tried to make a purposefully confusing C program behave in a way that a programmer wouldn’t exactly expect. After I wrote it, I had to put in some for-loop-printf debug statements just to confirm and demonstrate how the right-to-left value-lookup, delayed-increase, and index-assignment sequence works in C. It’s pretty neat to try and analyze how and why this is “working” lol!

#include <stdio.h>
int main()
{
    char A = 'A', B = 'B', C = 'C', D = 'D', E = 'E', F = 'F';
    char grades[256];
    grades[A++] = A++;
    grades[B++] = B++;
    grades[C++] = C++;
    grades[D++] = D++;
    grades[E++] = E++;
    grades[F++] = F++;
    printf("My grade is %c!\n",grades[C++]);
}

And basically this will print out “My grade is D!” – at least with the compiler I used; the reads and writes within each statement are unsequenced, so this is formally undefined behavior and another compiler could legitimately do something different. It’s fun to trace through how each line processes the value statement first and then the assignment index afterwards. In addition, each line affects the next statement below because of the increases that take place in between processing sides, so the future lookup index values are being changed along the way! 🙂
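For reference, here is a sketch of the kind of for-loop-printf debug statement mentioned above, meant to be dropped in just before the final printf. Assuming the right-to-left ordering observed here, each line stored its pre-increment value at index value+1, so only the slots ‘B’ through ‘G’ are populated:

/* Debug sketch: dump the slots the assignments above populated.
 * With the observed ordering, grades['B'..'G'] hold 'A'..'F'. */
for (int i = 'B'; i <= 'G'; i++)
    printf("grades[%c] = %c\n", i, grades[i]);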

no trust in black box ai

Posted by Frank Ch. Eigler on July 25, 2022 05:46 PM

I'm a software guy, and have been a while. I've had the pleasure of witnessing or studying many a software failure, and even causing a few. Comes with part of the job. When a software system fails, we open it up, take a look at how it works, make a patch, then close 'er up and release a new version. Done, more or less, usually. This is possible because the "how it works" part - the computer program - is generally available for inspection and modification. This is especially true in the free/open-source part of the industry, where all the program source code is available to end-users.

Enter the brave new world of neural-network or "machine learning" based AI - the new hotness in the last decade or so. All of the above goes out the window. There is nothing to open, because all there is is an array of numbers, millions and billions of them. There is no finding out how it works. All the developers know, roughly, is that after tremendous amounts of automated computation, the neural nets generally produce expected outputs for most inputs they tested. That means that if something goes wrong, there is usually no practical engineering fix, other than running a new batch of automated computation, hoping the new network learns by itself how to avoid the problematic case.

The result is something that looks like it works, but we don't really know how & why. What could go wrong?

Tesla autopilots are based on computer vision, which uses a bunch of neural networks that consume video data from several cameras around the car. Some researchers successfully fooled the neural nets to misinterpret speed limit signs by putting little stickers on real signs. We know there have been a bunch of autopilot crashes where the software was just not good enough, where road markings were weird or construction equipment confused the thing. Hey, don't worry, a later version probably corrects some of those problems, maybe!

And then how about this bit. Researchers proved mathematically that it is possible to deliberately mistrain a neural network to give it undetectable backdoors: undesired responses that the adversary can trigger on demand. The implications are grave: the chain of custody for artifacts of the neural-network training process needs to be pristine. Ken Thompson's classic Reflections on Trusting Trust paper is analogous for normal software - but at least normal software is inspectable.

Oh and how about some politics? That always helps! Have you heard of "bias in AI"? This is referring to the phenomenon whereby neural networks, computing accurately from their input data, can produce politically undesirable outputs. Very real inequalities in real life naturally show up, and golly gee howdy, they must be suppressed by the AI system operators, or else. So Google plays with its search engine to "diversify" its hits; OpenAI adds fake keywords for the same purpose; and many more examples are all around the economy. On the bright side, at least these technical measures tend to be crude and leave visible trauma in the computer system: more like an electro-shock treatment than relearning. As the political winds change, at least they can be removed. The fragility of such mechanisms is the exception that proves the rule of neural network unfixability. To an end-user, in either case, it's still opaque.

Finally, if you are just tired of sleeping, check this out. An image-synthesizing neural network you can play with has apparently invented its own monster. The monster was not in its training set but something new, and the network hallucinated a name for it: crungus. Check out the whole thread or the #crungus hashtag. While it is possible that the operator of this system planted this as an easter egg of sorts, akin to the researchers' "hidden backdoors" above, it could also be just an emergent genuine artifact of the giant neural network behind this service. There is literally no way to know how many such hidden concepts the thing has inside its billions of numbers - how many are real, how many are fake, how many are ... messages from the occult? This is unusable for serious purposes.

Normal software can and often does become incredibly complicated, requiring discipline and aptitude to develop and troubleshoot, and yeah things still go wrong. But neural networks represent a leap off the cliff: don't even try to engineer the thing. Putting such systems into roles of safety-critical judgement is a dereliction of duty. For what it's worth, symbolic AI systems like Cyc are explicitly engineered, but there is no free lunch: they are expensive and complicated.

Labeling a Linux Kernel RPM

Posted by Adam Young on July 25, 2022 04:13 PM

You can use the Kernel build system to make your own RPMs using the target:

make rpm-pkg

Before we build ours, we use a cached copy of our config, which we copy to .config, and then run

make -j$(nproc) oldconfig

as that builds only the modules we want (avoiding some that tend to break) and picks up any code we are doing in house.

I tend to run it with all the CPUs on my system:

time make -j$(nproc)  rpm-pkg

Last week, I was working with another developer to build some custom kernels to test different options. One thing his team wanted to test was what would happen if we switched the HZ (scheduler tick) value from 100 to 1000. Building a kernel with this option is fairly trivial: we use make menuconfig to set the value we want and rebuild. However, since the testing was going to be compared against a baseline, we wanted to clearly label our kernels. To do so, we modified the top-level Makefile.

[ayoung@eng14sys-r111 linux]$ git diff Makefile
diff --git a/Makefile b/Makefile
index 7d5b0bfe7960..9e2d339f97d2 100644
--- a/Makefile
+++ b/Makefile
@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION =
+EXTRAVERSION =-1000-hz
 NAME = Superb Owl

With this change, we ended up with RPM files like this one:

kernel-5.18.0_1000_hz+-1.aarch64.rpm

(Note that the hyphens from EXTRAVERSION show up as underscores: RPM does not allow hyphens in a package’s Version field, so the kernel’s rpm-pkg packaging converts them.)

Episode 333 – Open Source is unfair

Posted by Josh Bressers on July 25, 2022 12:01 AM

Josh and Kurt talk about Microsoft creating a policy of not allowing anyone to charge for open source in their app store. This policy was walked back quickly, but it raises some questions about how fair or unfair open source really is. It’s mostly unfair to developers if you look at the big picture.

Listen: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_333_Open_Source_is_unfair.mp3

Show Notes

Friday’s Fedora Facts: 2022-29

Posted by Fedora Community Blog on July 22, 2022 08:57 PM

Here’s your weekly Fedora report. Read what happened this week and what’s coming up. Your contributions are welcome (see the end of the post)!

Announcements

CfPs

Conference | Location | Date | CfP
CentOS Dojo at DevConf US | Boston, MA, US | 17 Aug | closes 22 Jul
dotnetdaysro | Iaşi, RO | 20–22 Oct | closes 31 Jul
SeaGL | virtual | 4–5 Nov | closes 3 Aug
Hardwear.io Netherlands | The Hague, NL | 24–28 Oct | closes 15 Aug
PyGeekle | virtual | 6–7 Sep | closes 24 Aug
EuroRust | Berlin, DE and virtual | 13–14 Oct | closes 28 Aug
Denver Dev Day | Denver, CO, US | 20–21 Oct | closes 2 Sep
Vue.js Live | London, UK and virtual | 28, 31 Oct | closes 5 Sep

Help wanted

Prioritized Bugs

See the Prioritized Bugs documentation for information on the process, including how to nominate bugs.

Bug ID | Component | Status
2079833 | cmake | NEW

Meetings & events

Fedora Hatches

Hatches are local, in-person events to augment Nest With Fedora. Here are the upcoming Hatch events.

Date | Location
28 Jul | Mexico City, MX
11 Aug | Brno, CZ

As a reminder, Nest With Fedora registration is open. We’ll see you online 4–6 August.

Releases

Release | open bugs
F35 | 4164
F36 | 3318
Rawhide | 7360

Fedora Linux 37

Schedule

Below are some upcoming schedule dates. See the schedule website for the full schedule.

  • 2022-07-26 — Software string freeze
  • 2022-08-09 — F37 branches from Rawhide, Change complete (testable) deadline

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | Deferred
Deprecate openssl1.1 package | System-Wide | Approved
Stratis 3.2.0 | Self-Contained | Approved
Firefox Langpacks Subpackage | System-Wide | Approved
GNU Toolchain Update (glibc 2.36, binutils 2.38) | System-Wide | Approved
LXQt 1.1.0 | Self-Contained | Approved
Officially support Raspberry Pi 4 | Self-Contained | Approved
Preset All Systemd Units on First Boot | Self-Contained | FESCo #2835
Public release of the Anaconda Web UI preview image | Self-Contained | Announced
BIND 9.18 | Self-Contained | Announced
SELinux Parallel Autorelabel | Self-Contained | Announced
ibus-libpinyin 1.13 | Self-Contained | Announced
z13 as the Baseline for IBM Z Hardware | Self-Contained | Announced
Haskell GHC 8.10.7 & Stackage LTS 18.28 | Self-Contained | Announced
Emacs 28 | Self-Contained | Announced
Mumble 1.4 | Self-Contained | Announced

Fedora Linux 38

Changes

The table below lists proposed Changes. See the ChangeSet page or Bugzilla for information on approved Changes.

Proposal | Type | Status
Add -fno-omit-frame-pointer to default compilation flags | System-Wide | FESCo #2817

Contributing

Have something you want included? You can file an issue or submit a pull request in the fedora-pgm/pgm_communication repo.

The post Friday’s Fedora Facts: 2022-29 appeared first on Fedora Community Blog.

Toolbx — bypassing the immutability of OCI containers

Posted by Debarshi Ray on July 22, 2022 04:48 PM

This is a deep dive into some of the technical details of Toolbx. I find myself regularly explaining them to various people, so I thought that I should write them down. Feel free to read and comment, or you can also happily ignore it.

The problem

OCI containers are famous for being immutable. Once a container has been created with podman create, its attributes can’t be changed anymore: the bind mounts, the environment variables, the namespaces being used, and all the other attributes that can be specified via options to the podman create command. This means that once there’s a Toolbx, it wouldn’t be possible to give it access to a new set of files from the host if the need arose. The Toolbx would have to be deleted and re-created with access to the new paths.

This is a problem, because a Toolbx is where the user sets up her development and troubleshooting environment. Re-creating a Toolbx might mean reinstalling a number of different packages, tweaking configuration files, redeploying various artifacts and so on. Having to repeat all that in the middle of a long hacking session, just because the container’s attributes need to be tweaked, can be annoying.

This is unlike Flatpak containers, where it’s possible to override the permissions of a Flatpak either persistently through flatpak override or temporarily during flatpak run.

Secondly, as the Toolbx code evolves, we want to be able to transparently update existing Toolbxes to enable new features and fix bugs. It would be a real drag if users had to consciously re-create their containers.

The solution


Toolbx bypasses this by using a special entry point for the container. Those inquisitive types who have run podman inspect on a Toolbx container might have noticed that the toolbox executable itself is the container’s entry point.

$ podman inspect --format "{{.Config.Cmd}}" --type container fedora-toolbox-36
toolbox --log-level debug debug init-container ...

This means that when Toolbx starts a container using podman start, the toolbox init-container command gets run as the first process inside the container. Only after this has run does the user’s interactive shell get spawned.

Instead of setting up the container entirely through podman create, Toolbx tries to use this reflexive entry point as much as possible. For example, Toolbx doesn’t use podman create --volume /tmp:/tmp to give access to the host’s /tmp inside the container. It bind mounts the entire root filesystem from the host at /run/host in the container with podman create --volume /:/run/host. Then, later when the container is started, toolbox init-container recursively bind mounts the container’s /run/host/tmp to /tmp. Since the container has its own mount namespace, the /run/host and /tmp bind mounts are neatly hidden away from the host.
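Conceptually, what toolbox init-container does for each such location boils down to a recursive bind mount executed inside the container’s mount namespace. Here is a minimal C sketch of the idea (the real Toolbx implementation is in Go; this ignores its logging, error recovery, and option handling):

#include <stdio.h>
#include <sys/mount.h>

int
main (void)
{
  /* Recursively bind mount the host's /tmp, visible at /run/host/tmp,
   * over the container's /tmp. Because this runs in the container's
   * own mount namespace, the host never sees the new mount. */
  if (mount ("/run/host/tmp", "/tmp", NULL, MS_BIND | MS_REC, NULL) != 0) {
    perror ("mount");
    return 1;
  }
  return 0;
}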

Therefore, if in future additional host locations need to be exposed within the Toolbx, those can be added to toolbox init-container, and once the user restarts the container after updating the toolbox executable, the new locations will show up inside the existing container. The same applies if the mount parameters of an existing location need to be tweaked, or if a host location needs to be removed from the container.

This is not restricted to just bind mounts from the host. The same approach with toolbox init-container is used to configure as many different aspects of the container as possible. For example, setting up users, keeping the timezone and DNS configuration synchronized with the host, and so on.

Further details

One might wonder how a Toolbx container manages to have a toolbox executable inside it, especially since the toolbox package is not installed within the container. It is achieved by bind mounting the toolbox executable invoked by the user on the host to /usr/bin/toolbox inside the container.

This has some advantages.

There is always only one version of the toolbox executable that’s involved — the one that’s on the host. This means that the exact invocation of toolbox init-container, which is baked into the Toolbx and shows up in podman inspect, is the only interface that needs to be kept stable as the Toolbx code evolves. As long as toolbox init-container can be invoked with that specific command line, everything else can be changed because it’s the same executable on both the host and inside the container.

If the container had a separate toolbox package in it, then the user might have to separately update another executable to get the expected results, and we would have to ensure that different mismatched versions of the executable can work with each other across the host and the container. With a growing number of containers, the former would be a nightmare for the user, while the latter would be almost impossible to test.

Finally, having only one version of the toolbox executable makes it a lot easier for users to file bug reports. There’s only one version to report, not several spread across different environments.

This leads to another problem

Once you let this sink in, you might realize that bind mounting the toolbox executable from the host into the Toolbx means that an executable from a newer or different operating system might be running against an older or different run-time environment inside the container. For example, an executable from a Fedora 36 host might be running inside a Fedora 35 Toolbx, or one from an Arch Linux host inside an Ubuntu container.

This is very unusual. We only expect executables from an older version of an OS to keep working on newer versions of the same OS, but never the other way round, and definitely not across different OSes.

I will leave you with that thought and let you puzzle over it, because it will be the topic of a future post.